From: Kairui Song
Date: Fri, 05 Dec 2025 03:29:12 +0800
Subject: [PATCH v4 04/19] mm, swap: always try to free swap cache for SWP_SYNCHRONOUS_IO devices
Message-Id: <20251205-swap-table-p2-v4-4-cb7e28a26a40@tencent.com>
References: <20251205-swap-table-p2-v4-0-cb7e28a26a40@tencent.com>
In-Reply-To: <20251205-swap-table-p2-v4-0-cb7e28a26a40@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Baoquan He, Barry Song, Chris Li, Nhat Pham, Yosry Ahmed,
    David Hildenbrand, Johannes Weiner, Youngjun Park, Hugh Dickins,
    Baolin Wang, Ying Huang, Kemeng Shi, Lorenzo Stoakes,
    "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

Now SWP_SYNCHRONOUS_IO devices also use the swap cache. One side effect is
that a folio may stay in the swap cache for a longer time due to lazy
freeing (vm_swap_full()). This can help save some CPU / IO if folios are
swapped out again very frequently right after swapin, hence improving
performance. But the long pinning of swap slots also significantly
increases the fragmentation rate of the swap device, and since all in-tree
SWP_SYNCHRONOUS_IO devices are currently RAM disks, it also causes the
backing memory to be pinned, increasing memory pressure.

So drop the swap cache immediately for SWP_SYNCHRONOUS_IO devices after
swapin finishes. The swap cache has served its role as a synchronization
layer to prevent any parallel swap-in from wasting CPU or memory
allocation, and the redundant IO is not a major concern for
SWP_SYNCHRONOUS_IO devices.

Worth noting, without this patch, this series so far can provide a ~30%
performance gain for certain workloads like MySQL or kernel compilation,
but causes significant regression or OOM under extreme global memory
pressure. With this patch, we still get a nice performance gain for most
workloads, without introducing any observable regression.

This is a hint that further optimization can be done based on the new
unified swapin with swap cache, but for now, just keep the behaviour
consistent with before.

Signed-off-by: Kairui Song
---
 mm/memory.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 41b690eb8c00..9fb2032772f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4354,12 +4354,26 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	return 0;
 }
 
-static inline bool should_try_to_free_swap(struct folio *folio,
+/*
+ * Check if we should call folio_free_swap to free the swap cache.
+ * folio_free_swap only frees the swap cache to release the slot if swap
+ * count is zero, so we don't need to check the swap count here.
+ */
+static inline bool should_try_to_free_swap(struct swap_info_struct *si,
+					   struct folio *folio,
 					   struct vm_area_struct *vma,
 					   unsigned int fault_flags)
 {
 	if (!folio_test_swapcache(folio))
 		return false;
+	/*
+	 * Always try to free swap cache for SWP_SYNCHRONOUS_IO devices. Swap
+	 * cache can help save some IO or memory overhead, but these devices
+	 * are fast, and meanwhile, swap cache pinning the slot deferring the
+	 * release of metadata or fragmentation is a more critical issue.
+	 */
+	if (data_race(si->flags & SWP_SYNCHRONOUS_IO))
+		return true;
 	if (mem_cgroup_swap_full(folio) || (vma->vm_flags & VM_LOCKED) ||
 	    folio_test_mlocked(folio))
 		return true;
@@ -4931,7 +4945,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * yet.
 	 */
 	swap_free_nr(entry, nr_pages);
-	if (should_try_to_free_swap(folio, vma, vmf->flags))
+	if (should_try_to_free_swap(si, folio, vma, vmf->flags))
 		folio_free_swap(folio);
 
 	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
-- 
2.52.0
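
As a quick illustration of the decision order the first hunk introduces,
here is a small self-contained C sketch. The struct, flag value, and
helper names below are simplified stand-ins invented for the example, not
the real mm/ internals; the diff above is the authoritative change.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in flag bit; the real SWP_SYNCHRONOUS_IO lives in the kernel headers. */
#define DEMO_SWP_SYNCHRONOUS_IO	(1UL << 0)

struct demo_swap_info {
	unsigned long flags;
};

struct demo_folio {
	bool in_swapcache;	/* folio_test_swapcache() stand-in */
	bool swap_full;		/* mem_cgroup_swap_full() stand-in */
	bool mlocked;		/* VM_LOCKED / folio_test_mlocked() stand-in */
};

/*
 * Mirrors the new check order: bail out if the folio is not in the swap
 * cache, free unconditionally for synchronous (RAM-backed) devices, and
 * otherwise keep the old lazy behaviour (free only under pressure or for
 * mlocked folios).
 */
static bool demo_should_try_to_free_swap(const struct demo_swap_info *si,
					 const struct demo_folio *folio)
{
	if (!folio->in_swapcache)
		return false;
	if (si->flags & DEMO_SWP_SYNCHRONOUS_IO)
		return true;
	return folio->swap_full || folio->mlocked;
}

int main(void)
{
	struct demo_swap_info zram_like = { .flags = DEMO_SWP_SYNCHRONOUS_IO };
	struct demo_swap_info disk_like = { .flags = 0 };
	struct demo_folio folio = { .in_swapcache = true };

	printf("sync device  -> free swap cache: %d\n",
	       demo_should_try_to_free_swap(&zram_like, &folio));
	printf("async device -> free swap cache: %d\n",
	       demo_should_try_to_free_swap(&disk_like, &folio));
	return 0;
}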