From nobody Sat Feb  7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:25 +0800
Subject: [PATCH v2 01/12] mm, swap: protect si->swap_file properly and use
 as a mount indicator
Message-Id: <20260128-swap-table-p3-v2-1-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

/proc/swaps uses si->swap_map as the indicator to check whether a swap
device is mounted. swap_map will be removed soon, so change it to use
si->swap_file instead, because:

- si->swap_file is exactly the only dynamic content that /proc/swaps is
  interested in.
  Previously, it was checking si->swap_map just to ensure si->swap_file
  is available. si->swap_map is set under mutex protection, and after
  si->swap_file is set, so having si->swap_map set guarantees
  si->swap_file is set.

- Checking si->flags doesn't work here. SWP_WRITEOK is cleared during
  swapoff, but /proc/swaps is supposed to show the device during swapoff
  too, to report the swapoff progress. And SWP_USED is set even if the
  device hasn't been properly set up. We could add another flag, but it
  is easier to just check si->swap_file directly.

So protect the setting of si->swap_file with the mutex, and set
si->swap_file only when the swap device is truly enabled. /proc/swaps is
only interested in si->swap_file and a few static fields; only
si->swap_file needs protection, and reading the other static fields is
always fine.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swapfile.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 7b055f15d705..521f7713a7c3 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -110,6 +110,7 @@ struct swap_info_struct *swap_info[MAX_SWAPFILES];
 
 static struct kmem_cache *swap_table_cachep;
 
+/* Protects si->swap_file for /proc/swaps usage */
 static DEFINE_MUTEX(swapon_mutex);
 
 static DECLARE_WAIT_QUEUE_HEAD(proc_poll_wait);
@@ -2521,7 +2522,8 @@ static void drain_mmlist(void)
 /*
  * Free all of a swapdev's extent information
  */
-static void destroy_swap_extents(struct swap_info_struct *sis)
+static void destroy_swap_extents(struct swap_info_struct *sis,
+				 struct file *swap_file)
 {
 	while (!RB_EMPTY_ROOT(&sis->swap_extent_root)) {
 		struct rb_node *rb = sis->swap_extent_root.rb_node;
@@ -2532,7 +2534,6 @@ static void destroy_swap_extents(struct swap_info_struct *sis)
 	}
 
 	if (sis->flags & SWP_ACTIVATED) {
-		struct file *swap_file = sis->swap_file;
 		struct address_space *mapping = swap_file->f_mapping;
 
 		sis->flags &= ~SWP_ACTIVATED;
@@ -2615,9 +2616,9 @@ EXPORT_SYMBOL_GPL(add_swap_extent);
  * Typically it is in the 1-4 megabyte range. So we can have hundreds of
  * extents in the rbtree. - akpm.
  */
-static int setup_swap_extents(struct swap_info_struct *sis, sector_t *span)
+static int setup_swap_extents(struct swap_info_struct *sis,
+			      struct file *swap_file, sector_t *span)
 {
-	struct file *swap_file = sis->swap_file;
 	struct address_space *mapping = swap_file->f_mapping;
 	struct inode *inode = mapping->host;
 	int ret;
@@ -2635,7 +2636,7 @@ static int setup_swap_extents(struct swap_info_struct *sis, sector_t *span)
 	sis->flags |= SWP_ACTIVATED;
 	if ((sis->flags & SWP_FS_OPS) && sio_pool_init() != 0) {
-		destroy_swap_extents(sis);
+		destroy_swap_extents(sis, swap_file);
 		return -ENOMEM;
 	}
 	return ret;
@@ -2851,7 +2852,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	flush_work(&p->reclaim_work);
 	flush_percpu_swap_cluster(p);
 
-	destroy_swap_extents(p);
+	destroy_swap_extents(p, p->swap_file);
 	if (p->flags & SWP_CONTINUED)
 		free_swap_count_continuations(p);
 
@@ -2941,7 +2942,7 @@ static void *swap_start(struct seq_file *swap, loff_t *pos)
 		return SEQ_START_TOKEN;
 
 	for (type = 0; (si = swap_type_to_info(type)); type++) {
-		if (!(si->flags & SWP_USED) || !si->swap_map)
+		if (!(si->swap_file))
 			continue;
 		if (!--l)
 			return si;
@@ -2962,7 +2963,7 @@ static void *swap_next(struct seq_file *swap, void *v, loff_t *pos)
 
 	++(*pos);
 	for (; (si = swap_type_to_info(type)); type++) {
-		if (!(si->flags & SWP_USED) || !si->swap_map)
+		if (!(si->swap_file))
 			continue;
 		return si;
 	}
@@ -3379,7 +3380,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		goto bad_swap;
 	}
 
-	si->swap_file = swap_file;
 	mapping = swap_file->f_mapping;
 	dentry = swap_file->f_path.dentry;
 	inode = mapping->host;
@@ -3429,7 +3429,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 
 	si->max = maxpages;
 	si->pages = maxpages - 1;
-	nr_extents = setup_swap_extents(si, &span);
+	nr_extents = setup_swap_extents(si, swap_file, &span);
 	if (nr_extents < 0) {
 		error = nr_extents;
 		goto bad_swap_unlock_inode;
@@ -3538,6 +3538,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	prio = DEF_SWAP_PRIO;
 	if (swap_flags & SWAP_FLAG_PREFER)
 		prio = swap_flags & SWAP_FLAG_PRIO_MASK;
+
+	si->swap_file = swap_file;
 	enable_swap_info(si, prio, swap_map, cluster_info, zeromap);
 
 	pr_info("Adding %uk swap on %s. Priority:%d extents:%d across:%lluk %s%s%s%s\n",
@@ -3562,10 +3564,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	kfree(si->global_cluster);
 	si->global_cluster = NULL;
 	inode = NULL;
-	destroy_swap_extents(si);
+	destroy_swap_extents(si, swap_file);
 	swap_cgroup_swapoff(si->type);
 	spin_lock(&swap_lock);
-	si->swap_file = NULL;
 	si->flags = 0;
 	spin_unlock(&swap_lock);
 	vfree(swap_map);
-- 
2.52.0

From nobody Sat Feb  7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:26 +0800
Subject: [PATCH v2 02/12] mm, swap: clean up swapon process and locking
Message-Id: <20260128-swap-table-p3-v2-2-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

Slightly clean up the swapon process. Add comments about what swap_lock
protects, introduce and rename the helpers that wrap the swap_map and
cluster_info setup, and do that setup outside of swap_lock.

The lock protection is not needed for the swap_map and cluster_info
setup, because all swap users must either hold the percpu ref or hold a
stable allocated swap entry (e.g., by locking a folio in the swap cache)
before accessing the device. So before the swap device is exposed by
enable_swap_info, nothing can use the device's map or cluster info, and
we are safe to allocate and set up the swap data freely first, then
expose the swap device and set the SWP_WRITEOK flag.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swapfile.c | 87 +++++++++++++++++++++++++++++++---------------------------
 1 file changed, 48 insertions(+), 39 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 521f7713a7c3..53ce222c3aba 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -65,6 +65,13 @@ static void move_cluster(struct swap_info_struct *si,
 			 struct swap_cluster_info *ci, struct list_head *list,
 			 enum swap_cluster_flags new_flags);
 
+/*
+ * Protects the swap_info array, and the SWP_USED flag. swap_info contains
+ * lazily allocated & freed swap device info structs, and SWP_USED indicates
+ * which device is used; ~SWP_USED devices can be reused.
+ *
+ * Also protects swap_active_head, total_swap_pages, and the SWP_WRITEOK flag.
+ */
 static DEFINE_SPINLOCK(swap_lock);
 static unsigned int nr_swapfiles;
 atomic_long_t nr_swap_pages;
@@ -2646,8 +2653,6 @@ static int setup_swap_extents(struct swap_info_struct *sis,
 }
 
 static void setup_swap_info(struct swap_info_struct *si, int prio,
-			    unsigned char *swap_map,
-			    struct swap_cluster_info *cluster_info,
 			    unsigned long *zeromap)
 {
 	si->prio = prio;
@@ -2657,8 +2662,6 @@ static void setup_swap_info(struct swap_info_struct *si, int prio,
 	 */
 	si->list.prio = -si->prio;
 	si->avail_list.prio = -si->prio;
-	si->swap_map = swap_map;
-	si->cluster_info = cluster_info;
 	si->zeromap = zeromap;
 }
 
@@ -2676,13 +2679,11 @@ static void _enable_swap_info(struct swap_info_struct *si)
 }
 
 static void enable_swap_info(struct swap_info_struct *si, int prio,
-			     unsigned char *swap_map,
-			     struct swap_cluster_info *cluster_info,
-			     unsigned long *zeromap)
+			     unsigned long *zeromap)
 {
 	spin_lock(&swap_lock);
 	spin_lock(&si->lock);
-	setup_swap_info(si, prio, swap_map, cluster_info, zeromap);
+	setup_swap_info(si, prio, zeromap);
 	spin_unlock(&si->lock);
 	spin_unlock(&swap_lock);
 	/*
@@ -2700,7 +2701,7 @@ static void reinsert_swap_info(struct swap_info_struct *si)
 {
 	spin_lock(&swap_lock);
 	spin_lock(&si->lock);
-	setup_swap_info(si, si->prio, si->swap_map, si->cluster_info, si->zeromap);
+	setup_swap_info(si, si->prio, si->zeromap);
 	_enable_swap_info(si);
 	spin_unlock(&si->lock);
 	spin_unlock(&swap_lock);
@@ -2724,8 +2725,8 @@ static void wait_for_allocation(struct swap_info_struct *si)
 	}
 }
 
-static void free_cluster_info(struct swap_cluster_info *cluster_info,
-			      unsigned long maxpages)
+static void free_swap_cluster_info(struct swap_cluster_info *cluster_info,
+				   unsigned long maxpages)
 {
 	struct swap_cluster_info *ci;
 	int i, nr_clusters = DIV_ROUND_UP(maxpages, SWAPFILE_CLUSTER);
@@ -2883,7 +2884,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	p->global_cluster = NULL;
 	vfree(swap_map);
 	kvfree(zeromap);
-	free_cluster_info(cluster_info, maxpages);
+	free_swap_cluster_info(cluster_info, maxpages);
 	/* Destroy swap account information */
 	swap_cgroup_swapoff(p->type);
 
@@ -3232,10 +3233,15 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
 
 static int setup_swap_map(struct swap_info_struct *si,
 			  union swap_header *swap_header,
-			  unsigned char *swap_map,
 			  unsigned long maxpages)
 {
 	unsigned long i;
+	unsigned char *swap_map;
+
+	swap_map = vzalloc(maxpages);
+	si->swap_map = swap_map;
+	if (!swap_map)
+		return -ENOMEM;
 
 	swap_map[0] = SWAP_MAP_BAD; /* omit header page */
 	for (i = 0; i < swap_header->info.nr_badpages; i++) {
@@ -3256,9 +3262,9 @@ static int setup_swap_map(struct swap_info_struct *si,
 	return 0;
 }
 
-static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
-						union swap_header *swap_header,
-						unsigned long maxpages)
+static int setup_swap_clusters_info(struct swap_info_struct *si,
+				    union swap_header *swap_header,
+				    unsigned long maxpages)
 {
 	unsigned long nr_clusters = DIV_ROUND_UP(maxpages, SWAPFILE_CLUSTER);
 	struct swap_cluster_info *cluster_info;
@@ -3328,10 +3334,11 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
 		}
 	}
 
-	return cluster_info;
+	si->cluster_info = cluster_info;
+	return 0;
 err:
-	free_cluster_info(cluster_info, maxpages);
-	return ERR_PTR(err);
+	free_swap_cluster_info(cluster_info, maxpages);
+	return err;
 }
 
 SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
@@ -3347,9 +3354,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	int nr_extents;
 	sector_t span;
 	unsigned long maxpages;
-	unsigned char *swap_map = NULL;
 	unsigned long *zeromap = NULL;
-	struct swap_cluster_info *cluster_info = NULL;
 	struct folio *folio = NULL;
 	struct inode *inode = NULL;
 	bool inced_nr_rotate_swap = false;
@@ -3360,6 +3365,11 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
+	/*
+	 * Allocate or reuse existing !SWP_USED swap_info. The returned
+	 * si will stay in a dying status, so nothing will access its content
+	 * until enable_swap_info resurrects its percpu ref and exposes it.
+	 */
 	si = alloc_swap_info();
 	if (IS_ERR(si))
 		return PTR_ERR(si);
@@ -3442,18 +3452,17 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 
 	maxpages = si->max;
 
-	/* OK, set up the swap map and apply the bad block list */
-	swap_map = vzalloc(maxpages);
-	if (!swap_map) {
-		error = -ENOMEM;
+	/* Set up the swap map and apply the bad block list */
+	error = setup_swap_map(si, swap_header, maxpages);
+	if (error)
 		goto bad_swap_unlock_inode;
-	}
 
-	error = swap_cgroup_swapon(si->type, maxpages);
+	/* Set up the swap cluster info */
+	error = setup_swap_clusters_info(si, swap_header, maxpages);
 	if (error)
 		goto bad_swap_unlock_inode;
 
-	error = setup_swap_map(si, swap_header, swap_map, maxpages);
+	error = swap_cgroup_swapon(si->type, maxpages);
 	if (error)
 		goto bad_swap_unlock_inode;
 
@@ -3481,13 +3490,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		inced_nr_rotate_swap = true;
 	}
 
-	cluster_info = setup_clusters(si, swap_header, maxpages);
-	if (IS_ERR(cluster_info)) {
-		error = PTR_ERR(cluster_info);
-		cluster_info = NULL;
-		goto bad_swap_unlock_inode;
-	}
-
 	if ((swap_flags & SWAP_FLAG_DISCARD) && si->bdev &&
 	    bdev_max_discard_sectors(si->bdev)) {
 		/*
@@ -3540,7 +3542,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		prio = swap_flags & SWAP_FLAG_PRIO_MASK;
 
 	si->swap_file = swap_file;
-	enable_swap_info(si, prio, swap_map, cluster_info, zeromap);
+
+	/* Sets SWP_WRITEOK, resurrect the percpu ref, expose the swap device */
+	enable_swap_info(si, prio, zeromap);
 
 	pr_info("Adding %uk swap on %s. Priority:%d extents:%d across:%lluk %s%s%s%s\n",
 		K(si->pages), name->name, si->prio, nr_extents,
@@ -3566,13 +3570,18 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	inode = NULL;
 	destroy_swap_extents(si, swap_file);
 	swap_cgroup_swapoff(si->type);
+	vfree(si->swap_map);
+	si->swap_map = NULL;
+	free_swap_cluster_info(si->cluster_info, si->max);
+	si->cluster_info = NULL;
+	/*
+	 * Clear the SWP_USED flag after all resources are freed so
+	 * alloc_swap_info can reuse this si safely.
+	 */
 	spin_lock(&swap_lock);
 	si->flags = 0;
 	spin_unlock(&swap_lock);
-	vfree(swap_map);
 	kvfree(zeromap);
-	if (cluster_info)
-		free_cluster_info(cluster_info, maxpages);
 	if (inced_nr_rotate_swap)
 		atomic_dec(&nr_rotate_swap);
 	if (swap_file)
-- 
2.52.0

From nobody Sat Feb  7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:27 +0800
Subject: [PATCH v2 03/12] mm, swap: remove redundant arguments and locking
 for enabling a device
Message-Id: <20260128-swap-table-p3-v2-3-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

There is no need to repeatedly pass the zeromap and priority values
around. zeromap is similar to cluster_info and swap_map: they are only
used once the swap device is exposed. And the prio value is currently
only read after it is set, and only used for the list insertion upon
expose and for swap info display.
Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swapfile.c | 48 ++++++++++++++++++------------------------------ 1 file changed, 18 insertions(+), 30 deletions(-) diff --git a/mm/swapfile.c b/mm/swapfile.c index 53ce222c3aba..80bf0ea098f6 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -2652,19 +2652,6 @@ static int setup_swap_extents(struct swap_info_struc= t *sis, return generic_swapfile_activate(sis, swap_file, span); } =20 -static void setup_swap_info(struct swap_info_struct *si, int prio, - unsigned long *zeromap) -{ - si->prio =3D prio; - /* - * the plist prio is negated because plist ordering is - * low-to-high, while swap ordering is high-to-low - */ - si->list.prio =3D -si->prio; - si->avail_list.prio =3D -si->prio; - si->zeromap =3D zeromap; -} - static void _enable_swap_info(struct swap_info_struct *si) { atomic_long_add(si->pages, &nr_swap_pages); @@ -2678,17 +2665,12 @@ static void _enable_swap_info(struct swap_info_stru= ct *si) add_to_avail_list(si, true); } =20 -static void enable_swap_info(struct swap_info_struct *si, int prio, - unsigned long *zeromap) +/* + * Called after the swap device is ready, resurrect its percpu ref, it's n= ow + * safe to reference it. Add it to the list to expose it to the allocator. + */ +static void enable_swap_info(struct swap_info_struct *si) { - spin_lock(&swap_lock); - spin_lock(&si->lock); - setup_swap_info(si, prio, zeromap); - spin_unlock(&si->lock); - spin_unlock(&swap_lock); - /* - * Finished initializing swap device, now it's safe to reference it. 
- */ percpu_ref_resurrect(&si->users); spin_lock(&swap_lock); spin_lock(&si->lock); @@ -2701,7 +2683,6 @@ static void reinsert_swap_info(struct swap_info_struc= t *si) { spin_lock(&swap_lock); spin_lock(&si->lock); - setup_swap_info(si, si->prio, si->zeromap); _enable_swap_info(si); spin_unlock(&si->lock); spin_unlock(&swap_lock); @@ -3354,7 +3335,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialf= ile, int, swap_flags) int nr_extents; sector_t span; unsigned long maxpages; - unsigned long *zeromap =3D NULL; struct folio *folio =3D NULL; struct inode *inode =3D NULL; bool inced_nr_rotate_swap =3D false; @@ -3470,9 +3450,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialf= ile, int, swap_flags) * Use kvmalloc_array instead of bitmap_zalloc as the allocation order mi= ght * be above MAX_PAGE_ORDER incase of a large swap file. */ - zeromap =3D kvmalloc_array(BITS_TO_LONGS(maxpages), sizeof(long), - GFP_KERNEL | __GFP_ZERO); - if (!zeromap) { + si->zeromap =3D kvmalloc_array(BITS_TO_LONGS(maxpages), sizeof(long), + GFP_KERNEL | __GFP_ZERO); + if (!si->zeromap) { error =3D -ENOMEM; goto bad_swap_unlock_inode; } @@ -3541,10 +3521,17 @@ SYSCALL_DEFINE2(swapon, const char __user *, specia= lfile, int, swap_flags) if (swap_flags & SWAP_FLAG_PREFER) prio =3D swap_flags & SWAP_FLAG_PRIO_MASK; =20 + /* + * The plist prio is negated because plist ordering is + * low-to-high, while swap ordering is high-to-low + */ + si->prio =3D prio; + si->list.prio =3D -si->prio; + si->avail_list.prio =3D -si->prio; si->swap_file =3D swap_file; =20 /* Sets SWP_WRITEOK, resurrect the percpu ref, expose the swap device */ - enable_swap_info(si, prio, zeromap); + enable_swap_info(si); =20 pr_info("Adding %uk swap on %s. 
Priority:%d extents:%d across:%lluk %s%s%s%s\n",
 		K(si->pages), name->name, si->prio, nr_extents,
@@ -3574,6 +3561,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	si->swap_map = NULL;
 	free_swap_cluster_info(si->cluster_info, si->max);
 	si->cluster_info = NULL;
+	kvfree(si->zeromap);
+	si->zeromap = NULL;
 	/*
 	 * Clear the SWP_USED flag after all resources are freed so
 	 * alloc_swap_info can reuse this si safely.
 	 */
@@ -3581,7 +3570,6 @@
 	spin_lock(&swap_lock);
 	si->flags = 0;
 	spin_unlock(&swap_lock);
-	kvfree(zeromap);
 	if (inced_nr_rotate_swap)
 		atomic_dec(&nr_rotate_swap);
 	if (swap_file)
--
2.52.0

From nobody Sat Feb 7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:28 +0800
Subject: [PATCH v2 04/12] mm, swap: consolidate bad slots setup and make it
 more robust
X-Mailing-List: linux-kernel@vger.kernel.org
Message-Id: <20260128-swap-table-p3-v2-4-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

In preparation for using the swap table to track bad slots directly,
consolidate the bad slot setup in one place: set the swap_map mark and
update the cluster counter together. While at it, provide more
informative logs and a more robust fallback if any bad slot info looks
incorrect.

This fixes a potential issue where a malformed swap file could leave a
cluster unusable after swapon, and it now emits a more verbose warning
for such a malformed swap file.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swapfile.c | 68 +++++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 80bf0ea098f6..df8b13eecab1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -743,13 +743,37 @@ static void relocate_cluster(struct swap_info_struct *si,
  * slot. The cluster will not be added to the free cluster list, and its
  * usage counter will be increased by 1. Only used for initialization.
  */
-static int swap_cluster_setup_bad_slot(struct swap_cluster_info *cluster_info,
-				       unsigned long offset)
+static int swap_cluster_setup_bad_slot(struct swap_info_struct *si,
+				       struct swap_cluster_info *cluster_info,
+				       unsigned int offset, bool mask)
 {
 	unsigned long idx = offset / SWAPFILE_CLUSTER;
 	struct swap_table *table;
 	struct swap_cluster_info *ci;
 
+	/* si->max may have been shrunk by swap_activate() */
+	if (offset >= si->max && !mask) {
+		pr_debug("Ignoring bad slot %u (max: %u)\n", offset, si->max);
+		return 0;
+	}
+	/*
+	 * Account it, skip header slot: si->pages is initialized as
+	 * si->max - 1.
+	 * Also skip the masking of the last cluster,
+	 * si->pages doesn't include that part.
+	 */
+	if (offset && !mask)
+		si->pages -= 1;
+	if (!si->pages) {
+		pr_warn("Empty swap-file\n");
+		return -EINVAL;
+	}
+	/* Check for duplicated bad swap slots. */
+	if (si->swap_map[offset]) {
+		pr_warn("Duplicated bad slot offset %d\n", offset);
+		return -EINVAL;
+	}
+
+	si->swap_map[offset] = SWAP_MAP_BAD;
 	ci = cluster_info + idx;
 	if (!ci->table) {
 		table = swap_table_alloc(GFP_KERNEL);
@@ -3216,30 +3240,12 @@ static int setup_swap_map(struct swap_info_struct *si,
 			  union swap_header *swap_header,
 			  unsigned long maxpages)
 {
-	unsigned long i;
 	unsigned char *swap_map;
 
 	swap_map = vzalloc(maxpages);
 	si->swap_map = swap_map;
 	if (!swap_map)
 		return -ENOMEM;
-
-	swap_map[0] = SWAP_MAP_BAD;	/* omit header page */
-	for (i = 0; i < swap_header->info.nr_badpages; i++) {
-		unsigned int page_nr = swap_header->info.badpages[i];
-		if (page_nr == 0 || page_nr > swap_header->info.last_page)
-			return -EINVAL;
-		if (page_nr < maxpages) {
-			swap_map[page_nr] = SWAP_MAP_BAD;
-			si->pages--;
-		}
-	}
-
-	if (!si->pages) {
-		pr_warn("Empty swap-file\n");
-		return -EINVAL;
-	}
-
 	return 0;
 }
 
@@ -3270,26 +3276,28 @@ static int setup_swap_clusters_info(struct swap_info_struct *si,
 	}
 
 	/*
-	 * Mark unusable pages as unavailable. The clusters aren't
-	 * marked free yet, so no list operations are involved yet.
-	 *
-	 * See setup_swap_map(): header page, bad pages,
-	 * and the EOF part of the last cluster.
+	 * Mark unusable pages (header page, bad pages, and the EOF part of
+	 * the last cluster) as unavailable. The clusters aren't marked free
+	 * yet, so no list operations are involved yet.
 	 */
-	err = swap_cluster_setup_bad_slot(cluster_info, 0);
+	err = swap_cluster_setup_bad_slot(si, cluster_info, 0, false);
 	if (err)
 		goto err;
 	for (i = 0; i < swap_header->info.nr_badpages; i++) {
 		unsigned int page_nr = swap_header->info.badpages[i];
 
-		if (page_nr >= maxpages)
-			continue;
-		err = swap_cluster_setup_bad_slot(cluster_info, page_nr);
+		if (!page_nr || page_nr > swap_header->info.last_page) {
+			pr_warn("Bad slot offset out of range: %d (last_page: %d)\n",
+				page_nr, swap_header->info.last_page);
+			err = -EINVAL;
+			goto err;
+		}
+		err = swap_cluster_setup_bad_slot(si, cluster_info, page_nr, false);
 		if (err)
 			goto err;
 	}
 	for (i = maxpages; i < round_up(maxpages, SWAPFILE_CLUSTER); i++) {
-		err = swap_cluster_setup_bad_slot(cluster_info, i);
+		err = swap_cluster_setup_bad_slot(si, cluster_info, i, true);
 		if (err)
 			goto err;
 	}
--
2.52.0

From nobody Sat Feb 7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:29 +0800
Subject: [PATCH v2 05/12] mm/workingset: leave highest bits empty for anon
 shadow
X-Mailing-List: linux-kernel@vger.kernel.org
Message-Id: <20260128-swap-table-p3-v2-5-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

The swap table entry will need 4 bits reserved for the swap count in the
shadow, so an anon shadow must keep its leading 4 bits zero.

This should be fine for the foreseeable future. Take a 52-bit physical
address space as an example: with 4K pages, there are at most 40 bits of
addressable pages. Currently, 36 bits are available for the eviction
timestamp (64 - 1 - 16 - 10 - 1, where the XA_VALUE marker takes 1 bit,
MEM_CGROUP_ID_SHIFT takes 16 bits, NODES_SHIFT takes <= 10 bits, and the
WORKINGSET flag takes 1 bit). So in the worst case, we previously had to
pack 40 bits of address into a 36-bit field using a 64K bucket
(bucket_order = 4). After this change, the anon bucket grows to 1M. That
should be fine: on machines that large, the working set size is far
larger than the bucket size.

For MGLRU's gen number tracking it is more than enough: MGLRU's gen
number (max_seq) increments much more slowly than the eviction counter
(nonresident_age). And after all, both the refault distance and the gen
distance are only hints that tolerate inaccuracy just fine. The 4 bits
can be shrunk to 3, or extended to a higher value, if needed later.
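[Editorial aside, not part of the patch: the bit-budget arithmetic above can be sanity-checked with a quick back-of-envelope script. The constants mirror the worst-case assumptions stated in the commit message, not any kernel header.]

```python
# Sanity check of the shadow bit budget described above (illustrative only).
BITS_PER_LONG = 64
XA_VALUE_BIT = 1          # xarray value tag
MEM_CGROUP_ID_SHIFT = 16
NODES_SHIFT = 10          # worst case, <= 10
WORKINGSET_SHIFT = 1
SWAP_COUNT_BITS = 4       # newly reserved for anon shadows

eviction_shift = XA_VALUE_BIT + MEM_CGROUP_ID_SHIFT + NODES_SHIFT + WORKINGSET_SHIFT
timestamp_bits = BITS_PER_LONG - eviction_shift          # file shadows: 36
timestamp_bits_anon = timestamp_bits - SWAP_COUNT_BITS   # anon shadows: 32

# 52-bit physical address space with 4K pages -> 40 bits of page addresses
addressable_page_bits = 52 - 12
page_size = 4096

def bucket_order(ts_bits):
    # Timestamps that don't fit are grouped into coarser buckets
    return max(0, addressable_page_bits - ts_bits)

print(timestamp_bits, bucket_order(timestamp_bits),
      page_size << bucket_order(timestamp_bits))        # 36 4 65536 (64K bucket)
print(timestamp_bits_anon, bucket_order(timestamp_bits_anon),
      page_size << bucket_order(timestamp_bits_anon))   # 32 8 1048576 (1M bucket)
```

This reproduces the 64K and 1M bucket sizes quoted in the commit message.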
Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swap_table.h |  4 ++++
 mm/workingset.c | 49 ++++++++++++++++++++++++++++++-------------------
 2 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/mm/swap_table.h b/mm/swap_table.h
index ea244a57a5b7..10e11d1f3b04 100644
--- a/mm/swap_table.h
+++ b/mm/swap_table.h
@@ -12,6 +12,7 @@ struct swap_table {
 };
 
 #define SWP_TABLE_USE_PAGE (sizeof(struct swap_table) == PAGE_SIZE)
+#define SWP_TB_COUNT_BITS 4
 
 /*
  * A swap table entry represents the status of a swap slot on a swap
@@ -22,6 +23,9 @@ struct swap_table {
  * (shadow), or NULL.
  */
 
+/* Macro for shadow offset calculation */
+#define SWAP_COUNT_SHIFT SWP_TB_COUNT_BITS
+
 /*
  * Helpers for casting one type of info into a swap table entry.
  */
diff --git a/mm/workingset.c b/mm/workingset.c
index 13422d304715..37a94979900f 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include "swap_table.h"
 #include "internal.h"
 
 /*
@@ -184,7 +185,9 @@
 #define EVICTION_SHIFT	((BITS_PER_LONG - BITS_PER_XA_VALUE) +	\
			 WORKINGSET_SHIFT + NODES_SHIFT + \
			 MEM_CGROUP_ID_SHIFT)
+#define EVICTION_SHIFT_ANON	(EVICTION_SHIFT + SWAP_COUNT_SHIFT)
 #define EVICTION_MASK	(~0UL >> EVICTION_SHIFT)
+#define EVICTION_MASK_ANON	(~0UL >> EVICTION_SHIFT_ANON)
 
 /*
  * Eviction timestamps need to be able to cover the full range of
@@ -194,12 +197,12 @@
  * that case, we have to sacrifice granularity for distance, and group
  * evictions into coarser buckets by shaving off lower timestamp bits.
  */
-static unsigned int bucket_order __read_mostly;
+static unsigned int bucket_order[ANON_AND_FILE] __read_mostly;
 
 static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction,
-			 bool workingset)
+			 bool workingset, bool file)
 {
-	eviction &= EVICTION_MASK;
+	eviction &= file ? EVICTION_MASK : EVICTION_MASK_ANON;
 	eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid;
 	eviction = (eviction << NODES_SHIFT) | pgdat->node_id;
 	eviction = (eviction << WORKINGSET_SHIFT) | workingset;
@@ -244,7 +247,8 @@ static void *lru_gen_eviction(struct folio *folio)
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	struct pglist_data *pgdat = folio_pgdat(folio);
 
-	BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH > BITS_PER_LONG - EVICTION_SHIFT);
+	BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH >
+		     BITS_PER_LONG - max(EVICTION_SHIFT, EVICTION_SHIFT_ANON));
 
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	lrugen = &lruvec->lrugen;
@@ -254,7 +258,7 @@ static void *lru_gen_eviction(struct folio *folio)
 	hist = lru_hist_from_seq(min_seq);
 	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
 
-	return pack_shadow(mem_cgroup_private_id(memcg), pgdat, token, workingset);
+	return pack_shadow(mem_cgroup_private_id(memcg), pgdat, token, workingset, type);
 }
 
 /*
@@ -262,7 +266,7 @@ static void *lru_gen_eviction(struct folio *folio)
  * Fills in @lruvec, @token, @workingset with the values unpacked from shadow.
  */
 static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
-				unsigned long *token, bool *workingset)
+				unsigned long *token, bool *workingset, bool file)
 {
 	int memcg_id;
 	unsigned long max_seq;
@@ -275,7 +279,7 @@ static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
 	*lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
 	max_seq = READ_ONCE((*lruvec)->lrugen.max_seq);
-	max_seq &= EVICTION_MASK >> LRU_REFS_WIDTH;
+	max_seq &= (file ? EVICTION_MASK : EVICTION_MASK_ANON) >> LRU_REFS_WIDTH;
 
 	return abs_diff(max_seq, *token >> LRU_REFS_WIDTH) < MAX_NR_GENS;
 }
@@ -293,7 +297,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 
 	rcu_read_lock();
 
-	recent = lru_gen_test_recent(shadow, &lruvec, &token, &workingset);
+	recent = lru_gen_test_recent(shadow, &lruvec, &token, &workingset, type);
 	if (lruvec != folio_lruvec(folio))
 		goto unlock;
 
@@ -331,7 +335,7 @@ static void *lru_gen_eviction(struct folio *folio)
 }
 
 static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
-				unsigned long *token, bool *workingset)
+				unsigned long *token, bool *workingset, bool file)
 {
 	return false;
 }
@@ -381,6 +385,7 @@ void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages)
 void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 {
 	struct pglist_data *pgdat = folio_pgdat(folio);
+	int file = folio_is_file_lru(folio);
 	unsigned long eviction;
 	struct lruvec *lruvec;
 	int memcgid;
@@ -397,10 +402,10 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 	/* XXX: target_memcg can be NULL, go through lruvec */
 	memcgid = mem_cgroup_private_id(lruvec_memcg(lruvec));
 	eviction = atomic_long_read(&lruvec->nonresident_age);
-	eviction >>= bucket_order;
+	eviction >>= bucket_order[file];
 	workingset_age_nonresident(lruvec, folio_nr_pages(folio));
 	return pack_shadow(memcgid, pgdat, eviction,
-			   folio_test_workingset(folio));
+			   folio_test_workingset(folio), file);
 }
 
 /**
@@ -431,14 +436,15 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 		bool recent;
 
 		rcu_read_lock();
-		recent = lru_gen_test_recent(shadow, &eviction_lruvec, &eviction, workingset);
+		recent = lru_gen_test_recent(shadow, &eviction_lruvec, &eviction,
+					     workingset, file);
 		rcu_read_unlock();
 		return recent;
 	}
 
 	rcu_read_lock();
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
-	eviction <<= bucket_order;
+	eviction <<= bucket_order[file];
 
 	/*
 	 * Look up the memcg associated with the stored ID. It might
@@ -495,7 +501,8 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 	 * longest time, so the occasional inappropriate activation
 	 * leading to pressure on the active list is not a problem.
 	 */
-	refault_distance = (refault - eviction) & EVICTION_MASK;
+	refault_distance = ((refault - eviction) &
+			    (file ? EVICTION_MASK : EVICTION_MASK_ANON));
 
 	/*
 	 * Compare the distance to the existing workingset size. We
@@ -780,8 +787,8 @@ static struct lock_class_key shadow_nodes_key;
 
 static int __init workingset_init(void)
 {
+	unsigned int timestamp_bits, timestamp_bits_anon;
 	struct shrinker *workingset_shadow_shrinker;
-	unsigned int timestamp_bits;
 	unsigned int max_order;
 	int ret = -ENOMEM;
 
@@ -794,11 +801,15 @@ static int __init workingset_init(void)
 	 * double the initial memory by using totalram_pages as-is.
 	 */
 	timestamp_bits = BITS_PER_LONG - EVICTION_SHIFT;
+	timestamp_bits_anon = BITS_PER_LONG - EVICTION_SHIFT_ANON;
 	max_order = fls_long(totalram_pages() - 1);
-	if (max_order > timestamp_bits)
-		bucket_order = max_order - timestamp_bits;
-	pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
-		timestamp_bits, max_order, bucket_order);
+	if (max_order > (BITS_PER_LONG - EVICTION_SHIFT))
+		bucket_order[WORKINGSET_FILE] = max_order - timestamp_bits;
+	if (max_order > timestamp_bits_anon)
+		bucket_order[WORKINGSET_ANON] = max_order - timestamp_bits_anon;
+	pr_info("workingset: timestamp_bits=%d (anon: %d) max_order=%d bucket_order=%u (anon: %d)\n",
+		timestamp_bits, timestamp_bits_anon, max_order,
+		bucket_order[WORKINGSET_FILE], bucket_order[WORKINGSET_ANON]);
 
 	workingset_shadow_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
						    SHRINKER_MEMCG_AWARE,
--
2.52.0

From nobody Sat Feb 7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:30 +0800
Subject: [PATCH v2 06/12] mm, swap: implement helpers for reserving data in
 the swap table
X-Mailing-List: linux-kernel@vger.kernel.org
Message-Id: <20260128-swap-table-p3-v2-6-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

To prepare for using the swap table as the unified swap layer, introduce
macros and helpers for storing multiple kinds of data in a swap table
entry. From now on, we store the PFN in the swap table to make space for
extra counting bits (SWAP_COUNT). Shadows are still stored as they are,
since SWAP_COUNT is not used yet.

Also, rename shadow_swp_to_tb to shadow_to_swp_tb. That's a spelling
error, not really worth a separate fix.
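[Editorial aside, not part of the patch: the entry encodings described above can be modeled in a few lines. The constant names mirror the new macros below, but the helper functions here are hypothetical simplifications, with a fixed 4-bit count field and 64-bit entries assumed.]

```python
# Hypothetical model of the swap table entry encoding (illustrative only).
BITS_PER_LONG = 64
SWP_TB_COUNT_BITS = 4
SWP_TB_COUNT_SHIFT = BITS_PER_LONG - SWP_TB_COUNT_BITS
SWP_TB_COUNT_MAX = (1 << SWP_TB_COUNT_BITS) - 1
SWP_TB_PFN_MARK = 0b10
SWP_TB_PFN_MARK_BITS = 2

def pfn_to_tb(pfn, count):
    # Pack count in the top nibble, PFN above the 0b10 type marker
    assert count <= SWP_TB_COUNT_MAX
    return (count << SWP_TB_COUNT_SHIFT) | (pfn << SWP_TB_PFN_MARK_BITS) | SWP_TB_PFN_MARK

def tb_is_pfn(tb):
    # The low two bits identify a cached (PFN) entry
    return (tb & ((1 << SWP_TB_PFN_MARK_BITS) - 1)) == SWP_TB_PFN_MARK

def tb_to_pfn(tb):
    # Strip the count nibble, then drop the type marker bits
    return (tb & ((1 << SWP_TB_COUNT_SHIFT) - 1)) >> SWP_TB_PFN_MARK_BITS

tb = pfn_to_tb(0x1234, count=3)
assert tb_is_pfn(tb) and tb_to_pfn(tb) == 0x1234
```

The round trip shows why the count bits can be added without disturbing the type markers in the low bits.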
No behaviour change yet, just prepare the API. Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swap_state.c | 6 +-- mm/swap_table.h | 131 +++++++++++++++++++++++++++++++++++++++++++++++++++-= ---- 2 files changed, 124 insertions(+), 13 deletions(-) diff --git a/mm/swap_state.c b/mm/swap_state.c index 6d0eef7470be..e213ee35c1d2 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -148,7 +148,7 @@ void __swap_cache_add_folio(struct swap_cluster_info *c= i, VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio); VM_WARN_ON_ONCE_FOLIO(!folio_test_swapbacked(folio), folio); =20 - new_tb =3D folio_to_swp_tb(folio); + new_tb =3D folio_to_swp_tb(folio, 0); ci_start =3D swp_cluster_offset(entry); ci_off =3D ci_start; ci_end =3D ci_start + nr_pages; @@ -249,7 +249,7 @@ void __swap_cache_del_folio(struct swap_cluster_info *c= i, struct folio *folio, VM_WARN_ON_ONCE_FOLIO(folio_test_writeback(folio), folio); =20 si =3D __swap_entry_to_info(entry); - new_tb =3D shadow_swp_to_tb(shadow); + new_tb =3D shadow_to_swp_tb(shadow, 0); ci_start =3D swp_cluster_offset(entry); ci_end =3D ci_start + nr_pages; ci_off =3D ci_start; @@ -331,7 +331,7 @@ void __swap_cache_replace_folio(struct swap_cluster_inf= o *ci, VM_WARN_ON_ONCE(!entry.val); =20 /* Swap cache still stores N entries instead of a high-order entry */ - new_tb =3D folio_to_swp_tb(new); + new_tb =3D folio_to_swp_tb(new, 0); do { old_tb =3D __swap_table_xchg(ci, ci_off, new_tb); WARN_ON_ONCE(!swp_tb_is_folio(old_tb) || swp_tb_to_folio(old_tb) !=3D ol= d); diff --git a/mm/swap_table.h b/mm/swap_table.h index 10e11d1f3b04..10762ac5f4f5 100644 --- a/mm/swap_table.h +++ b/mm/swap_table.h @@ -12,17 +12,72 @@ struct swap_table { }; =20 #define SWP_TABLE_USE_PAGE (sizeof(struct swap_table) =3D=3D PAGE_SIZE) -#define SWP_TB_COUNT_BITS 4 =20 /* * A swap table entry represents the status of a swap slot on a swap * (physical or virtual) device. The swap table in each cluster is a * 1:1 map of the swap slots in this cluster. 
* - * Each swap table entry could be a pointer (folio), a XA_VALUE - * (shadow), or NULL. + * Swap table entry type and bits layouts: + * + * NULL: |---------------- 0 ---------------| - Free slot + * Shadow: | SWAP_COUNT |---- SHADOW_VAL ---|1| - Swapped out slot + * PFN: | SWAP_COUNT |------ PFN -------|10| - Cached slot + * Pointer: |----------- Pointer ----------|100| - (Unused) + * Bad: |------------- 1 -------------|1000| - Bad slot + * + * SWAP_COUNT is `SWP_TB_COUNT_BITS` long, each entry is an atomic long. + * + * Usages: + * + * - NULL: Swap slot is unused, could be allocated. + * + * - Shadow: Swap slot is used and not cached (usually swapped out). It reuses + * the XA_VALUE format to be compatible with working set shadows. SHADOW_VAL + * part might be all 0 if the working shadow info is absent. In such a case, + * we still want to keep the shadow format as a placeholder. + * + * Memcg ID is embedded in SHADOW_VAL. + * + * - PFN: Swap slot is in use, and cached. Memcg info is recorded on the page + * struct. + * + * - Pointer: Unused yet. `0b100` is reserved for potential pointer usage + * because only the lower three bits can be used as a marker for 8 bytes + * aligned pointers. + * + * - Bad: Swap slot is reserved, protects swap header or holes on swap devices.
*/ +#if defined(MAX_POSSIBLE_PHYSMEM_BITS) +#define SWAP_CACHE_PFN_BITS (MAX_POSSIBLE_PHYSMEM_BITS - PAGE_SHIFT) +#elif defined(MAX_PHYSMEM_BITS) +#define SWAP_CACHE_PFN_BITS (MAX_PHYSMEM_BITS - PAGE_SHIFT) +#else +#define SWAP_CACHE_PFN_BITS (BITS_PER_LONG - PAGE_SHIFT) +#endif + +/* NULL Entry, all 0 */ +#define SWP_TB_NULL 0UL + +/* Swapped out: shadow */ +#define SWP_TB_SHADOW_MARK 0b1UL + +/* Cached: PFN */ +#define SWP_TB_PFN_BITS (SWAP_CACHE_PFN_BITS + SWP_TB_PFN_MARK_BITS) +#define SWP_TB_PFN_MARK 0b10UL +#define SWP_TB_PFN_MARK_BITS 2 +#define SWP_TB_PFN_MARK_MASK (BIT(SWP_TB_PFN_MARK_BITS) - 1) + +/* SWAP_COUNT part for PFN or shadow, the width can be shrunk or extended */ +#define SWP_TB_COUNT_BITS min(4, BITS_PER_LONG - SWP_TB_PFN_BITS) +#define SWP_TB_COUNT_MASK (~((~0UL) >> SWP_TB_COUNT_BITS)) +#define SWP_TB_COUNT_SHIFT (BITS_PER_LONG - SWP_TB_COUNT_BITS) +#define SWP_TB_COUNT_MAX ((1 << SWP_TB_COUNT_BITS) - 1) + +/* Bad slot: ends with 0b1000 and rests of bits are all 1 */ +#define SWP_TB_BAD ((~0UL) << 3) + /* Macro for shadow offset calculation */ #define SWAP_COUNT_SHIFT SWP_TB_COUNT_BITS @@ -35,18 +90,47 @@ static inline unsigned long null_to_swp_tb(void) return 0; } -static inline unsigned long folio_to_swp_tb(struct folio *folio) +static inline unsigned long __count_to_swp_tb(unsigned char count) { + /* + * At least three values are needed to distinguish free (0), + * used (count > 0 && count < SWP_TB_COUNT_MAX), and + * overflow (count == SWP_TB_COUNT_MAX).
+ */ + BUILD_BUG_ON(SWP_TB_COUNT_MAX < 2 || SWP_TB_COUNT_BITS < 2); + VM_WARN_ON(count > SWP_TB_COUNT_MAX); + return ((unsigned long)count) << SWP_TB_COUNT_SHIFT; +} + +static inline unsigned long pfn_to_swp_tb(unsigned long pfn, unsigned int = count) +{ + unsigned long swp_tb; + BUILD_BUG_ON(sizeof(unsigned long) !=3D sizeof(void *)); - return (unsigned long)folio; + BUILD_BUG_ON(SWAP_CACHE_PFN_BITS > + (BITS_PER_LONG - SWP_TB_PFN_MARK_BITS - SWP_TB_COUNT_BITS)); + + swp_tb =3D (pfn << SWP_TB_PFN_MARK_BITS) | SWP_TB_PFN_MARK; + VM_WARN_ON_ONCE(swp_tb & SWP_TB_COUNT_MASK); + + return swp_tb | __count_to_swp_tb(count); +} + +static inline unsigned long folio_to_swp_tb(struct folio *folio, unsigned = int count) +{ + return pfn_to_swp_tb(folio_pfn(folio), count); } =20 -static inline unsigned long shadow_swp_to_tb(void *shadow) +static inline unsigned long shadow_to_swp_tb(void *shadow, unsigned int co= unt) { BUILD_BUG_ON((BITS_PER_XA_VALUE + 1) !=3D BITS_PER_BYTE * sizeof(unsigned long)); + BUILD_BUG_ON((unsigned long)xa_mk_value(0) !=3D SWP_TB_SHADOW_MARK); + VM_WARN_ON_ONCE(shadow && !xa_is_value(shadow)); - return (unsigned long)shadow; + VM_WARN_ON_ONCE(shadow && ((unsigned long)shadow & SWP_TB_COUNT_MASK)); + + return (unsigned long)shadow | __count_to_swp_tb(count) | SWP_TB_SHADOW_M= ARK; } =20 /* @@ -59,7 +143,7 @@ static inline bool swp_tb_is_null(unsigned long swp_tb) =20 static inline bool swp_tb_is_folio(unsigned long swp_tb) { - return !xa_is_value((void *)swp_tb) && !swp_tb_is_null(swp_tb); + return ((swp_tb & SWP_TB_PFN_MARK_MASK) =3D=3D SWP_TB_PFN_MARK); } =20 static inline bool swp_tb_is_shadow(unsigned long swp_tb) @@ -67,19 +151,44 @@ static inline bool swp_tb_is_shadow(unsigned long swp_= tb) return xa_is_value((void *)swp_tb); } =20 +static inline bool swp_tb_is_bad(unsigned long swp_tb) +{ + return swp_tb =3D=3D SWP_TB_BAD; +} + +static inline bool swp_tb_is_countable(unsigned long swp_tb) +{ + return (swp_tb_is_shadow(swp_tb) || 
swp_tb_is_folio(swp_tb) || + swp_tb_is_null(swp_tb)); +} + /* * Helpers for retrieving info from swap table. */ static inline struct folio *swp_tb_to_folio(unsigned long swp_tb) { VM_WARN_ON(!swp_tb_is_folio(swp_tb)); - return (void *)swp_tb; + return pfn_folio((swp_tb & ~SWP_TB_COUNT_MASK) >> SWP_TB_PFN_MARK_BITS); } =20 static inline void *swp_tb_to_shadow(unsigned long swp_tb) { VM_WARN_ON(!swp_tb_is_shadow(swp_tb)); - return (void *)swp_tb; + /* No shift needed, xa_value is stored as it is in the lower bits. */ + return (void *)(swp_tb & ~SWP_TB_COUNT_MASK); +} + +static inline unsigned char __swp_tb_get_count(unsigned long swp_tb) +{ + VM_WARN_ON(!swp_tb_is_countable(swp_tb)); + return ((swp_tb & SWP_TB_COUNT_MASK) >> SWP_TB_COUNT_SHIFT); +} + +static inline int swp_tb_get_count(unsigned long swp_tb) +{ + if (swp_tb_is_countable(swp_tb)) + return __swp_tb_get_count(swp_tb); + return -EINVAL; } =20 /* @@ -124,6 +233,8 @@ static inline unsigned long swap_table_get(struct swap_= cluster_info *ci, atomic_long_t *table; unsigned long swp_tb; =20 + VM_WARN_ON_ONCE(off >=3D SWAPFILE_CLUSTER); + rcu_read_lock(); table =3D rcu_dereference(ci->table); swp_tb =3D table ? 
atomic_long_read(&table[off]) : null_to_swp_tb();
--
2.52.0

From nobody Sat Feb 7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:31 +0800
Subject: [PATCH v2 07/12] mm, swap: mark bad slots in swap table directly
Message-Id: <20260128-swap-table-p3-v2-7-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park, linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

In preparation for deprecating swap_map, mark bad slots in the swap table too when setting SWAP_MAP_BAD in swap_map. Also, refine the swap table sanity check on freeing to adapt to the bad-slot change.
For swapoff, the bad slots count must match the cluster usage count, as nothing should touch them, and they contribute to the cluster usage count on swapon. For ordinary swap table freeing, the swap table of clusters with bad slots should never be freed since the cluster usage count never reaches zero. Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swapfile.c | 56 +++++++++++++++++++++++++++++++++++++++++--------------- 1 file changed, 41 insertions(+), 15 deletions(-) diff --git a/mm/swapfile.c b/mm/swapfile.c index df8b13eecab1..bdce2abd9135 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -454,16 +454,37 @@ static void swap_table_free(struct swap_table *table) swap_table_free_folio_rcu_cb); } =20 +/* + * Sanity check to ensure nothing leaked, and the specified range is empty. + * One special case is that bad slots can't be freed, so check the number = of + * bad slots for swapoff, and non-swapoff path must never free bad slots. + */ +static void swap_cluster_assert_empty(struct swap_cluster_info *ci, bool s= wapoff) +{ + unsigned int ci_off =3D 0, ci_end =3D SWAPFILE_CLUSTER; + unsigned long swp_tb; + int bad_slots =3D 0; + + if (!IS_ENABLED(CONFIG_DEBUG_VM) && !swapoff) + return; + + do { + swp_tb =3D __swap_table_get(ci, ci_off); + if (swp_tb_is_bad(swp_tb)) + bad_slots++; + else + WARN_ON_ONCE(!swp_tb_is_null(swp_tb)); + } while (++ci_off < ci_end); + + WARN_ON_ONCE(bad_slots !=3D (swapoff ? 
ci->count : 0)); +} + static void swap_cluster_free_table(struct swap_cluster_info *ci) { - unsigned int ci_off; struct swap_table *table; =20 /* Only empty cluster's table is allow to be freed */ lockdep_assert_held(&ci->lock); - VM_WARN_ON_ONCE(!cluster_is_empty(ci)); - for (ci_off =3D 0; ci_off < SWAPFILE_CLUSTER; ci_off++) - VM_WARN_ON_ONCE(!swp_tb_is_null(__swap_table_get(ci, ci_off))); table =3D (void *)rcu_dereference_protected(ci->table, true); rcu_assign_pointer(ci->table, NULL); =20 @@ -567,6 +588,7 @@ static void swap_cluster_schedule_discard(struct swap_i= nfo_struct *si, =20 static void __free_cluster(struct swap_info_struct *si, struct swap_cluste= r_info *ci) { + swap_cluster_assert_empty(ci, false); swap_cluster_free_table(ci); move_cluster(si, ci, &si->free_clusters, CLUSTER_FLAG_FREE); ci->order =3D 0; @@ -747,9 +769,11 @@ static int swap_cluster_setup_bad_slot(struct swap_inf= o_struct *si, struct swap_cluster_info *cluster_info, unsigned int offset, bool mask) { + unsigned int ci_off =3D offset % SWAPFILE_CLUSTER; unsigned long idx =3D offset / SWAPFILE_CLUSTER; - struct swap_table *table; struct swap_cluster_info *ci; + struct swap_table *table; + int ret =3D 0; =20 /* si->max may got shrunk by swap swap_activate() */ if (offset >=3D si->max && !mask) { @@ -767,13 +791,7 @@ static int swap_cluster_setup_bad_slot(struct swap_inf= o_struct *si, pr_warn("Empty swap-file\n"); return -EINVAL; } - /* Check for duplicated bad swap slots. */ - if (si->swap_map[offset]) { - pr_warn("Duplicated bad slot offset %d\n", offset); - return -EINVAL; - } =20 - si->swap_map[offset] =3D SWAP_MAP_BAD; ci =3D cluster_info + idx; if (!ci->table) { table =3D swap_table_alloc(GFP_KERNEL); @@ -781,13 +799,21 @@ static int swap_cluster_setup_bad_slot(struct swap_in= fo_struct *si, return -ENOMEM; rcu_assign_pointer(ci->table, table); } - - ci->count++; + spin_lock(&ci->lock); + /* Check for duplicated bad swap slots. 
*/ + if (__swap_table_xchg(ci, ci_off, SWP_TB_BAD) != SWP_TB_NULL) { + pr_warn("Duplicated bad slot offset %d\n", offset); + ret = -EINVAL; + } else { + si->swap_map[offset] = SWAP_MAP_BAD; + ci->count++; + } + spin_unlock(&ci->lock); WARN_ON(ci->count > SWAPFILE_CLUSTER); WARN_ON(ci->flags); - return 0; + return ret; } /* @@ -2743,7 +2769,7 @@ static void free_swap_cluster_info(struct swap_cluster_info *cluster_info, /* Cluster with bad marks count will have a remaining table */ spin_lock(&ci->lock); if (rcu_dereference_protected(ci->table, true)) { - ci->count = 0; + swap_cluster_assert_empty(ci, true); swap_cluster_free_table(ci); } spin_unlock(&ci->lock);
--
2.52.0

From nobody Sat Feb 7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:32 +0800
Subject: [PATCH v2 08/12] mm, swap: simplify swap table sanity range check
Message-Id: <20260128-swap-table-p3-v2-8-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park, linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

The newly introduced helper, which checks bad slots and the emptiness of a cluster, can cover the older sanity check just fine, with a more rigorous condition check, so merge them.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
mm/swapfile.c | 35 +++++++++--------------------------
1 file changed, 9 insertions(+), 26 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c index bdce2abd9135..968153691fc4 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -459,9 +459,11 @@ static void swap_table_free(struct swap_table *table) * One special case is that bad slots can't be freed, so check the number of * bad slots for swapoff, and non-swapoff path must never free bad slots.
*/ -static void swap_cluster_assert_empty(struct swap_cluster_info *ci, bool s= wapoff) +static void swap_cluster_assert_empty(struct swap_cluster_info *ci, + unsigned int ci_off, unsigned int nr, + bool swapoff) { - unsigned int ci_off =3D 0, ci_end =3D SWAPFILE_CLUSTER; + unsigned int ci_end =3D ci_off + nr; unsigned long swp_tb; int bad_slots =3D 0; =20 @@ -588,7 +590,7 @@ static void swap_cluster_schedule_discard(struct swap_i= nfo_struct *si, =20 static void __free_cluster(struct swap_info_struct *si, struct swap_cluste= r_info *ci) { - swap_cluster_assert_empty(ci, false); + swap_cluster_assert_empty(ci, 0, SWAPFILE_CLUSTER, false); swap_cluster_free_table(ci); move_cluster(si, ci, &si->free_clusters, CLUSTER_FLAG_FREE); ci->order =3D 0; @@ -898,26 +900,6 @@ static bool cluster_scan_range(struct swap_info_struct= *si, return true; } =20 -/* - * Currently, the swap table is not used for count tracking, just - * do a sanity check here to ensure nothing leaked, so the swap - * table should be empty upon freeing. 
- */ -static void swap_cluster_assert_table_empty(struct swap_cluster_info *ci, - unsigned int start, unsigned int nr) -{ - unsigned int ci_off =3D start % SWAPFILE_CLUSTER; - unsigned int ci_end =3D ci_off + nr; - unsigned long swp_tb; - - if (IS_ENABLED(CONFIG_DEBUG_VM)) { - do { - swp_tb =3D __swap_table_get(ci, ci_off); - VM_WARN_ON_ONCE(!swp_tb_is_null(swp_tb)); - } while (++ci_off < ci_end); - } -} - static bool cluster_alloc_range(struct swap_info_struct *si, struct swap_cluster_info *ci, struct folio *folio, @@ -943,13 +925,14 @@ static bool cluster_alloc_range(struct swap_info_stru= ct *si, if (likely(folio)) { order =3D folio_order(folio); nr_pages =3D 1 << order; + swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false= ); __swap_cache_add_folio(ci, folio, swp_entry(si->type, offset)); } else if (IS_ENABLED(CONFIG_HIBERNATION)) { order =3D 0; nr_pages =3D 1; WARN_ON_ONCE(si->swap_map[offset]); si->swap_map[offset] =3D 1; - swap_cluster_assert_table_empty(ci, offset, 1); + swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, 1, false); } else { /* Allocation without folio is only possible with hibernation */ WARN_ON_ONCE(1); @@ -1768,7 +1751,7 @@ void swap_entries_free(struct swap_info_struct *si, =20 mem_cgroup_uncharge_swap(entry, nr_pages); swap_range_free(si, offset, nr_pages); - swap_cluster_assert_table_empty(ci, offset, nr_pages); + swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false); =20 if (!ci->count) free_cluster(si, ci); @@ -2769,7 +2752,7 @@ static void free_swap_cluster_info(struct swap_cluste= r_info *cluster_info, /* Cluster with bad marks count will have a remaining table */ spin_lock(&ci->lock); if (rcu_dereference_protected(ci->table, true)) { - swap_cluster_assert_empty(ci, true); + swap_cluster_assert_empty(ci, 0, SWAPFILE_CLUSTER, true); swap_cluster_free_table(ci); } spin_unlock(&ci->lock); --=20 2.52.0 From nobody Sat Feb 7 08:13:38 2026 Received: from mail-pj1-f52.google.com 
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:33 +0800
Subject: [PATCH v2 09/12] mm, swap: use the swap table to track the swap count
Message-Id: <20260128-swap-table-p3-v2-9-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park, linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

Now that all the infrastructure is ready, switch to using the swap table only. This is unfortunately a large patch: the whole old counting mechanism, especially SWP_CONTINUED, has to be removed and replaced by the new mechanism in one step, with no intermediate stage available.
The swap table is capable of holding up to SWP_TB_COUNT_MAX - 1 counts in the higher bits of each table entry, so with that, the swap_map can be completely dropped. swap_map also had a per-slot limit (SWAP_MAP_MAX); any value beyond that limit required a COUNT_CONTINUED page. COUNT_CONTINUED is a bit complex to maintain, so for the swap table a simpler approach is used: when the count goes beyond SWP_TB_COUNT_MAX - 1, the cluster gets an extend_table allocated, which is a swap-cluster-sized array of unsigned long, and the counting is offloaded there until the count drops below SWP_TB_COUNT_MAX again. Both the swap table and the extend table are cluster-based, so they exhibit good performance and sparsity.

To make the switch from swap_map to the swap table clean, this commit cleans up and introduces a new set of functions, based on the swap table design, for manipulating swap counts:

- __swap_cluster_dup_entry, __swap_cluster_put_entry, __swap_cluster_alloc_entry, __swap_cluster_free_entry: Increase/decrease the count of a swap slot, or allocate/free a swap slot. These are the internal routines that do the counting work based on the swap table and handle all the complexities. The caller needs to lock the cluster before calling them. All swap count-related update operations are wrapped by these four helpers.

- swap_dup_entries_cluster, swap_put_entries_cluster: Increase/decrease the swap count of one or a set of swap slots in the same cluster range. These two helpers serve as the common routines for folio_dup_swap & swap_dup_entry_direct, and for folio_put_swap & swap_put_entries_direct.

Use these helpers to replace all existing callers. This simplifies the count tracking by a lot, and the swap_map is gone.
Suggested-by: Chris Li Signed-off-by: Kairui Song --- include/linux/swap.h | 28 +- mm/memory.c | 2 +- mm/swap.h | 14 +- mm/swap_state.c | 53 ++-- mm/swap_table.h | 5 + mm/swapfile.c | 780 +++++++++++++++++++----------------------------= ---- 6 files changed, 330 insertions(+), 552 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 62fc7499b408..0effe3cc50f5 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -208,7 +208,6 @@ enum { SWP_DISCARDABLE =3D (1 << 2), /* blkdev support discard */ SWP_DISCARDING =3D (1 << 3), /* now discarding a free cluster */ SWP_SOLIDSTATE =3D (1 << 4), /* blkdev seeks are cheap */ - SWP_CONTINUED =3D (1 << 5), /* swap_map has count continuation */ SWP_BLKDEV =3D (1 << 6), /* its a block device */ SWP_ACTIVATED =3D (1 << 7), /* set after swap_activate success */ SWP_FS_OPS =3D (1 << 8), /* swapfile operations go through fs */ @@ -223,16 +222,6 @@ enum { #define SWAP_CLUSTER_MAX_SKIPPED (SWAP_CLUSTER_MAX << 10) #define COMPACT_CLUSTER_MAX SWAP_CLUSTER_MAX =20 -/* Bit flag in swap_map */ -#define COUNT_CONTINUED 0x80 /* Flag swap_map continuation for full count = */ - -/* Special value in first swap_map */ -#define SWAP_MAP_MAX 0x3e /* Max count */ -#define SWAP_MAP_BAD 0x3f /* Note page is bad */ - -/* Special value in each swap_map continuation */ -#define SWAP_CONT_MAX 0x7f /* Max count */ - /* * The first page in the swap file is the swap header, which is always mar= ked * bad to prevent it from being allocated as an entry. 
This also prevents = the @@ -264,8 +253,7 @@ struct swap_info_struct { signed short prio; /* swap priority of this type */ struct plist_node list; /* entry in swap_active_head */ signed char type; /* strange name for an index */ - unsigned int max; /* extent of the swap_map */ - unsigned char *swap_map; /* vmalloc'ed array of usage counts */ + unsigned int max; /* size of this swap device */ unsigned long *zeromap; /* kvmalloc'ed bitmap to track zero pages */ struct swap_cluster_info *cluster_info; /* cluster info. Only for SSD */ struct list_head free_clusters; /* free clusters list */ @@ -284,18 +272,14 @@ struct swap_info_struct { struct completion comp; /* seldom referenced */ spinlock_t lock; /* * protect map scan related fields like - * swap_map, inuse_pages and all cluster - * lists. other fields are only changed + * inuse_pages and all cluster lists. + * Other fields are only changed * at swapon/swapoff, so are protected * by swap_lock. changing flags need * hold this lock and swap_lock. If * both locks need hold, hold swap_lock * first. */ - spinlock_t cont_lock; /* - * protect swap count continuation page - * list. 
- */ struct work_struct discard_work; /* discard worker */ struct work_struct reclaim_work; /* reclaim worker */ struct list_head discard_clusters; /* discard clusters list */ @@ -451,7 +435,6 @@ static inline long get_nr_swap_pages(void) } =20 extern void si_swapinfo(struct sysinfo *); -extern int add_swap_count_continuation(swp_entry_t, gfp_t); int swap_type_of(dev_t device, sector_t offset); int find_first_swap(dev_t *device); extern unsigned int count_swap_pages(int, int); @@ -517,11 +500,6 @@ static inline void free_swap_cache(struct folio *folio) { } =20 -static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_m= ask) -{ - return 0; -} - static inline int swap_dup_entry_direct(swp_entry_t ent) { return 0; diff --git a/mm/memory.c b/mm/memory.c index 7238dd9dd629..a5f471d79507 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1348,7 +1348,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct= vm_area_struct *src_vma, =20 if (ret =3D=3D -EIO) { VM_WARN_ON_ONCE(!entry.val); - if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) { + if (swap_retry_table_alloc(entry, GFP_KERNEL) < 0) { ret =3D -ENOMEM; goto out; } diff --git a/mm/swap.h b/mm/swap.h index bfafa637c458..751430e2d2a5 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -37,6 +37,7 @@ struct swap_cluster_info { u8 flags; u8 order; atomic_long_t __rcu *table; /* Swap table entries, see mm/swap_table.h */ + unsigned long *extend_table; /* For large swap count, protected by ci->lo= ck */ struct list_head list; }; =20 @@ -183,6 +184,8 @@ static inline void swap_cluster_unlock_irq(struct swap_= cluster_info *ci) spin_unlock_irq(&ci->lock); } =20 +extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp); + /* * Below are the core routines for doing swap for a folio. 
* All helpers requires the folio to be locked, and a locked folio @@ -206,9 +209,9 @@ int folio_dup_swap(struct folio *folio, struct page *su= bpage); void folio_put_swap(struct folio *folio, struct page *subpage); =20 /* For internal use */ -extern void swap_entries_free(struct swap_info_struct *si, - struct swap_cluster_info *ci, - unsigned long offset, unsigned int nr_pages); +extern void __swap_cluster_free_entries(struct swap_info_struct *si, + struct swap_cluster_info *ci, + unsigned int ci_off, unsigned int nr_pages); =20 /* linux/mm/page_io.c */ int sio_pool_init(void); @@ -446,6 +449,11 @@ static inline int swap_writeout(struct folio *folio, return 0; } =20 +static inline int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp) +{ + return -EINVAL; +} + static inline bool swap_cache_has_folio(swp_entry_t entry) { return false; diff --git a/mm/swap_state.c b/mm/swap_state.c index e213ee35c1d2..c808f0948b10 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -140,21 +140,20 @@ void *swap_cache_get_shadow(swp_entry_t entry) void __swap_cache_add_folio(struct swap_cluster_info *ci, struct folio *folio, swp_entry_t entry) { - unsigned long new_tb; - unsigned int ci_start, ci_off, ci_end; + unsigned int ci_off =3D swp_cluster_offset(entry), ci_end; unsigned long nr_pages =3D folio_nr_pages(folio); + unsigned long pfn =3D folio_pfn(folio); + unsigned long old_tb; =20 VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio); VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio); VM_WARN_ON_ONCE_FOLIO(!folio_test_swapbacked(folio), folio); =20 - new_tb =3D folio_to_swp_tb(folio, 0); - ci_start =3D swp_cluster_offset(entry); - ci_off =3D ci_start; - ci_end =3D ci_start + nr_pages; + ci_end =3D ci_off + nr_pages; do { - VM_WARN_ON_ONCE(swp_tb_is_folio(__swap_table_get(ci, ci_off))); - __swap_table_set(ci, ci_off, new_tb); + old_tb =3D __swap_table_get(ci, ci_off); + VM_WARN_ON_ONCE(swp_tb_is_folio(old_tb)); + __swap_table_set(ci, ci_off, pfn_to_swp_tb(pfn, 
__swp_tb_get_count(old_t= b))); } while (++ci_off < ci_end); =20 folio_ref_add(folio, nr_pages); @@ -183,14 +182,13 @@ static int swap_cache_add_folio(struct folio *folio, = swp_entry_t entry, unsigned long old_tb; struct swap_info_struct *si; struct swap_cluster_info *ci; - unsigned int ci_start, ci_off, ci_end, offset; + unsigned int ci_start, ci_off, ci_end; unsigned long nr_pages =3D folio_nr_pages(folio); =20 si =3D __swap_entry_to_info(entry); ci_start =3D swp_cluster_offset(entry); ci_end =3D ci_start + nr_pages; ci_off =3D ci_start; - offset =3D swp_offset(entry); ci =3D swap_cluster_lock(si, swp_offset(entry)); if (unlikely(!ci->table)) { err =3D -ENOENT; @@ -202,13 +200,12 @@ static int swap_cache_add_folio(struct folio *folio, = swp_entry_t entry, err =3D -EEXIST; goto failed; } - if (unlikely(!__swap_count(swp_entry(swp_type(entry), offset)))) { + if (unlikely(!__swp_tb_get_count(old_tb))) { err =3D -ENOENT; goto failed; } if (swp_tb_is_shadow(old_tb)) shadow =3D swp_tb_to_shadow(old_tb); - offset++; } while (++ci_off < ci_end); __swap_cache_add_folio(ci, folio, entry); swap_cluster_unlock(ci); @@ -237,8 +234,9 @@ static int swap_cache_add_folio(struct folio *folio, sw= p_entry_t entry, void __swap_cache_del_folio(struct swap_cluster_info *ci, struct folio *fo= lio, swp_entry_t entry, void *shadow) { + int count; + unsigned long old_tb; struct swap_info_struct *si; - unsigned long old_tb, new_tb; unsigned int ci_start, ci_off, ci_end; bool folio_swapped =3D false, need_free =3D false; unsigned long nr_pages =3D folio_nr_pages(folio); @@ -249,20 +247,20 @@ void __swap_cache_del_folio(struct swap_cluster_info = *ci, struct folio *folio, VM_WARN_ON_ONCE_FOLIO(folio_test_writeback(folio), folio); =20 si =3D __swap_entry_to_info(entry); - new_tb =3D shadow_to_swp_tb(shadow, 0); ci_start =3D swp_cluster_offset(entry); ci_end =3D ci_start + nr_pages; ci_off =3D ci_start; do { - /* If shadow is NULL, we sets an empty shadow */ - old_tb =3D __swap_table_xchg(ci, 
ci_off, new_tb); + old_tb =3D __swap_table_get(ci, ci_off); WARN_ON_ONCE(!swp_tb_is_folio(old_tb) || swp_tb_to_folio(old_tb) !=3D folio); - if (__swap_count(swp_entry(si->type, - swp_offset(entry) + ci_off - ci_start))) + count =3D __swp_tb_get_count(old_tb); + if (count) folio_swapped =3D true; else need_free =3D true; + /* If shadow is NULL, we sets an empty shadow. */ + __swap_table_set(ci, ci_off, shadow_to_swp_tb(shadow, count)); } while (++ci_off < ci_end); =20 folio->swap.val =3D 0; @@ -271,13 +269,13 @@ void __swap_cache_del_folio(struct swap_cluster_info = *ci, struct folio *folio, lruvec_stat_mod_folio(folio, NR_SWAPCACHE, -nr_pages); =20 if (!folio_swapped) { - swap_entries_free(si, ci, swp_offset(entry), nr_pages); + __swap_cluster_free_entries(si, ci, ci_start, nr_pages); } else if (need_free) { + ci_off =3D ci_start; do { - if (!__swap_count(entry)) - swap_entries_free(si, ci, swp_offset(entry), 1); - entry.val++; - } while (--nr_pages); + if (!__swp_tb_get_count(__swap_table_get(ci, ci_off))) + __swap_cluster_free_entries(si, ci, ci_off, 1); + } while (++ci_off < ci_end); } } =20 @@ -324,17 +322,18 @@ void __swap_cache_replace_folio(struct swap_cluster_i= nfo *ci, unsigned long nr_pages =3D folio_nr_pages(new); unsigned int ci_off =3D swp_cluster_offset(entry); unsigned int ci_end =3D ci_off + nr_pages; - unsigned long old_tb, new_tb; + unsigned int pfn =3D folio_pfn(new); + unsigned long old_tb; =20 VM_WARN_ON_ONCE(!folio_test_swapcache(old) || !folio_test_swapcache(new)); VM_WARN_ON_ONCE(!folio_test_locked(old) || !folio_test_locked(new)); VM_WARN_ON_ONCE(!entry.val); =20 /* Swap cache still stores N entries instead of a high-order entry */ - new_tb =3D folio_to_swp_tb(new, 0); do { - old_tb =3D __swap_table_xchg(ci, ci_off, new_tb); + old_tb =3D __swap_table_get(ci, ci_off); WARN_ON_ONCE(!swp_tb_is_folio(old_tb) || swp_tb_to_folio(old_tb) !=3D ol= d); + __swap_table_set(ci, ci_off, pfn_to_swp_tb(pfn, __swp_tb_get_count(old_t= b))); } while 
(++ci_off < ci_end); =20 /* @@ -368,7 +367,7 @@ void __swap_cache_clear_shadow(swp_entry_t entry, int n= r_ents) ci_end =3D ci_off + nr_ents; do { old =3D __swap_table_xchg(ci, ci_off, null_to_swp_tb()); - WARN_ON_ONCE(swp_tb_is_folio(old)); + WARN_ON_ONCE(swp_tb_is_folio(old) || swp_tb_get_count(old)); } while (++ci_off < ci_end); } =20 diff --git a/mm/swap_table.h b/mm/swap_table.h index 10762ac5f4f5..8415ffbe2b9c 100644 --- a/mm/swap_table.h +++ b/mm/swap_table.h @@ -191,6 +191,11 @@ static inline int swp_tb_get_count(unsigned long swp_t= b) return -EINVAL; } =20 +static inline unsigned long __swp_tb_mk_count(unsigned long swp_tb, int co= unt) +{ + return ((swp_tb & ~SWP_TB_COUNT_MASK) | __count_to_swp_tb(count)); +} + /* * Helpers for accessing or modifying the swap table of a cluster, * the swap cluster must be locked. diff --git a/mm/swapfile.c b/mm/swapfile.c index 968153691fc4..45579ace27ba 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -51,15 +51,8 @@ #include "swap_table.h" #include "swap.h" =20 -static bool swap_count_continued(struct swap_info_struct *, pgoff_t, - unsigned char); -static void free_swap_count_continuations(struct swap_info_struct *); static void swap_range_alloc(struct swap_info_struct *si, unsigned int nr_entries); -static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr= ); -static void swap_put_entry_locked(struct swap_info_struct *si, - struct swap_cluster_info *ci, - unsigned long offset); static bool folio_swapcache_freeable(struct folio *folio); static void move_cluster(struct swap_info_struct *si, struct swap_cluster_info *ci, struct list_head *list, @@ -182,22 +175,19 @@ static long swap_usage_in_pages(struct swap_info_stru= ct *si) /* Reclaim the swap entry if swap is getting full */ #define TTRS_FULL 0x4 =20 -static bool swap_only_has_cache(struct swap_info_struct *si, - struct swap_cluster_info *ci, +static bool swap_only_has_cache(struct swap_cluster_info *ci, unsigned long offset, int nr_pages) { 
unsigned int ci_off =3D offset % SWAPFILE_CLUSTER; - unsigned char *map =3D si->swap_map + offset; - unsigned char *map_end =3D map + nr_pages; + unsigned int ci_end =3D ci_off + nr_pages; unsigned long swp_tb; =20 do { swp_tb =3D __swap_table_get(ci, ci_off); VM_WARN_ON_ONCE(!swp_tb_is_folio(swp_tb)); - if (*map) + if (swp_tb_get_count(swp_tb)) return false; - ++ci_off; - } while (++map < map_end); + } while (++ci_off < ci_end); =20 return true; } @@ -256,7 +246,7 @@ static int __try_to_reclaim_swap(struct swap_info_struc= t *si, * reference or pending writeback, and can't be allocated to others. */ ci =3D swap_cluster_lock(si, offset); - need_reclaim =3D swap_only_has_cache(si, ci, offset, nr_pages); + need_reclaim =3D swap_only_has_cache(ci, offset, nr_pages); swap_cluster_unlock(ci); if (!need_reclaim) goto out_unlock; @@ -479,6 +469,7 @@ static void swap_cluster_assert_empty(struct swap_clust= er_info *ci, } while (++ci_off < ci_end); =20 WARN_ON_ONCE(bad_slots !=3D (swapoff ? ci->count : 0)); + WARN_ON_ONCE(nr =3D=3D SWAPFILE_CLUSTER && ci->extend_table); } =20 static void swap_cluster_free_table(struct swap_cluster_info *ci) @@ -529,7 +520,7 @@ swap_cluster_alloc_table(struct swap_info_struct *si, spin_unlock(&si->global_cluster_lock); local_unlock(&percpu_swap_cluster.lock); =20 - table =3D swap_table_alloc(__GFP_HIGH | __GFP_NOMEMALLOC | GFP_KERNEL); + table =3D swap_table_alloc(__GFP_HIGH | __GFP_NOMEMALLOC | GFP_KERNEL | _= _GFP_NOWARN); =20 /* * Back to atomic context. 
We might have migrated to a new CPU with a @@ -807,7 +798,6 @@ static int swap_cluster_setup_bad_slot(struct swap_info= _struct *si, pr_warn("Duplicated bad slot offset %d\n", offset); ret =3D -EINVAL; } else { - si->swap_map[offset] =3D SWAP_MAP_BAD; ci->count++; } spin_unlock(&ci->lock); @@ -829,18 +819,16 @@ static bool cluster_reclaim_range(struct swap_info_st= ruct *si, { unsigned int nr_pages =3D 1 << order; unsigned long offset =3D start, end =3D start + nr_pages; - unsigned char *map =3D si->swap_map; unsigned long swp_tb; =20 spin_unlock(&ci->lock); do { - if (READ_ONCE(map[offset])) - break; swp_tb =3D swap_table_get(ci, offset % SWAPFILE_CLUSTER); - if (swp_tb_is_folio(swp_tb)) { + if (swp_tb_get_count(swp_tb)) + break; + if (swp_tb_is_folio(swp_tb)) if (__try_to_reclaim_swap(si, offset, TTRS_ANYWAY) < 0) break; - } } while (++offset < end); spin_lock(&ci->lock); =20 @@ -864,7 +852,7 @@ static bool cluster_reclaim_range(struct swap_info_stru= ct *si, */ for (offset =3D start; offset < end; offset++) { swp_tb =3D __swap_table_get(ci, offset % SWAPFILE_CLUSTER); - if (map[offset] || !swp_tb_is_null(swp_tb)) + if (!swp_tb_is_null(swp_tb)) return false; } =20 @@ -876,37 +864,35 @@ static bool cluster_scan_range(struct swap_info_struc= t *si, unsigned long offset, unsigned int nr_pages, bool *need_reclaim) { - unsigned long end =3D offset + nr_pages; - unsigned char *map =3D si->swap_map; + unsigned int ci_off =3D offset % SWAPFILE_CLUSTER; + unsigned int ci_end =3D ci_off + nr_pages; unsigned long swp_tb; =20 - if (cluster_is_empty(ci)) - return true; - do { - if (map[offset]) - return false; - swp_tb =3D __swap_table_get(ci, offset % SWAPFILE_CLUSTER); - if (swp_tb_is_folio(swp_tb)) { + swp_tb =3D __swap_table_get(ci, ci_off); + if (swp_tb_is_null(swp_tb)) + continue; + if (swp_tb_is_folio(swp_tb) && !__swp_tb_get_count(swp_tb)) { if (!vm_swap_full()) return false; *need_reclaim =3D true; - } else { - /* A entry with no count and no cache must be null */ - 
VM_WARN_ON_ONCE(!swp_tb_is_null(swp_tb)); + continue; } - } while (++offset < end); + /* Slot with zero count can only be NULL or folio */ + VM_WARN_ON(!swp_tb_get_count(swp_tb)); + return false; + } while (++ci_off < ci_end); =20 return true; } =20 -static bool cluster_alloc_range(struct swap_info_struct *si, - struct swap_cluster_info *ci, - struct folio *folio, - unsigned int offset) +static bool __swap_cluster_alloc_entries(struct swap_info_struct *si, + struct swap_cluster_info *ci, + struct folio *folio, + unsigned int ci_off) { - unsigned long nr_pages; unsigned int order; + unsigned long nr_pages; =20 lockdep_assert_held(&ci->lock); =20 @@ -925,14 +911,15 @@ static bool cluster_alloc_range(struct swap_info_stru= ct *si, if (likely(folio)) { order =3D folio_order(folio); nr_pages =3D 1 << order; - swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false= ); - __swap_cache_add_folio(ci, folio, swp_entry(si->type, offset)); + swap_cluster_assert_empty(ci, ci_off, nr_pages, false); + __swap_cache_add_folio(ci, folio, swp_entry(si->type, + ci_off + cluster_offset(si, ci))); } else if (IS_ENABLED(CONFIG_HIBERNATION)) { order =3D 0; nr_pages =3D 1; - WARN_ON_ONCE(si->swap_map[offset]); - si->swap_map[offset] =3D 1; - swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, 1, false); + swap_cluster_assert_empty(ci, ci_off, 1, false); + /* Sets a fake shadow as placeholder */ + __swap_table_set(ci, ci_off, shadow_to_swp_tb(NULL, 1)); } else { /* Allocation without folio is only possible with hibernation */ WARN_ON_ONCE(1); @@ -983,7 +970,7 @@ static unsigned int alloc_swap_scan_cluster(struct swap= _info_struct *si, if (!ret) continue; } - if (!cluster_alloc_range(si, ci, folio, offset)) + if (!__swap_cluster_alloc_entries(si, ci, folio, offset % SWAPFILE_CLUST= ER)) break; found =3D offset; offset +=3D nr_pages; @@ -1030,7 +1017,7 @@ static void swap_reclaim_full_clusters(struct swap_in= fo_struct *si, bool force) long to_scan =3D 1; unsigned long 
offset, end; struct swap_cluster_info *ci; - unsigned char *map =3D si->swap_map; + unsigned long swp_tb; int nr_reclaim; =20 if (force) @@ -1042,8 +1029,8 @@ static void swap_reclaim_full_clusters(struct swap_in= fo_struct *si, bool force) to_scan--; =20 while (offset < end) { - if (!READ_ONCE(map[offset]) && - swp_tb_is_folio(swap_table_get(ci, offset % SWAPFILE_CLUSTER))) { + swp_tb =3D swap_table_get(ci, offset % SWAPFILE_CLUSTER); + if (swp_tb_is_folio(swp_tb) && !__swp_tb_get_count(swp_tb)) { spin_unlock(&ci->lock); nr_reclaim =3D __try_to_reclaim_swap(si, offset, TTRS_ANYWAY); @@ -1452,40 +1439,129 @@ static bool swap_sync_discard(void) return false; } =20 +/* + * Allocate an array of unsigned long to contain counts above SWP_TB_COUNT= _MAX. + */ +static int swap_extend_table_alloc(struct swap_info_struct *si, + struct swap_cluster_info *ci, gfp_t gfp) +{ + void *table =3D kzalloc(sizeof(unsigned long) * SWAPFILE_CLUSTER, gfp); + + if (!table) + return -ENOMEM; + + spin_lock(&ci->lock); + if (!ci->extend_table) + ci->extend_table =3D table; + else + kfree(table); + spin_unlock(&ci->lock); + return 0; +} + +int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp) +{ + int ret; + struct swap_info_struct *si; + struct swap_cluster_info *ci; + unsigned long offset =3D swp_offset(entry); + + si =3D get_swap_device(entry); + if (!si) + return 0; + + ci =3D __swap_offset_to_cluster(si, offset); + ret =3D swap_extend_table_alloc(si, ci, gfp); + + put_swap_device(si); + return ret; +} + +static void swap_extend_table_try_free(struct swap_cluster_info *ci) +{ + unsigned long i; + bool can_free =3D true; + + if (!ci->extend_table) + return; + + for (i =3D 0; i < SWAPFILE_CLUSTER; i++) { + if (ci->extend_table[i]) + can_free =3D false; + } + + if (can_free) { + kfree(ci->extend_table); + ci->extend_table =3D NULL; + } +} + +/* Decrease the swap count of one slot, without freeing it */ +static void __swap_cluster_put_entry(struct swap_cluster_info *ci, + unsigned int 
ci_off) +{ + int count; + unsigned long swp_tb; + + lockdep_assert_held(&ci->lock); + swp_tb =3D __swap_table_get(ci, ci_off); + count =3D __swp_tb_get_count(swp_tb); + + VM_WARN_ON_ONCE(count <=3D 0); + VM_WARN_ON_ONCE(count > SWP_TB_COUNT_MAX); + + if (count =3D=3D SWP_TB_COUNT_MAX) { + count =3D ci->extend_table[ci_off]; + /* Overflow starts with SWP_TB_COUNT_MAX */ + VM_WARN_ON_ONCE(count < SWP_TB_COUNT_MAX); + count--; + if (count =3D=3D (SWP_TB_COUNT_MAX - 1)) { + ci->extend_table[ci_off] =3D 0; + __swap_table_set(ci, ci_off, __swp_tb_mk_count(swp_tb, count)); + swap_extend_table_try_free(ci); + } else { + ci->extend_table[ci_off] =3D count; + } + } else { + __swap_table_set(ci, ci_off, __swp_tb_mk_count(swp_tb, --count)); + } +} + /** - * swap_put_entries_cluster - Decrease the swap count of a set of slots. + * swap_put_entries_cluster - Decrease the swap count of slots within one = cluster * @si: The swap device. - * @start: start offset of slots. + * @offset: start offset of slots. * @nr: number of slots. - * @reclaim_cache: if true, also reclaim the swap cache. + * @reclaim_cache: if true, also reclaim the swap cache if slots are freed. * * This helper decreases the swap count of a set of slots and tries to * batch free them. Also reclaims the swap cache if @reclaim_cache is true. - * Context: The caller must ensure that all slots belong to the same - * cluster and their swap count doesn't go underflow. + * + * Context: The specified slots must be pinned by existing swap count or s= wap + * cache reference, so they won't be released until this helper returns. 
*/ static void swap_put_entries_cluster(struct swap_info_struct *si, - unsigned long start, int nr, + pgoff_t offset, int nr, bool reclaim_cache) { - unsigned long offset =3D start, end =3D start + nr; - unsigned long batch_start =3D SWAP_ENTRY_INVALID; struct swap_cluster_info *ci; + unsigned int ci_off, ci_end; + pgoff_t end =3D offset + nr; bool need_reclaim =3D false; unsigned int nr_reclaimed; unsigned long swp_tb; - unsigned int count; + int ci_batch =3D -1; =20 ci =3D swap_cluster_lock(si, offset); + ci_off =3D offset % SWAPFILE_CLUSTER; + ci_end =3D ci_off + nr; do { - swp_tb =3D __swap_table_get(ci, offset % SWAPFILE_CLUSTER); - count =3D si->swap_map[offset]; - VM_WARN_ON(count < 1 || count =3D=3D SWAP_MAP_BAD); - if (count =3D=3D 1) { + swp_tb =3D __swap_table_get(ci, ci_off); + if (swp_tb_get_count(swp_tb) =3D=3D 1) { /* count =3D=3D 1 and non-cached slots will be batch freed. */ if (!swp_tb_is_folio(swp_tb)) { - if (!batch_start) - batch_start =3D offset; + if (ci_batch =3D=3D -1) + ci_batch =3D ci_off; continue; } /* count will be 0 after put, slot can be reclaimed */ @@ -1497,21 +1573,20 @@ static void swap_put_entries_cluster(struct swap_in= fo_struct *si, * slots will be freed when folio is removed from swap cache * (__swap_cache_del_folio). 
*/ - swap_put_entry_locked(si, ci, offset); - if (batch_start) { - swap_entries_free(si, ci, batch_start, offset - batch_start); - batch_start =3D SWAP_ENTRY_INVALID; + __swap_cluster_put_entry(ci, ci_off); + if (ci_batch !=3D -1) { + __swap_cluster_free_entries(si, ci, ci_batch, ci_off - ci_batch); + ci_batch =3D -1; } - } while (++offset < end); + } while (++ci_off < ci_end); =20 - if (batch_start) - swap_entries_free(si, ci, batch_start, offset - batch_start); + if (ci_batch !=3D -1) + __swap_cluster_free_entries(si, ci, ci_batch, ci_off - ci_batch); swap_cluster_unlock(ci); =20 if (!need_reclaim || !reclaim_cache) return; =20 - offset =3D start; do { nr_reclaimed =3D __try_to_reclaim_swap(si, offset, TTRS_UNMAPPED | TTRS_FULL); @@ -1521,6 +1596,90 @@ static void swap_put_entries_cluster(struct swap_inf= o_struct *si, } while (offset < end); } =20 +/* Increase the swap count of one slot. */ +static int __swap_cluster_dup_entry(struct swap_cluster_info *ci, + unsigned int ci_off) +{ + int count; + unsigned long swp_tb; + + lockdep_assert_held(&ci->lock); + swp_tb =3D __swap_table_get(ci, ci_off); + /* Bad or special slots can't be handled */ + if (WARN_ON_ONCE(swp_tb_is_bad(swp_tb))) + return -EINVAL; + count =3D __swp_tb_get_count(swp_tb); + /* Must be either cached or have a count already */ + if (WARN_ON_ONCE(!count && !swp_tb_is_folio(swp_tb))) + return -ENOENT; + + if (likely(count < (SWP_TB_COUNT_MAX - 1))) { + __swap_table_set(ci, ci_off, __swp_tb_mk_count(swp_tb, count + 1)); + VM_WARN_ON_ONCE(ci->extend_table && ci->extend_table[ci_off]); + } else if (count =3D=3D (SWP_TB_COUNT_MAX - 1)) { + if (ci->extend_table) { + VM_WARN_ON_ONCE(ci->extend_table[ci_off]); + ci->extend_table[ci_off] =3D SWP_TB_COUNT_MAX; + __swap_table_set(ci, ci_off, __swp_tb_mk_count(swp_tb, SWP_TB_COUNT_MAX= )); + } else { + return -ENOMEM; + } + } else if (count =3D=3D SWP_TB_COUNT_MAX) { + ++ci->extend_table[ci_off]; + } else { + /* Never happens unless counting went wrong */ + 
WARN_ON_ONCE(1); + } + + return 0; +} + +/** + * swap_dup_entries_cluster: Increase the swap count of slots within one c= luster. + * @si: The swap device. + * @offset: start offset of slots. + * @nr: number of slots. + * + * Context: The specified slots must be pinned by existing swap count or s= wap + * cache reference, so they won't be released until this helper returns. + * Return: 0 on success. -ENOMEM if the swap count maxed out (SWP_TB_COUNT= _MAX) + * and failed to allocate an extended table. + */ +static int swap_dup_entries_cluster(struct swap_info_struct *si, + pgoff_t offset, int nr) +{ + int err; + struct swap_cluster_info *ci; + unsigned int ci_start, ci_off, ci_end; + + ci_start =3D offset % SWAPFILE_CLUSTER; + ci_end =3D ci_start + nr; + ci_off =3D ci_start; + ci =3D swap_cluster_lock(si, offset); +restart: + do { + err =3D __swap_cluster_dup_entry(ci, ci_off); + if (unlikely(err)) { + if (err =3D=3D -ENOMEM) { + spin_unlock(&ci->lock); + err =3D swap_extend_table_alloc(si, ci, GFP_ATOMIC); + spin_lock(&ci->lock); + if (!err) + goto restart; + } + goto failed; + } + } while (++ci_off < ci_end); + swap_cluster_unlock(ci); + return 0; +failed: + while (ci_off-- > ci_start) + __swap_cluster_put_entry(ci, ci_off); + swap_extend_table_try_free(ci); + swap_cluster_unlock(ci); + return err; +} + /** * folio_alloc_swap - allocate swap space for a folio * @folio: folio we want to move to swap @@ -1595,7 +1754,6 @@ int folio_alloc_swap(struct folio *folio) */ int folio_dup_swap(struct folio *folio, struct page *subpage) { - int err =3D 0; swp_entry_t entry =3D folio->swap; unsigned long nr_pages =3D folio_nr_pages(folio); =20 @@ -1607,10 +1765,8 @@ int folio_dup_swap(struct folio *folio, struct page = *subpage) nr_pages =3D 1; } =20 - while (!err && __swap_duplicate(entry, 1, nr_pages) =3D=3D -ENOMEM) - err =3D add_swap_count_continuation(entry, GFP_ATOMIC); - - return err; + return swap_dup_entries_cluster(swap_entry_to_info(entry), + swp_offset(entry), 
nr_pages); } =20 /** @@ -1639,28 +1795,6 @@ void folio_put_swap(struct folio *folio, struct page= *subpage) swap_put_entries_cluster(si, swp_offset(entry), nr_pages, false); } =20 -static void swap_put_entry_locked(struct swap_info_struct *si, - struct swap_cluster_info *ci, - unsigned long offset) -{ - unsigned char count; - - count =3D si->swap_map[offset]; - if ((count & ~COUNT_CONTINUED) <=3D SWAP_MAP_MAX) { - if (count =3D=3D COUNT_CONTINUED) { - if (swap_count_continued(si, offset, count)) - count =3D SWAP_MAP_MAX | COUNT_CONTINUED; - else - count =3D SWAP_MAP_MAX; - } else - count--; - } - - WRITE_ONCE(si->swap_map[offset], count); - if (!count && !swp_tb_is_folio(__swap_table_get(ci, offset % SWAPFILE_CLU= STER))) - swap_entries_free(si, ci, offset, 1); -} - /* * When we get a swap entry, if there aren't some other ways to * prevent swapoff, such as the folio in swap cache is locked, RCU @@ -1727,31 +1861,30 @@ struct swap_info_struct *get_swap_device(swp_entry_= t entry) } =20 /* - * Drop the last ref of swap entries, caller have to ensure all entries - * belong to the same cgroup and cluster. + * Free a set of swap slots after their swap count dropped to zero, or wil= l be + * zero after putting the last ref (saves one __swap_cluster_put_entry cal= l). 
*/ -void swap_entries_free(struct swap_info_struct *si, - struct swap_cluster_info *ci, - unsigned long offset, unsigned int nr_pages) +void __swap_cluster_free_entries(struct swap_info_struct *si, + struct swap_cluster_info *ci, + unsigned int ci_start, unsigned int nr_pages) { - swp_entry_t entry =3D swp_entry(si->type, offset); - unsigned char *map =3D si->swap_map + offset; - unsigned char *map_end =3D map + nr_pages; + unsigned long old_tb; + unsigned int ci_off =3D ci_start, ci_end =3D ci_start + nr_pages; + unsigned long offset =3D cluster_offset(si, ci) + ci_start; =20 - /* It should never free entries across different clusters */ - VM_BUG_ON(ci !=3D __swap_offset_to_cluster(si, offset + nr_pages - 1)); - VM_BUG_ON(cluster_is_empty(ci)); - VM_BUG_ON(ci->count < nr_pages); + VM_WARN_ON(ci->count < nr_pages); =20 ci->count -=3D nr_pages; do { - VM_WARN_ON(*map > 1); - *map =3D 0; - } while (++map < map_end); + old_tb =3D __swap_table_get(ci, ci_off); + /* Release the last ref, or after swap cache is dropped */ + VM_WARN_ON(!swp_tb_is_shadow(old_tb) || __swp_tb_get_count(old_tb) > 1); + __swap_table_set(ci, ci_off, null_to_swp_tb()); + } while (++ci_off < ci_end); =20 - mem_cgroup_uncharge_swap(entry, nr_pages); + mem_cgroup_uncharge_swap(swp_entry(si->type, offset), nr_pages); swap_range_free(si, offset, nr_pages); - swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false); + swap_cluster_assert_empty(ci, ci_start, nr_pages, false); =20 if (!ci->count) free_cluster(si, ci); @@ -1761,10 +1894,10 @@ void swap_entries_free(struct swap_info_struct *si, =20 int __swap_count(swp_entry_t entry) { - struct swap_info_struct *si =3D __swap_entry_to_info(entry); - pgoff_t offset =3D swp_offset(entry); + struct swap_cluster_info *ci =3D __swap_entry_to_cluster(entry); + unsigned int ci_off =3D swp_cluster_offset(entry); =20 - return si->swap_map[offset]; + return swp_tb_get_count(__swap_table_get(ci, ci_off)); } =20 /** @@ -1776,81 +1909,62 @@ bool 
swap_entry_swapped(struct swap_info_struct *si, swp_entry_t entry)
 {
        pgoff_t offset = swp_offset(entry);
        struct swap_cluster_info *ci;
-       int count;
+       unsigned long swp_tb;

        ci = swap_cluster_lock(si, offset);
-       count = si->swap_map[offset];
+       swp_tb = swap_table_get(ci, offset % SWAPFILE_CLUSTER);
        swap_cluster_unlock(ci);

-       return count && count != SWAP_MAP_BAD;
+       return swp_tb_get_count(swp_tb) > 0;
 }

 /*
  * How many references to @entry are currently swapped out?
- * This considers COUNT_CONTINUED so it returns exact answer.
+ * This returns the exact answer.
  */
 int swp_swapcount(swp_entry_t entry)
 {
-       int count, tmp_count, n;
        struct swap_info_struct *si;
        struct swap_cluster_info *ci;
-       struct page *page;
-       pgoff_t offset;
-       unsigned char *map;
+       unsigned long swp_tb;
+       int count;

        si = get_swap_device(entry);
        if (!si)
                return 0;

-       offset = swp_offset(entry);
-
-       ci = swap_cluster_lock(si, offset);
-
-       count = si->swap_map[offset];
-       if (!(count & COUNT_CONTINUED))
-               goto out;
-
-       count &= ~COUNT_CONTINUED;
-       n = SWAP_MAP_MAX + 1;
-
-       page = vmalloc_to_page(si->swap_map + offset);
-       offset &= ~PAGE_MASK;
-       VM_BUG_ON(page_private(page) != SWP_CONTINUED);
-
-       do {
-               page = list_next_entry(page, lru);
-               map = kmap_local_page(page);
-               tmp_count = map[offset];
-               kunmap_local(map);
-
-               count += (tmp_count & ~COUNT_CONTINUED) * n;
-               n *= (SWAP_CONT_MAX + 1);
-       } while (tmp_count & COUNT_CONTINUED);
-out:
+       ci = swap_cluster_lock(si, swp_offset(entry));
+       swp_tb = __swap_table_get(ci, swp_cluster_offset(entry));
+       count = swp_tb_get_count(swp_tb);
+       if (count == SWP_TB_COUNT_MAX)
+               count = ci->extend_table[swp_cluster_offset(entry)];
        swap_cluster_unlock(ci);
        put_swap_device(si);
-       return count;
+
+       return count < 0 ? 0 : count;
 }

 static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
                                         swp_entry_t entry, int order)
 {
        struct swap_cluster_info *ci;
-       unsigned char *map = si->swap_map;
        unsigned int nr_pages = 1 << order;
        unsigned long roffset = swp_offset(entry);
        unsigned long offset = round_down(roffset, nr_pages);
+       unsigned int ci_off;
        int i;
        bool ret = false;

        ci = swap_cluster_lock(si, offset);
        if (nr_pages == 1) {
-               if (map[roffset])
+               ci_off = roffset % SWAPFILE_CLUSTER;
+               if (swp_tb_get_count(__swap_table_get(ci, ci_off)))
                        ret = true;
                goto unlock_out;
        }
        for (i = 0; i < nr_pages; i++) {
-               if (map[offset + i]) {
+               ci_off = (offset + i) % SWAPFILE_CLUSTER;
+               if (swp_tb_get_count(__swap_table_get(ci, ci_off))) {
                        ret = true;
                        break;
                }
@@ -2005,7 +2119,8 @@ void swap_free_hibernation_slot(swp_entry_t entry)
                return;

        ci = swap_cluster_lock(si, offset);
-       swap_put_entry_locked(si, ci, offset);
+       __swap_cluster_put_entry(ci, offset % SWAPFILE_CLUSTER);
+       __swap_cluster_free_entries(si, ci, offset % SWAPFILE_CLUSTER, 1);
        swap_cluster_unlock(ci);

        /* In theory readahead might add it to the swap cache by accident */
@@ -2231,13 +2346,10 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
                           unsigned int type)
 {
        pte_t *pte = NULL;
-       struct swap_info_struct *si;

-       si = swap_info[type];
        do {
                struct folio *folio;
-               unsigned long offset;
-               unsigned char swp_count;
+               unsigned long swp_tb;
                softleaf_t entry;
                int ret;
                pte_t ptent;
@@ -2256,7 +2368,6 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
                if (swp_type(entry) != type)
                        continue;

-               offset = swp_offset(entry);
                pte_unmap(pte);
                pte = NULL;

@@ -2273,8 +2384,9 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
                                &vmf);
                }
                if (!folio) {
-                       swp_count = READ_ONCE(si->swap_map[offset]);
-                       if (swp_count == 0 || swp_count == SWAP_MAP_BAD)
+                       swp_tb = swap_table_get(__swap_entry_to_cluster(entry),
+                                               swp_cluster_offset(entry));
+                       if (swp_tb_get_count(swp_tb) <= 0)
                                continue;
                        return -ENOMEM;
                }
@@ -2402,7 +2514,7 @@ static int unuse_mm(struct mm_struct *mm, unsigned int type)
 }

 /*
- * Scan swap_map from current position to next entry still in use.
+ * Scan swap table from current position to next entry still in use.
  * Return 0 if there are no inuse entries after prev till end of
  * the map.
  */
@@ -2411,7 +2523,6 @@ static unsigned int find_next_to_unuse(struct swap_info_struct *si,
 {
        unsigned int i;
        unsigned long swp_tb;
-       unsigned char count;

        /*
         * No need for swap_lock here: we're just looking
@@ -2420,12 +2531,9 @@ static unsigned int find_next_to_unuse(struct swap_info_struct *si,
         * allocations from this area (while holding swap_lock).
         */
        for (i = prev + 1; i < si->max; i++) {
-               count = READ_ONCE(si->swap_map[i]);
                swp_tb = swap_table_get(__swap_offset_to_cluster(si, i),
                                        i % SWAPFILE_CLUSTER);
-               if (count == SWAP_MAP_BAD)
-                       continue;
-               if (count || swp_tb_is_folio(swp_tb))
+               if (!swp_tb_is_null(swp_tb) && !swp_tb_is_bad(swp_tb))
                        break;
                if ((i % LATENCY_LIMIT) == 0)
                        cond_resched();
@@ -2785,7 +2893,6 @@ static void flush_percpu_swap_cluster(struct swap_info_struct *si)
 SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 {
        struct swap_info_struct *p = NULL;
-       unsigned char *swap_map;
        unsigned long *zeromap;
        struct swap_cluster_info *cluster_info;
        struct file *swap_file, *victim;
@@ -2868,8 +2975,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
        flush_percpu_swap_cluster(p);

        destroy_swap_extents(p, p->swap_file);
-       if (p->flags & SWP_CONTINUED)
-               free_swap_count_continuations(p);

        if (!(p->flags & SWP_SOLIDSTATE))
                atomic_dec(&nr_rotate_swap);
@@ -2881,8 +2986,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)

        swap_file = p->swap_file;
        p->swap_file = NULL;
-       swap_map = p->swap_map;
-       p->swap_map = NULL;
        zeromap = p->zeromap;
        p->zeromap = NULL;
        maxpages = p->max;
@@ -2896,7 +2999,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
        mutex_unlock(&swapon_mutex);
        kfree(p->global_cluster);
        p->global_cluster = NULL;
-       vfree(swap_map);
        kvfree(zeromap);
        free_swap_cluster_info(cluster_info, maxpages);
        /* Destroy swap account information */
@@ -3118,7 +3220,6 @@ static struct swap_info_struct *alloc_swap_info(void)
                kvfree(defer);
        }
        spin_lock_init(&p->lock);
-       spin_lock_init(&p->cont_lock);
        atomic_long_set(&p->inuse_pages, SWAP_USAGE_OFFLIST_BIT);
        init_completion(&p->comp);

@@ -3245,19 +3346,6 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
        return maxpages;
 }

-static int setup_swap_map(struct swap_info_struct *si,
-                         union swap_header *swap_header,
-                         unsigned long maxpages)
-{
-       unsigned char *swap_map;
-
-       swap_map = vzalloc(maxpages);
-       si->swap_map = swap_map;
-       if (!swap_map)
-               return -ENOMEM;
-       return 0;
-}
-
 static int setup_swap_clusters_info(struct swap_info_struct *si,
                                    union swap_header *swap_header,
                                    unsigned long maxpages)
@@ -3449,11 +3537,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)

        maxpages = si->max;

-       /* Setup the swap map and apply bad block */
-       error = setup_swap_map(si, swap_header, maxpages);
-       if (error)
-               goto bad_swap_unlock_inode;
-
        /* Set up the swap cluster info */
        error = setup_swap_clusters_info(si, swap_header, maxpages);
        if (error)
@@ -3574,8 +3657,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
        inode = NULL;
        destroy_swap_extents(si, swap_file);
        swap_cgroup_swapoff(si->type);
-       vfree(si->swap_map);
-       si->swap_map = NULL;
        free_swap_cluster_info(si->cluster_info, si->max);
        si->cluster_info = NULL;
        kvfree(si->zeromap);
@@ -3618,82 +3699,6 @@ void si_swapinfo(struct sysinfo *val)
        spin_unlock(&swap_lock);
 }

-/*
- * Verify that nr swap entries are valid and increment their swap map counts.
- *
- * Returns error code in following case.
- * - success -> 0
- * - swp_entry is invalid -> EINVAL
- * - swap-mapped reference is requested but the entry is not used. -> ENOENT
- * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
- */
-static int swap_dup_entries(struct swap_info_struct *si,
-                           struct swap_cluster_info *ci,
-                           unsigned long offset,
-                           unsigned char usage, int nr)
-{
-       int i;
-       unsigned char count;
-
-       for (i = 0; i < nr; i++) {
-               count = si->swap_map[offset + i];
-               /*
-                * For swapin out, allocator never allocates bad slots. for
-                * swapin, readahead is guarded by swap_entry_swapped.
-                */
-               if (WARN_ON(count == SWAP_MAP_BAD))
-                       return -ENOENT;
-               /*
-                * Swap count duplication must be guarded by either swap cache folio (from
-                * folio_dup_swap) or external lock of existing entry (from swap_dup_entry_direct).
-                */
-               if (WARN_ON(!count &&
-                           !swp_tb_is_folio(__swap_table_get(ci, offset % SWAPFILE_CLUSTER))))
-                       return -ENOENT;
-               if (WARN_ON((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX))
-                       return -EINVAL;
-       }
-
-       for (i = 0; i < nr; i++) {
-               count = si->swap_map[offset + i];
-               if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
-                       count += usage;
-               else if (swap_count_continued(si, offset + i, count))
-                       count = COUNT_CONTINUED;
-               else {
-                       /*
-                        * Don't need to rollback changes, because if
-                        * usage == 1, there must be nr == 1.
-                        */
-                       return -ENOMEM;
-               }
-
-               WRITE_ONCE(si->swap_map[offset + i], count);
-       }
-
-       return 0;
-}
-
-static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
-{
-       int err;
-       struct swap_info_struct *si;
-       struct swap_cluster_info *ci;
-       unsigned long offset = swp_offset(entry);
-
-       si = swap_entry_to_info(entry);
-       if (WARN_ON_ONCE(!si)) {
-               pr_err("%s%08lx\n", Bad_file, entry.val);
-               return -EINVAL;
-       }
-
-       VM_WARN_ON(nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
-       ci = swap_cluster_lock(si, offset);
-       err = swap_dup_entries(si, ci, offset, usage, nr);
-       swap_cluster_unlock(ci);
-       return err;
-}
-
 /*
  * swap_dup_entry_direct() - Increase reference count of a swap entry by one.
  * @entry: first swap entry from which we want to increase the refcount.
@@ -3707,233 +3712,16 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
  * owner. e.g., locking the PTL of a PTE containing the entry being increased.
  */
 int swap_dup_entry_direct(swp_entry_t entry)
-{
-       int err = 0;
-       while (!err && __swap_duplicate(entry, 1, 1) == -ENOMEM)
-               err = add_swap_count_continuation(entry, GFP_ATOMIC);
-       return err;
-}
-
-/*
- * add_swap_count_continuation - called when a swap count is duplicated
- * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
- * page of the original vmalloc'ed swap_map, to hold the continuation count
- * (for that entry and for its neighbouring PAGE_SIZE swap entries).  Called
- * again when count is duplicated beyond SWAP_MAP_MAX * SWAP_CONT_MAX, etc.
- *
- * These continuation pages are seldom referenced: the common paths all work
- * on the original swap_map, only referring to a continuation page when the
- * low "digit" of a count is incremented or decremented through SWAP_MAP_MAX.
- *
- * add_swap_count_continuation(, GFP_ATOMIC) can be called while holding
- * page table locks; if it fails, add_swap_count_continuation(, GFP_KERNEL)
- * can be called after dropping locks.
- */
-int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
 {
        struct swap_info_struct *si;
-       struct swap_cluster_info *ci;
-       struct page *head;
-       struct page *page;
-       struct page *list_page;
-       pgoff_t offset;
-       unsigned char count;
-       int ret = 0;
-
-       /*
-        * When debugging, it's easier to use __GFP_ZERO here; but it's better
-        * for latency not to zero a page while GFP_ATOMIC and holding locks.
-        */
-       page = alloc_page(gfp_mask | __GFP_HIGHMEM);

-       si = get_swap_device(entry);
-       if (!si) {
-               /*
-                * An acceptable race has occurred since the failing
-                * __swap_duplicate(): the swap device may be swapoff
-                */
-               goto outer;
-       }
-
-       offset = swp_offset(entry);
-
-       ci = swap_cluster_lock(si, offset);
-
-       count = si->swap_map[offset];
-
-       if ((count & ~COUNT_CONTINUED) != SWAP_MAP_MAX) {
-               /*
-                * The higher the swap count, the more likely it is that tasks
-                * will race to add swap count continuation: we need to avoid
-                * over-provisioning.
-                */
-               goto out;
-       }
-
-       if (!page) {
-               ret = -ENOMEM;
-               goto out;
-       }
-
-       head = vmalloc_to_page(si->swap_map + offset);
-       offset &= ~PAGE_MASK;
-
-       spin_lock(&si->cont_lock);
-       /*
-        * Page allocation does not initialize the page's lru field,
-        * but it does always reset its private field.
-        */
-       if (!page_private(head)) {
-               BUG_ON(count & COUNT_CONTINUED);
-               INIT_LIST_HEAD(&head->lru);
-               set_page_private(head, SWP_CONTINUED);
-               si->flags |= SWP_CONTINUED;
-       }
-
-       list_for_each_entry(list_page, &head->lru, lru) {
-               unsigned char *map;
-
-               /*
-                * If the previous map said no continuation, but we've found
-                * a continuation page, free our allocation and use this one.
-                */
-               if (!(count & COUNT_CONTINUED))
-                       goto out_unlock_cont;
-
-               map = kmap_local_page(list_page) + offset;
-               count = *map;
-               kunmap_local(map);
-
-               /*
-                * If this continuation count now has some space in it,
-                * free our allocation and use this one.
-                */
-               if ((count & ~COUNT_CONTINUED) != SWAP_CONT_MAX)
-                       goto out_unlock_cont;
-       }
-
-       list_add_tail(&page->lru, &head->lru);
-       page = NULL;                    /* now it's attached, don't free it */
-out_unlock_cont:
-       spin_unlock(&si->cont_lock);
-out:
-       swap_cluster_unlock(ci);
-       put_swap_device(si);
-outer:
-       if (page)
-               __free_page(page);
-       return ret;
-}
-
-/*
- * swap_count_continued - when the original swap_map count is incremented
- * from SWAP_MAP_MAX, check if there is already a continuation page to carry
- * into, carry if so, or else fail until a new continuation page is allocated;
- * when the original swap_map count is decremented from 0 with continuation,
- * borrow from the continuation and report whether it still holds more.
- * Called while __swap_duplicate() or caller of swap_put_entry_locked()
- * holds cluster lock.
- */
-static bool swap_count_continued(struct swap_info_struct *si,
-                                pgoff_t offset, unsigned char count)
-{
-       struct page *head;
-       struct page *page;
-       unsigned char *map;
-       bool ret;
-
-       head = vmalloc_to_page(si->swap_map + offset);
-       if (page_private(head) != SWP_CONTINUED) {
-               BUG_ON(count & COUNT_CONTINUED);
-               return false;           /* need to add count continuation */
-       }
-
-       spin_lock(&si->cont_lock);
-       offset &= ~PAGE_MASK;
-       page = list_next_entry(head, lru);
-       map = kmap_local_page(page) + offset;
-
-       if (count == SWAP_MAP_MAX)      /* initial increment from swap_map */
-               goto init_map;          /* jump over SWAP_CONT_MAX checks */
-
-       if (count == (SWAP_MAP_MAX | COUNT_CONTINUED)) { /* incrementing */
-               /*
-                * Think of how you add 1 to 999
-                */
-               while (*map == (SWAP_CONT_MAX | COUNT_CONTINUED)) {
-                       kunmap_local(map);
-                       page = list_next_entry(page, lru);
-                       BUG_ON(page == head);
-                       map = kmap_local_page(page) + offset;
-               }
-               if (*map == SWAP_CONT_MAX) {
-                       kunmap_local(map);
-                       page = list_next_entry(page, lru);
-                       if (page == head) {
-                               ret = false;    /* add count continuation */
-                               goto out;
-                       }
-                       map = kmap_local_page(page) + offset;
-init_map:              *map = 0;               /* we didn't zero the page */
-               }
-               *map += 1;
-               kunmap_local(map);
-               while ((page = list_prev_entry(page, lru)) != head) {
-                       map = kmap_local_page(page) + offset;
-                       *map = COUNT_CONTINUED;
-                       kunmap_local(map);
-               }
-               ret = true;                     /* incremented */
-
-       } else {                                /* decrementing */
-               /*
-                * Think of how you subtract 1 from 1000
-                */
-               BUG_ON(count != COUNT_CONTINUED);
-               while (*map == COUNT_CONTINUED) {
-                       kunmap_local(map);
-                       page = list_next_entry(page, lru);
-                       BUG_ON(page == head);
-                       map = kmap_local_page(page) + offset;
-               }
-               BUG_ON(*map == 0);
-               *map -= 1;
-               if (*map == 0)
-                       count = 0;
-               kunmap_local(map);
-               while ((page = list_prev_entry(page, lru)) != head) {
-                       map = kmap_local_page(page) + offset;
-                       *map = SWAP_CONT_MAX | count;
-                       count = COUNT_CONTINUED;
-                       kunmap_local(map);
-               }
-               ret = count == COUNT_CONTINUED;
+       si = swap_entry_to_info(entry);
+       if (WARN_ON_ONCE(!si)) {
+               pr_err("%s%08lx\n", Bad_file, entry.val);
+               return -EINVAL;
        }
-out:
-       spin_unlock(&si->cont_lock);
-       return ret;
-}

-/*
- * free_swap_count_continuations - swapoff free all the continuation pages
- * appended to the swap_map, after swap_map is quiesced, before vfree'ing it.
- */
-static void free_swap_count_continuations(struct swap_info_struct *si)
-{
-       pgoff_t offset;
-
-       for (offset = 0; offset < si->max; offset += PAGE_SIZE) {
-               struct page *head;
-               head = vmalloc_to_page(si->swap_map + offset);
-               if (page_private(head)) {
-                       struct page *page, *next;
-
-                       list_for_each_entry_safe(page, next, &head->lru, lru) {
-                               list_del(&page->lru);
-                               __free_page(page);
-                       }
-               }
-       }
+       return swap_dup_entries_cluster(si, swp_offset(entry), 1);
 }

 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-- 
2.52.0

From nobody Sat Feb 7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:34 +0800
Subject: [PATCH v2 10/12] mm, swap: no need to truncate the scan border
Message-Id: <20260128-swap-table-p3-v2-10-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To:
<20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park, linux-kernel@vger.kernel.org, Chris Li, Kairui Song

From: Kairui Song

swap_map was sized exactly to the device, so the last cluster might not
be fully covered; the allocator therefore had to check the scan border
to avoid an out-of-bounds access. The swap table instead has a
fixed-size table for each cluster, and the slots beyond the device size
are marked as bad slots. The allocator can simply scan all slots as
usual, and any bad slots will be skipped.
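The idea can be sketched in isolation: round the cluster count up so the
last cluster is fully allocated, treat slots past the device end as bad,
and let the scanner walk whole clusters with no border check. The names
`nr_clusters`, `slot_is_bad`, and `scan_cluster` below are illustrative
stand-ins, not the kernel's actual swap-table helpers:

```c
#include <stdbool.h>

/* Illustrative value; SWAPFILE_CLUSTER is 512 in the kernel. */
#define SWAPFILE_CLUSTER 512UL

/* Number of per-cluster tables needed to cover `max` slots: the last
 * cluster is allocated in full even if the device ends inside it. */
static unsigned long nr_clusters(unsigned long max)
{
        return (max + SWAPFILE_CLUSTER - 1) / SWAPFILE_CLUSTER;
}

/* Slots past the device end live in the last cluster's table and are
 * simply marked bad at setup time. */
static bool slot_is_bad(unsigned long offset, unsigned long max)
{
        return offset >= max;
}

/* Scan one full cluster: the end is start + SWAPFILE_CLUSTER, with no
 * min(..., max) truncation; bad slots are skipped instead. Returns the
 * number of usable slots seen. */
static unsigned int scan_cluster(unsigned long start, unsigned long max)
{
        unsigned long end = start + SWAPFILE_CLUSTER;
        unsigned int usable = 0;

        for (unsigned long off = start; off < end; off++)
                if (!slot_is_bad(off, max))
                        usable++;
        return usable;
}
```

For a hypothetical 1000-slot device, two full cluster tables are
allocated and the last 24 slots of the second cluster scan as bad.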
Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swap.h     | 2 +-
 mm/swapfile.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 751430e2d2a5..9fc5fecdcfdf 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -85,7 +85,7 @@ static inline struct swap_cluster_info *__swap_offset_to_cluster(
                struct swap_info_struct *si, pgoff_t offset)
 {
        VM_WARN_ON_ONCE(percpu_ref_is_zero(&si->users)); /* race with swapoff */
-       VM_WARN_ON_ONCE(offset >= si->max);
+       VM_WARN_ON_ONCE(offset >= roundup(si->max, SWAPFILE_CLUSTER));
        return &si->cluster_info[offset / SWAPFILE_CLUSTER];
 }

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 45579ace27ba..a7fc8837eb74 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -945,8 +945,8 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si,
 {
        unsigned int next = SWAP_ENTRY_INVALID, found = SWAP_ENTRY_INVALID;
        unsigned long start = ALIGN_DOWN(offset, SWAPFILE_CLUSTER);
-       unsigned long end = min(start + SWAPFILE_CLUSTER, si->max);
        unsigned int order = likely(folio) ? folio_order(folio) : 0;
+       unsigned long end = start + SWAPFILE_CLUSTER;
        unsigned int nr_pages = 1 << order;
        bool need_reclaim, ret, usable;

-- 
2.52.0

From nobody Sat Feb 7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:35 +0800
Subject: [PATCH v2 11/12] mm, swap: simplify checking if a folio is swapped
Message-Id: <20260128-swap-table-p3-v2-11-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park, linux-kernel@vger.kernel.org, Chris Li, Kairui Song

From: Kairui Song

Clean up and simplify how we check if a folio is swapped. The helper
already requires the folio to be in swap cache and locked.
That's enough to pin the swap cluster from being freed, so there is no need to lock anything else to avoid UAF. And besides, we have cleaned up and defined the swap operation to be mostly folio based, and now the only place a folio will have any of its swap slots' count increased from 0 to 1 is folio_dup_swap, which also requires the folio lock. So as we are holding the folio lock here, a folio can't change its swap status from not swapped (all swap slots have a count of 0) to swapped (any slot has a swap count larger than 0). So there won't be any false negatives of this helper if we simply depend on the folio lock to stabilize the cluster. We are only using this helper to determine if we can and should release the swap cache. So false positives are completely harmless, and also already exist before. Depending on the timing, previously, it's also possible that a racing thread releases the swap count right after releasing the ci lock and before this helper returns. In any case, the worst that could happen is we leave a clean swap cache. It will still be reclaimed when under pressure just fine. So, in conclusion, we can simplify and make the check much simpler and lockless. Also, rename it to folio_maybe_swapped to reflect the design. Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swap.h | 5 ++-- mm/swapfile.c | 82 ++++++++++++++++++++++++++++++++-----------------------= ---- 2 files changed, 48 insertions(+), 39 deletions(-) diff --git a/mm/swap.h b/mm/swap.h index 9fc5fecdcfdf..3ee761ee8348 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -195,12 +195,13 @@ extern int swap_retry_table_alloc(swp_entry_t entry, = gfp_t gfp); * * folio_alloc_swap(): the entry point for a folio to be swapped * out. It allocates swap slots and pins the slots with swap cache. - * The slots start with a swap count of zero. + * The slots start with a swap count of zero. The slots are pinned + * by swap cache reference which doesn't contribute to swap count. 
* * folio_dup_swap(): increases the swap count of a folio, usually * during it gets unmapped and a swap entry is installed to replace * it (e.g., swap entry in page table). A swap slot with swap - * count =3D=3D 0 should only be increasd by this helper. + * count =3D=3D 0 can only be increased by this helper. * * folio_put_swap(): does the opposite thing of folio_dup_swap(). */ diff --git a/mm/swapfile.c b/mm/swapfile.c index a7fc8837eb74..f5474ddbba36 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1743,7 +1743,11 @@ int folio_alloc_swap(struct folio *folio) * @subpage: if not NULL, only increase the swap count of this subpage. * * Typically called when the folio is unmapped and have its swap entry to - * take its palce. + * take its place: Swap entries allocated to a folio has count =3D=3D 0 an= d pinned + * by swap cache. The swap cache pin doesn't increase the swap count. This + * helper sets the initial count =3D=3D 1 and increases the count as the f= olio is + * unmapped and swap entries referencing the slots are generated to replace + * the folio. * * Context: Caller must ensure the folio is locked and in the swap cache. * NOTE: The caller also has to ensure there is no raced call to @@ -1944,49 +1948,44 @@ int swp_swapcount(swp_entry_t entry) return count < 0 ? 0 : count; } =20 -static bool swap_page_trans_huge_swapped(struct swap_info_struct *si, - swp_entry_t entry, int order) +/* + * folio_maybe_swapped - Test if a folio covers any swap slot with count >= 0. + * + * Check if a folio is swapped. Holding the folio lock ensures the folio w= on't + * go from not-swapped to swapped because the initial swap count increment= can + * only be done by folio_dup_swap, which also locks the folio. But a concu= rrent + * decrease of swap count is possible through swap_put_entries_direct, so = this + * may return a false positive. + * + * Context: Caller must ensure the folio is locked and in the swap cache. 
+ */ +static bool folio_maybe_swapped(struct folio *folio) { + swp_entry_t entry =3D folio->swap; struct swap_cluster_info *ci; - unsigned int nr_pages =3D 1 << order; - unsigned long roffset =3D swp_offset(entry); - unsigned long offset =3D round_down(roffset, nr_pages); - unsigned int ci_off; - int i; + unsigned int ci_off, ci_end; bool ret =3D false; =20 - ci =3D swap_cluster_lock(si, offset); - if (nr_pages =3D=3D 1) { - ci_off =3D roffset % SWAPFILE_CLUSTER; - if (swp_tb_get_count(__swap_table_get(ci, ci_off))) - ret =3D true; - goto unlock_out; - } - for (i =3D 0; i < nr_pages; i++) { - ci_off =3D (offset + i) % SWAPFILE_CLUSTER; - if (swp_tb_get_count(__swap_table_get(ci, ci_off))) { - ret =3D true; - break; - } - } -unlock_out: - swap_cluster_unlock(ci); - return ret; -} - -static bool folio_swapped(struct folio *folio) -{ - swp_entry_t entry =3D folio->swap; - struct swap_info_struct *si; - VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio); VM_WARN_ON_ONCE_FOLIO(!folio_test_swapcache(folio), folio); =20 - si =3D __swap_entry_to_info(entry); - if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!folio_test_large(folio))) - return swap_entry_swapped(si, entry); + ci =3D __swap_entry_to_cluster(entry); + ci_off =3D swp_cluster_offset(entry); + ci_end =3D ci_off + folio_nr_pages(folio); + /* + * Extra locking not needed, folio lock ensures its swap entries + * won't be released, the backing data won't be gone either. 
+ */ + rcu_read_lock(); + do { + if (__swp_tb_get_count(__swap_table_get(ci, ci_off))) { + ret =3D true; + break; + } + } while (++ci_off < ci_end); + rcu_read_unlock(); =20 - return swap_page_trans_huge_swapped(si, entry, folio_order(folio)); + return ret; } =20 static bool folio_swapcache_freeable(struct folio *folio) @@ -2032,7 +2031,7 @@ bool folio_free_swap(struct folio *folio) { if (!folio_swapcache_freeable(folio)) return false; - if (folio_swapped(folio)) + if (folio_maybe_swapped(folio)) return false; =20 swap_cache_del_folio(folio); @@ -3710,6 +3709,8 @@ void si_swapinfo(struct sysinfo *val) * * Context: Caller must ensure there is no race condition on the reference * owner. e.g., locking the PTL of a PTE containing the entry being increa= sed. + * Also the swap entry must have a count >=3D 1. Otherwise folio_dup_swap = should + * be used. */ int swap_dup_entry_direct(swp_entry_t entry) { @@ -3721,6 +3722,13 @@ int swap_dup_entry_direct(swp_entry_t entry) return -EINVAL; } =20 + /* + * The caller must be increasing the swap count from a direct + * reference of the swap slot (e.g. a swap entry in page table). + * So the swap count must be >=3D 1. 
+	 */
+	VM_WARN_ON_ONCE(!swap_entry_swapped(si, entry));
+
 	return swap_dup_entries_cluster(si, swp_offset(entry), 1);
 }
 
-- 
2.52.0

From nobody Sat Feb 7 08:13:38 2026
From: Kairui Song
Date: Wed, 28 Jan 2026 17:28:36 +0800
Subject: [PATCH v2 12/12] mm, swap: no need to clear the shadow explicitly
Message-Id: <20260128-swap-table-p3-v2-12-fe0b67ef0215@tencent.com>
References: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
In-Reply-To: <20260128-swap-table-p3-v2-0-fe0b67ef0215@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song

From: Kairui Song

Since we no longer bypass the swap cache, every swap-in will clear the
swap shadow by inserting the folio into the swap table.
The only place that may still seem to need explicit shadow freeing is
when swap slots are freed directly without a folio
(swap_put_entries_direct). But with the swap table, that is not needed
either: freeing a slot sets the table entry to NULL, which erases the
shadow just fine.

So just delete all explicit shadow clearing; it's no longer needed.
Also, rearrange the freeing.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swap.h       |  1 -
 mm/swap_state.c | 21 ---------------------
 mm/swapfile.c   |  2 --
 3 files changed, 24 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 3ee761ee8348..386a289ef8e7 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -290,7 +290,6 @@ void __swap_cache_del_folio(struct swap_cluster_info *ci, struct folio *folio,
 			    swp_entry_t entry, void *shadow);
 void __swap_cache_replace_folio(struct swap_cluster_info *ci,
 				struct folio *old, struct folio *new);
-void __swap_cache_clear_shadow(swp_entry_t entry, int nr_ents);
 
 void show_swap_cache_info(void);
 void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index c808f0948b10..20c4c2414db3 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -350,27 +350,6 @@ void __swap_cache_replace_folio(struct swap_cluster_info *ci,
 	}
 }
 
-/**
- * __swap_cache_clear_shadow - Clears a set of shadows in the swap cache.
- * @entry: The starting index entry.
- * @nr_ents: How many slots need to be cleared.
- *
- * Context: Caller must ensure the range is valid, all in one single cluster,
- * not occupied by any folio, and lock the cluster.
- */
-void __swap_cache_clear_shadow(swp_entry_t entry, int nr_ents)
-{
-	struct swap_cluster_info *ci = __swap_entry_to_cluster(entry);
-	unsigned int ci_off = swp_cluster_offset(entry), ci_end;
-	unsigned long old;
-
-	ci_end = ci_off + nr_ents;
-	do {
-		old = __swap_table_xchg(ci, ci_off, null_to_swp_tb());
-		WARN_ON_ONCE(swp_tb_is_folio(old) || swp_tb_get_count(old));
-	} while (++ci_off < ci_end);
-}
-
 /*
  * If we are the only user, then try to free up the swap cache.
  *
diff --git a/mm/swapfile.c b/mm/swapfile.c
index f5474ddbba36..d77c00c4b511 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1287,7 +1287,6 @@ static void swap_range_alloc(struct swap_info_struct *si,
 static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 			    unsigned int nr_entries)
 {
-	unsigned long begin = offset;
 	unsigned long end = offset + nr_entries - 1;
 	void (*swap_slot_free_notify)(struct block_device *, unsigned long);
 	unsigned int i;
@@ -1312,7 +1311,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 		swap_slot_free_notify(si->bdev, offset);
 		offset++;
 	}
-	__swap_cache_clear_shadow(swp_entry(si->type, begin), nr_entries);
 
 	/*
 	 * Make sure that try_to_unuse() observes si->inuse_pages reaching 0
-- 
2.52.0