From nobody Sun Feb  8 10:21:58 2026
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:24 +0800
Subject: [PATCH 01/12] mm, swap: protect si->swap_file properly and use as a mount indicator
Message-Id: <20260126-swap-table-p3-v1-1-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

/proc/swaps uses si->swap_map as the indicator to check whether a swap
device is mounted. swap_map will be removed soon, so change it to use
si->swap_file instead, because:

- si->swap_file is exactly the only dynamic content that /proc/swaps is
  interested in.
  Previously, it was checking si->swap_map just to ensure si->swap_file
  is available. si->swap_map is set under mutex protection, and after
  si->swap_file is set, so having si->swap_map set guarantees that
  si->swap_file is set.

- Checking si->flags doesn't work here. SWP_WRITEOK is cleared during
  swapoff, but /proc/swaps is supposed to show the device during swapoff
  too, to report the swapoff progress. And SWP_USED is set even if the
  device hasn't been properly set up. We could add another flag, but it
  is easier to just check si->swap_file directly.

So protect the setting of si->swap_file with the mutex, and set
si->swap_file only when the swap device is truly enabled. /proc/swaps is
only interested in si->swap_file plus a few static fields; only
si->swap_file needs protection, and reading the other static fields is
always fine.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swapfile.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 7b055f15d705..521f7713a7c3 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -110,6 +110,7 @@ struct swap_info_struct *swap_info[MAX_SWAPFILES];
 
 static struct kmem_cache *swap_table_cachep;
 
+/* Protects si->swap_file for /proc/swaps usage */
 static DEFINE_MUTEX(swapon_mutex);
 
 static DECLARE_WAIT_QUEUE_HEAD(proc_poll_wait);
@@ -2521,7 +2522,8 @@ static void drain_mmlist(void)
 /*
  * Free all of a swapdev's extent information
  */
-static void destroy_swap_extents(struct swap_info_struct *sis)
+static void destroy_swap_extents(struct swap_info_struct *sis,
+                                 struct file *swap_file)
 {
         while (!RB_EMPTY_ROOT(&sis->swap_extent_root)) {
                 struct rb_node *rb = sis->swap_extent_root.rb_node;
@@ -2532,7 +2534,6 @@ static void destroy_swap_extents(struct swap_info_struct *sis)
         }
 
         if (sis->flags & SWP_ACTIVATED) {
-                struct file *swap_file = sis->swap_file;
                 struct address_space *mapping = swap_file->f_mapping;
 
                 sis->flags &= ~SWP_ACTIVATED;
@@ -2615,9 +2616,9 @@ EXPORT_SYMBOL_GPL(add_swap_extent);
  * Typically it is in the 1-4 megabyte range. So we can have hundreds of
  * extents in the rbtree. - akpm.
  */
-static int setup_swap_extents(struct swap_info_struct *sis, sector_t *span)
+static int setup_swap_extents(struct swap_info_struct *sis,
+                              struct file *swap_file, sector_t *span)
 {
-        struct file *swap_file = sis->swap_file;
         struct address_space *mapping = swap_file->f_mapping;
         struct inode *inode = mapping->host;
         int ret;
@@ -2635,7 +2636,7 @@ static int setup_swap_extents(struct swap_info_struct *sis, sector_t *span)
         sis->flags |= SWP_ACTIVATED;
         if ((sis->flags & SWP_FS_OPS) && sio_pool_init() != 0) {
-                destroy_swap_extents(sis);
+                destroy_swap_extents(sis, swap_file);
                 return -ENOMEM;
         }
         return ret;
@@ -2851,7 +2852,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
         flush_work(&p->reclaim_work);
         flush_percpu_swap_cluster(p);
 
-        destroy_swap_extents(p);
+        destroy_swap_extents(p, p->swap_file);
         if (p->flags & SWP_CONTINUED)
                 free_swap_count_continuations(p);
 
@@ -2941,7 +2942,7 @@ static void *swap_start(struct seq_file *swap, loff_t *pos)
                 return SEQ_START_TOKEN;
 
         for (type = 0; (si = swap_type_to_info(type)); type++) {
-                if (!(si->flags & SWP_USED) || !si->swap_map)
+                if (!(si->swap_file))
                         continue;
                 if (!--l)
                         return si;
@@ -2962,7 +2963,7 @@ static void *swap_next(struct seq_file *swap, void *v, loff_t *pos)
 
         ++(*pos);
         for (; (si = swap_type_to_info(type)); type++) {
-                if (!(si->flags & SWP_USED) || !si->swap_map)
+                if (!(si->swap_file))
                         continue;
                 return si;
         }
@@ -3379,7 +3380,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
                 goto bad_swap;
         }
 
-        si->swap_file = swap_file;
         mapping = swap_file->f_mapping;
         dentry = swap_file->f_path.dentry;
         inode = mapping->host;
@@ -3429,7 +3429,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 
         si->max = maxpages;
         si->pages = maxpages - 1;
-        nr_extents = setup_swap_extents(si, &span);
+        nr_extents = setup_swap_extents(si, swap_file, &span);
         if (nr_extents < 0) {
                 error = nr_extents;
                 goto bad_swap_unlock_inode;
@@ -3538,6 +3538,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
                 prio = DEF_SWAP_PRIO;
         if (swap_flags & SWAP_FLAG_PREFER)
                 prio = swap_flags & SWAP_FLAG_PRIO_MASK;
+
+        si->swap_file = swap_file;
         enable_swap_info(si, prio, swap_map, cluster_info, zeromap);
 
         pr_info("Adding %uk swap on %s.  Priority:%d extents:%d across:%lluk %s%s%s%s\n",
@@ -3562,10 +3564,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
         kfree(si->global_cluster);
         si->global_cluster = NULL;
         inode = NULL;
-        destroy_swap_extents(si);
+        destroy_swap_extents(si, swap_file);
         swap_cgroup_swapoff(si->type);
         spin_lock(&swap_lock);
-        si->swap_file = NULL;
         si->flags = 0;
         spin_unlock(&swap_lock);
         vfree(swap_map);
-- 
2.52.0
From nobody Sun Feb  8 10:21:58 2026
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:25 +0800
Subject: [PATCH 02/12] mm, swap: clean up swapon process and locking
Message-Id: <20260126-swap-table-p3-v1-2-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

Slightly clean up the swapon process. Add comments about what swap_lock
protects, introduce and rename helpers that wrap the swap_map and
cluster_info setup, and do that setup outside of swap_lock.

The lock protection is not needed for the swap_map and cluster_info
setup, because all swap users must either hold the percpu ref or hold a
stably allocated swap entry (e.g., by locking a folio in the swap cache)
before accessing them. So before the swap device is exposed by
enable_swap_info, nothing can use the swap device's map or cluster info.
We are therefore safe to allocate and set up the swap data freely first,
then expose the swap device and set the SWP_WRITEOK flag.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swapfile.c | 87 ++++++++++++++++++++++++++++++++-------------------
 1 file changed, 48 insertions(+), 39 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 521f7713a7c3..53ce222c3aba 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -65,6 +65,13 @@ static void move_cluster(struct swap_info_struct *si,
                          struct swap_cluster_info *ci, struct list_head *list,
                          enum swap_cluster_flags new_flags);
 
+/*
+ * Protects the swap_info array, and the SWP_USED flag. swap_info contains
+ * lazily allocated & freed swap device info structs, and SWP_USED indicates
+ * which device is used; ~SWP_USED devices can be reused.
+ *
+ * Also protects swap_active_head, total_swap_pages, and the SWP_WRITEOK flag.
+ */
 static DEFINE_SPINLOCK(swap_lock);
 static unsigned int nr_swapfiles;
 atomic_long_t nr_swap_pages;
@@ -2646,8 +2653,6 @@ static int setup_swap_extents(struct swap_info_struct *sis,
 }
 
 static void setup_swap_info(struct swap_info_struct *si, int prio,
-                            unsigned char *swap_map,
-                            struct swap_cluster_info *cluster_info,
                             unsigned long *zeromap)
 {
         si->prio = prio;
@@ -2657,8 +2662,6 @@ static void setup_swap_info(struct swap_info_struct *si, int prio,
          */
         si->list.prio = -si->prio;
         si->avail_list.prio = -si->prio;
-        si->swap_map = swap_map;
-        si->cluster_info = cluster_info;
         si->zeromap = zeromap;
 }
 
@@ -2676,13 +2679,11 @@ static void _enable_swap_info(struct swap_info_struct *si)
 }
 
 static void enable_swap_info(struct swap_info_struct *si, int prio,
-                             unsigned char *swap_map,
-                             struct swap_cluster_info *cluster_info,
-                             unsigned long *zeromap)
+                             unsigned long *zeromap)
 {
         spin_lock(&swap_lock);
         spin_lock(&si->lock);
-        setup_swap_info(si, prio, swap_map, cluster_info, zeromap);
+        setup_swap_info(si, prio, zeromap);
         spin_unlock(&si->lock);
         spin_unlock(&swap_lock);
         /*
@@ -2700,7 +2701,7 @@ static void reinsert_swap_info(struct swap_info_struct *si)
 {
         spin_lock(&swap_lock);
         spin_lock(&si->lock);
-        setup_swap_info(si, si->prio, si->swap_map, si->cluster_info, si->zeromap);
+        setup_swap_info(si, si->prio, si->zeromap);
         _enable_swap_info(si);
         spin_unlock(&si->lock);
         spin_unlock(&swap_lock);
@@ -2724,8 +2725,8 @@ static void wait_for_allocation(struct swap_info_struct *si)
         }
 }
 
-static void free_cluster_info(struct swap_cluster_info *cluster_info,
-                              unsigned long maxpages)
+static void free_swap_cluster_info(struct swap_cluster_info *cluster_info,
+                                   unsigned long maxpages)
 {
         struct swap_cluster_info *ci;
         int i, nr_clusters = DIV_ROUND_UP(maxpages, SWAPFILE_CLUSTER);
@@ -2883,7 +2884,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
         p->global_cluster = NULL;
         vfree(swap_map);
         kvfree(zeromap);
-        free_cluster_info(cluster_info, maxpages);
+        free_swap_cluster_info(cluster_info, maxpages);
         /* Destroy swap account information */
         swap_cgroup_swapoff(p->type);
 
@@ -3232,10 +3233,15 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
 
 static int setup_swap_map(struct swap_info_struct *si,
                           union swap_header *swap_header,
-                          unsigned char *swap_map,
                           unsigned long maxpages)
 {
         unsigned long i;
+        unsigned char *swap_map;
+
+        swap_map = vzalloc(maxpages);
+        si->swap_map = swap_map;
+        if (!swap_map)
+                return -ENOMEM;
 
         swap_map[0] = SWAP_MAP_BAD; /* omit header page */
         for (i = 0; i < swap_header->info.nr_badpages; i++) {
@@ -3256,9 +3262,9 @@ static int setup_swap_map(struct swap_info_struct *si,
         return 0;
 }
 
-static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
-                                                union swap_header *swap_header,
-                                                unsigned long maxpages)
+static int setup_swap_clusters_info(struct swap_info_struct *si,
+                                    union swap_header *swap_header,
+                                    unsigned long maxpages)
 {
         unsigned long nr_clusters = DIV_ROUND_UP(maxpages, SWAPFILE_CLUSTER);
         struct swap_cluster_info *cluster_info;
@@ -3328,10 +3334,11 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
                 }
         }
 
-        return cluster_info;
+        si->cluster_info = cluster_info;
+        return 0;
 err:
-        free_cluster_info(cluster_info, maxpages);
-        return ERR_PTR(err);
+        free_swap_cluster_info(cluster_info, maxpages);
+        return err;
 }
 
 SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
@@ -3347,9 +3354,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
         int nr_extents;
         sector_t span;
         unsigned long maxpages;
-        unsigned char *swap_map = NULL;
         unsigned long *zeromap = NULL;
-        struct swap_cluster_info *cluster_info = NULL;
         struct folio *folio = NULL;
         struct inode *inode = NULL;
         bool inced_nr_rotate_swap = false;
@@ -3360,6 +3365,11 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
         if (!capable(CAP_SYS_ADMIN))
                 return -EPERM;
 
+        /*
+         * Allocate or reuse an existing !SWP_USED swap_info. The returned
+         * si will stay in a dying status, so nothing will access its content
+         * until enable_swap_info resurrects its percpu ref and exposes it.
+         */
         si = alloc_swap_info();
         if (IS_ERR(si))
                 return PTR_ERR(si);
@@ -3442,18 +3452,17 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 
         maxpages = si->max;
 
-        /* OK, set up the swap map and apply the bad block list */
-        swap_map = vzalloc(maxpages);
-        if (!swap_map) {
-                error = -ENOMEM;
+        /* Set up the swap map and apply the bad block list */
+        error = setup_swap_map(si, swap_header, maxpages);
+        if (error)
                 goto bad_swap_unlock_inode;
-        }
 
-        error = swap_cgroup_swapon(si->type, maxpages);
+        /* Set up the swap cluster info */
+        error = setup_swap_clusters_info(si, swap_header, maxpages);
         if (error)
                 goto bad_swap_unlock_inode;
 
-        error = setup_swap_map(si, swap_header, swap_map, maxpages);
+        error = swap_cgroup_swapon(si->type, maxpages);
         if (error)
                 goto bad_swap_unlock_inode;
 
@@ -3481,13 +3490,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
                 inced_nr_rotate_swap = true;
         }
 
-        cluster_info = setup_clusters(si, swap_header, maxpages);
-        if (IS_ERR(cluster_info)) {
-                error = PTR_ERR(cluster_info);
-                cluster_info = NULL;
-                goto bad_swap_unlock_inode;
-        }
-
         if ((swap_flags & SWAP_FLAG_DISCARD) && si->bdev &&
             bdev_max_discard_sectors(si->bdev)) {
                 /*
@@ -3540,7 +3542,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
                 prio = swap_flags & SWAP_FLAG_PRIO_MASK;
 
         si->swap_file = swap_file;
-        enable_swap_info(si, prio, swap_map, cluster_info, zeromap);
+
+        /* Sets SWP_WRITEOK, resurrects the percpu ref, exposes the swap device */
+        enable_swap_info(si, prio, zeromap);
 
         pr_info("Adding %uk swap on %s.  Priority:%d extents:%d across:%lluk %s%s%s%s\n",
                 K(si->pages), name->name, si->prio, nr_extents,
@@ -3566,13 +3570,18 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
         inode = NULL;
         destroy_swap_extents(si, swap_file);
         swap_cgroup_swapoff(si->type);
+        vfree(si->swap_map);
+        si->swap_map = NULL;
+        free_swap_cluster_info(si->cluster_info, si->max);
+        si->cluster_info = NULL;
+        /*
+         * Clear the SWP_USED flag after all resources are freed so
+         * alloc_swap_info can reuse this si safely.
+         */
         spin_lock(&swap_lock);
         si->flags = 0;
         spin_unlock(&swap_lock);
-        vfree(swap_map);
         kvfree(zeromap);
-        if (cluster_info)
-                free_cluster_info(cluster_info, maxpages);
         if (inced_nr_rotate_swap)
                 atomic_dec(&nr_rotate_swap);
         if (swap_file)
-- 
2.52.0
b=TDhLqCg5mC/KuTGEmTB/+dfqu58QAxSBVVvYeis8VeLrLoOLHurFQPXeiHiIi9b8kdRk9hqIhQdZTDImGcSnCCYKccC6/Bl/u3Y3/XOY8PFX7p4X8A5VeCTUiki8JWyq0doW1UQzEcaKMZx2NqlIoZ4H6yX47796Ju9lFxlCz+M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=GQMQ8BfA; arc=none smtp.client-ip=209.85.216.51 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="GQMQ8BfA" Received: by mail-pj1-f51.google.com with SMTP id 98e67ed59e1d1-3530715386cso3393357a91.2 for ; Sun, 25 Jan 2026 09:58:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1769363894; x=1769968694; darn=vger.kernel.org; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=/oxuyrcl+senacpl825Pc/8hD6orLDCzNPV1cvMYSnM=; b=GQMQ8BfAS3CxywFIljY5/tukKRpNXqjLoKVE98lBmkJEjUIm1QGsSAcCAm0/xarHaM rCuX30qzDc6VCEfwOvks4RFh3DdJu9zew/bEkPpslPmlzgL5kb+Jeb09zxisJi4pCcDD S6jdkzsScUrSnDR4WG4FBQnNLdVV5HRHo13zt16McsubHY+WXmfRtlEPAfHr4Wcnzklf Iu3q3aGM2B6LHVPvoNRW+V191L0bkJR14u3h6mUZTQS/Ti7vCa+QjS4IA6c4tIQPJcgk 8lKDy2f24lBEm1BFc+LgdeKtZeEnNNGARHH7CzK7LFk0mA6ziWGBbSmxYOb05C3nVFwp V5og== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1769363894; x=1769968694; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-gg:x-gm-message-state:from:to :cc:subject:date:message-id:reply-to; bh=/oxuyrcl+senacpl825Pc/8hD6orLDCzNPV1cvMYSnM=; b=OtJsgAla14Iq3R+hHKcJo/VtTrZH97l+9ns0KfhvaViP7D8S1kFx8Z+9tRqvyHe+fr 
GG56np0yUi1SoT8iQrZrG4KqUKmmrhQxMVRUI1ZdbNmaOO70FtCn5TjJkpi3mgkePkqs 6cN8IOOraLPVOoQNlKkTMtzzZd1WAZ1Hrej5LCK5Ngp+wCOi/ByIfPeb1HqIjuY497QZ VYJFX5dpvkdIG4i+xPNPGI37YBcIvvaRI7ydb0VkafFFeu6UsS0LXr9qTIc9kkinTwiA usla7G2vXXC+I/AbREonM7xNtb+hDu6Kg77pUg7Bq3X4ofxrSWtrP2j6G2ZmL/SgSqOf pfUg== X-Forwarded-Encrypted: i=1; AJvYcCVyHXRwvnLiuTnD//+07IqpeurS490FNo/cl3/1LOo29xNjcisp8wwoK9sWHPJ9HNahM2UsW8GqTUsE+B0=@vger.kernel.org X-Gm-Message-State: AOJu0YwAzQyvS0NcptCUKCx7mkP+eN3YNMpygaLWE1azBCA2oSlZOl9m rbZu6H8QkCQ0cL2RarJFFs5S1glgDYybxVkG0E5VuGwQds1nC9UWrvFE X-Gm-Gg: AZuq6aKye8JSNsYDuq/UtIOrl6fDgaaUFZ5iheDZvWNakcaErvML0xnCTgEs8m7cI7a hoPVobTAxxTulWlbEffonWuHBiNeZ5lWdfP+U6pUzO3/U0f7T2RfdCKkFdp66Bcc9d18LbcQTBG PkWoUqQVlKDky+BsM1xBvSo6cH00BZQbEoLfCuHLcsl5H2JW/4Lp9kgDVvNklfb9lSLbQ5R+iWF N/bBNOVfhm6NtnEVYtVleAKhPDRTUkAmDw/rB324nRdisz19OEt9+ITzc4O0RmVTjZTG6Ks5/J6 pvbTxQLlz+4DSvidy809zWcB84WDB5yBTy2Bp5YdQCQ1bTDt/grHk2Jv64Dp5DIuZKwzdLtb9ue QGWuRGeEk4pbGQUxKaPBbKSMFk6ea8MHRubefpuEFdQ/4DCazbSBeXZaJn5jhGqMjRkVhe40L4T tUH/SJXZRjwKhprRhS6Y26advTOjM+Nn7KkOQu7rK6KVIedx+OPpB24pHffSw= X-Received: by 2002:a17:90b:2fc8:b0:352:ccae:fe65 with SMTP id 98e67ed59e1d1-353c40b3244mr1764884a91.4.1769363894105; Sun, 25 Jan 2026 09:58:14 -0800 (PST) Received: from [127.0.0.1] ([101.32.222.185]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-8231876e718sm7405963b3a.62.2026.01.25.09.58.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jan 2026 09:58:13 -0800 (PST) From: Kairui Song Date: Mon, 26 Jan 2026 01:57:26 +0800 Subject: [PATCH 03/12] mm, swap: remove redundant arguments and locking for enabling a device Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260126-swap-table-p3-v1-3-a74155fab9b0@tencent.com> References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com> In-Reply-To: 
<20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com> To: linux-mm@kvack.org Cc: Andrew Morton , Kemeng Shi , Nhat Pham , Baoquan He , Barry Song , Johannes Weiner , David Hildenbrand , Lorenzo Stoakes , linux-kernel@vger.kernel.org, Chris Li , Kairui Song X-Mailer: b4 0.14.3 X-Developer-Signature: v=1; a=ed25519-sha256; t=1769363877; l=4474; i=kasong@tencent.com; s=kasong-sign-tencent; h=from:subject:message-id; bh=VcHfIq9fX/OfsYjH1rKwFGMkjPLRnW2PTE0+Zr1fpH8=; b=BCLvnz8yeXU8dUoomn0Bh/gnIRxlBMSPd7NlZBxGfV/XLaOAi1CaL+XqpeFh8CyCoD+yyMUm2 e04jECMZ8s1DzLOPcw3DRmjDGdjqiRXh9V//ESPOpcaJUiIFnIOlqjY X-Developer-Key: i=kasong@tencent.com; a=ed25519; pk=kCdoBuwrYph+KrkJnrr7Sm1pwwhGDdZKcKrqiK8Y1mI= From: Kairui Song There is no need to repeatedly pass zero map and priority values. zeromap is similar to cluster info and swap_map, which are only used once the swap device is exposed. And the prio values are currently read only once set, and only used for the list insertion upon expose or swap info display. 
Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swapfile.c | 48 ++++++++++++++++++------------------------------ 1 file changed, 18 insertions(+), 30 deletions(-) diff --git a/mm/swapfile.c b/mm/swapfile.c index 53ce222c3aba..80bf0ea098f6 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -2652,19 +2652,6 @@ static int setup_swap_extents(struct swap_info_struc= t *sis, return generic_swapfile_activate(sis, swap_file, span); } =20 -static void setup_swap_info(struct swap_info_struct *si, int prio, - unsigned long *zeromap) -{ - si->prio =3D prio; - /* - * the plist prio is negated because plist ordering is - * low-to-high, while swap ordering is high-to-low - */ - si->list.prio =3D -si->prio; - si->avail_list.prio =3D -si->prio; - si->zeromap =3D zeromap; -} - static void _enable_swap_info(struct swap_info_struct *si) { atomic_long_add(si->pages, &nr_swap_pages); @@ -2678,17 +2665,12 @@ static void _enable_swap_info(struct swap_info_stru= ct *si) add_to_avail_list(si, true); } =20 -static void enable_swap_info(struct swap_info_struct *si, int prio, - unsigned long *zeromap) +/* + * Called after the swap device is ready, resurrect its percpu ref, it's n= ow + * safe to reference it. Add it to the list to expose it to the allocator. + */ +static void enable_swap_info(struct swap_info_struct *si) { - spin_lock(&swap_lock); - spin_lock(&si->lock); - setup_swap_info(si, prio, zeromap); - spin_unlock(&si->lock); - spin_unlock(&swap_lock); - /* - * Finished initializing swap device, now it's safe to reference it. 
- */ percpu_ref_resurrect(&si->users); spin_lock(&swap_lock); spin_lock(&si->lock); @@ -2701,7 +2683,6 @@ static void reinsert_swap_info(struct swap_info_struc= t *si) { spin_lock(&swap_lock); spin_lock(&si->lock); - setup_swap_info(si, si->prio, si->zeromap); _enable_swap_info(si); spin_unlock(&si->lock); spin_unlock(&swap_lock); @@ -3354,7 +3335,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialf= ile, int, swap_flags) int nr_extents; sector_t span; unsigned long maxpages; - unsigned long *zeromap =3D NULL; struct folio *folio =3D NULL; struct inode *inode =3D NULL; bool inced_nr_rotate_swap =3D false; @@ -3470,9 +3450,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialf= ile, int, swap_flags) * Use kvmalloc_array instead of bitmap_zalloc as the allocation order mi= ght * be above MAX_PAGE_ORDER incase of a large swap file. */ - zeromap =3D kvmalloc_array(BITS_TO_LONGS(maxpages), sizeof(long), - GFP_KERNEL | __GFP_ZERO); - if (!zeromap) { + si->zeromap =3D kvmalloc_array(BITS_TO_LONGS(maxpages), sizeof(long), + GFP_KERNEL | __GFP_ZERO); + if (!si->zeromap) { error =3D -ENOMEM; goto bad_swap_unlock_inode; } @@ -3541,10 +3521,17 @@ SYSCALL_DEFINE2(swapon, const char __user *, specia= lfile, int, swap_flags) if (swap_flags & SWAP_FLAG_PREFER) prio =3D swap_flags & SWAP_FLAG_PRIO_MASK; =20 + /* + * The plist prio is negated because plist ordering is + * low-to-high, while swap ordering is high-to-low + */ + si->prio =3D prio; + si->list.prio =3D -si->prio; + si->avail_list.prio =3D -si->prio; si->swap_file =3D swap_file; =20 /* Sets SWP_WRITEOK, resurrect the percpu ref, expose the swap device */ - enable_swap_info(si, prio, zeromap); + enable_swap_info(si); =20 pr_info("Adding %uk swap on %s. 
Priority:%d extents:%d across:%lluk %s%s%s%s\n",
		K(si->pages), name->name, si->prio, nr_extents,
@@ -3574,6 +3561,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	si->swap_map = NULL;
 	free_swap_cluster_info(si->cluster_info, si->max);
 	si->cluster_info = NULL;
+	kvfree(si->zeromap);
+	si->zeromap = NULL;
 	/*
 	 * Clear the SWP_USED flag after all resources are freed so
 	 * alloc_swap_info can reuse this si safely.
 	 */
@@ -3581,7 +3570,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	spin_lock(&swap_lock);
 	si->flags = 0;
 	spin_unlock(&swap_lock);
-	kvfree(zeromap);
 	if (inced_nr_rotate_swap)
 		atomic_dec(&nr_rotate_swap);
 	if (swap_file)
-- 
2.52.0

From nobody Sun Feb 8 10:21:58 2026
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:27 +0800
Subject: [PATCH 04/12] mm, swap: consolidate bad slots setup and make it more robust
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20260126-swap-table-p3-v1-4-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3
From: Kairui Song

In preparation for using the swap table to track bad slots directly,
consolidate the bad slot setup in one place: set the swap_map mark and
update the cluster counter together. While at it, provide more
informative logs and a more robust fallback when any bad slot info looks
incorrect.

This fixes a potential issue where a malformed swap file could leave a
cluster unusable after swapon, and adds a more verbose warning for
malformed swap files.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swapfile.c | 68 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 80bf0ea098f6..df8b13eecab1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -743,13 +743,37 @@ static void relocate_cluster(struct swap_info_struct *si,
  * slot. The cluster will not be added to the free cluster list, and its
  * usage counter will be increased by 1. Only used for initialization.
  */
-static int swap_cluster_setup_bad_slot(struct swap_cluster_info *cluster_info,
-				       unsigned long offset)
+static int swap_cluster_setup_bad_slot(struct swap_info_struct *si,
+				       struct swap_cluster_info *cluster_info,
+				       unsigned int offset, bool mask)
 {
 	unsigned long idx = offset / SWAPFILE_CLUSTER;
 	struct swap_table *table;
 	struct swap_cluster_info *ci;
 
+	/* si->max may have been shrunk by swap_activate() */
+	if (offset >= si->max && !mask) {
+		pr_debug("Ignoring bad slot %u (max: %u)\n", offset, si->max);
+		return 0;
+	}
+	/*
+	 * Account it, skip header slot: si->pages is initialized as
+	 * si->max - 1. Also skip the masking of the last cluster,
+	 * si->pages doesn't include that part.
+	 */
+	if (offset && !mask)
+		si->pages -= 1;
+	if (!si->pages) {
+		pr_warn("Empty swap-file\n");
+		return -EINVAL;
+	}
+	/* Check for duplicated bad swap slots. */
+	if (si->swap_map[offset]) {
+		pr_warn("Duplicated bad slot offset %d\n", offset);
+		return -EINVAL;
+	}
+
+	si->swap_map[offset] = SWAP_MAP_BAD;
 	ci = cluster_info + idx;
 	if (!ci->table) {
 		table = swap_table_alloc(GFP_KERNEL);
@@ -3216,30 +3240,12 @@ static int setup_swap_map(struct swap_info_struct *si,
 			  union swap_header *swap_header,
 			  unsigned long maxpages)
 {
-	unsigned long i;
 	unsigned char *swap_map;
 
 	swap_map = vzalloc(maxpages);
 	si->swap_map = swap_map;
 	if (!swap_map)
 		return -ENOMEM;
-
-	swap_map[0] = SWAP_MAP_BAD; /* omit header page */
-	for (i = 0; i < swap_header->info.nr_badpages; i++) {
-		unsigned int page_nr = swap_header->info.badpages[i];
-		if (page_nr == 0 || page_nr > swap_header->info.last_page)
-			return -EINVAL;
-		if (page_nr < maxpages) {
-			swap_map[page_nr] = SWAP_MAP_BAD;
-			si->pages--;
-		}
-	}
-
-	if (!si->pages) {
-		pr_warn("Empty swap-file\n");
-		return -EINVAL;
-	}
-
 	return 0;
 }
 
@@ -3270,26 +3276,28 @@ static int setup_swap_clusters_info(struct swap_info_struct *si,
 	}
 
 	/*
-	 * Mark unusable pages as unavailable. The clusters aren't
-	 * marked free yet, so no list operations are involved yet.
-	 *
-	 * See setup_swap_map(): header page, bad pages,
-	 * and the EOF part of the last cluster.
+	 * Mark unusable pages (header page, bad pages, and the EOF part of
+	 * the last cluster) as unavailable. The clusters aren't marked free
+	 * yet, so no list operations are involved yet.
 	 */
-	err = swap_cluster_setup_bad_slot(cluster_info, 0);
+	err = swap_cluster_setup_bad_slot(si, cluster_info, 0, false);
 	if (err)
 		goto err;
 	for (i = 0; i < swap_header->info.nr_badpages; i++) {
 		unsigned int page_nr = swap_header->info.badpages[i];
 
-		if (page_nr >= maxpages)
-			continue;
-		err = swap_cluster_setup_bad_slot(cluster_info, page_nr);
+		if (!page_nr || page_nr > swap_header->info.last_page) {
+			pr_warn("Bad slot offset is out of bounds: %d (last_page: %d)\n",
+				page_nr, swap_header->info.last_page);
+			err = -EINVAL;
+			goto err;
+		}
+		err = swap_cluster_setup_bad_slot(si, cluster_info, page_nr, false);
 		if (err)
 			goto err;
 	}
 	for (i = maxpages; i < round_up(maxpages, SWAPFILE_CLUSTER); i++) {
-		err = swap_cluster_setup_bad_slot(cluster_info, i);
+		err = swap_cluster_setup_bad_slot(si, cluster_info, i, true);
 		if (err)
 			goto err;
 	}
-- 
2.52.0

From nobody Sun Feb 8 10:21:58 2026
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:28 +0800
Subject: [PATCH 05/12] mm/workingset: leave highest bits empty for anon shadow
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20260126-swap-table-p3-v1-5-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To:
<20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

The swap table entry will need 4 bits reserved for the swap count in the
shadow, so the anon shadow should keep its leading 4 bits zero.

This should be fine for the foreseeable future. Take 52 bits of physical
address space as an example: with 4K pages, there are at most 40 bits of
addressable pages. Currently, we have 36 bits available
(64 - 1 - 16 - 10 - 1, where XA_VALUE takes 1 bit as a marker,
MEM_CGROUP_ID_SHIFT takes 16 bits, NODES_SHIFT takes <= 10 bits, and the
WORKINGSET flag takes 1 bit). So in the worst case, we previously needed
to pack 40 bits of address into a 36-bit field using a 64K bucket
(bucket_order = 4). After this change, the anon bucket grows to 1M, which
should be fine: on machines that large, the working set size will be far
larger than the bucket size.

For MGLRU's gen number tracking it is more than enough: MGLRU's gen
number (max_seq) increments much more slowly than the eviction counter
(nonresident_age). And after all, either the refault distance or the gen
distance is only a hint that can tolerate inaccuracy just fine.

The 4 bits can be shrunk to 3, or extended to a higher value, if needed
later.
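To make the bit budget above concrete, here is a small back-of-the-envelope sketch (illustration only, not kernel code; the shift constants mirror a common x86_64 configuration and are Kconfig-dependent in practice):

```python
# Reproduce the shadow bit budget and bucket_order arithmetic described above.
BITS_PER_LONG = 64
XA_VALUE_BIT = 1          # XA_VALUE marker bit
MEM_CGROUP_ID_SHIFT = 16  # memcg ID bits
NODES_SHIFT = 10          # worst-case node bits
WORKINGSET_SHIFT = 1      # workingset flag
SWP_TB_COUNT_BITS = 4     # newly reserved for the swap count in anon shadows

eviction_shift = XA_VALUE_BIT + MEM_CGROUP_ID_SHIFT + NODES_SHIFT + WORKINGSET_SHIFT
timestamp_bits = BITS_PER_LONG - eviction_shift            # 36 bits for file shadows
timestamp_bits_anon = timestamp_bits - SWP_TB_COUNT_BITS   # 32 bits for anon shadows

def bucket_order(addr_bits, ts_bits):
    """Extra shift needed to fit addr_bits of eviction counter into ts_bits."""
    return max(0, addr_bits - ts_bits)

# 52-bit physical address space with 4K (2^12) pages => up to 40 bits of pages.
addressable_page_bits = 52 - 12
print(bucket_order(addressable_page_bits, timestamp_bits))       # 4  -> 2^4 * 4K = 64K bucket
print(bucket_order(addressable_page_bits, timestamp_bits_anon))  # 8  -> 2^8 * 4K = 1M bucket
```

This matches the commit message: the worst-case anon bucket grows from 64K to 1M.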
Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swap_table.h |  4 ++++
 mm/workingset.c | 49 ++++++++++++++++++++++++++++++-------------------
 2 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/mm/swap_table.h b/mm/swap_table.h
index ea244a57a5b7..10e11d1f3b04 100644
--- a/mm/swap_table.h
+++ b/mm/swap_table.h
@@ -12,6 +12,7 @@ struct swap_table {
 };
 
 #define SWP_TABLE_USE_PAGE (sizeof(struct swap_table) == PAGE_SIZE)
+#define SWP_TB_COUNT_BITS 4
 
 /*
  * A swap table entry represents the status of a swap slot on a swap
@@ -22,6 +23,9 @@ struct swap_table {
  * (shadow), or NULL.
  */
 
+/* Macro for shadow offset calculation */
+#define SWAP_COUNT_SHIFT SWP_TB_COUNT_BITS
+
 /*
  * Helpers for casting one type of info into a swap table entry.
  */
diff --git a/mm/workingset.c b/mm/workingset.c
index 13422d304715..37a94979900f 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include "swap_table.h"
 #include "internal.h"
 
 /*
@@ -184,7 +185,9 @@
 #define EVICTION_SHIFT	((BITS_PER_LONG - BITS_PER_XA_VALUE) +	\
			 WORKINGSET_SHIFT + NODES_SHIFT +	\
			 MEM_CGROUP_ID_SHIFT)
+#define EVICTION_SHIFT_ANON	(EVICTION_SHIFT + SWAP_COUNT_SHIFT)
 #define EVICTION_MASK	(~0UL >> EVICTION_SHIFT)
+#define EVICTION_MASK_ANON	(~0UL >> EVICTION_SHIFT_ANON)
 
 /*
  * Eviction timestamps need to be able to cover the full range of
@@ -194,12 +197,12 @@
  * that case, we have to sacrifice granularity for distance, and group
  * evictions into coarser buckets by shaving off lower timestamp bits.
  */
-static unsigned int bucket_order __read_mostly;
+static unsigned int bucket_order[ANON_AND_FILE] __read_mostly;
 
 static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction,
-			 bool workingset)
+			 bool workingset, bool file)
 {
-	eviction &= EVICTION_MASK;
+	eviction &= file ? EVICTION_MASK : EVICTION_MASK_ANON;
 	eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid;
 	eviction = (eviction << NODES_SHIFT) | pgdat->node_id;
 	eviction = (eviction << WORKINGSET_SHIFT) | workingset;
@@ -244,7 +247,8 @@ static void *lru_gen_eviction(struct folio *folio)
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	struct pglist_data *pgdat = folio_pgdat(folio);
 
-	BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH > BITS_PER_LONG - EVICTION_SHIFT);
+	BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH >
+		     BITS_PER_LONG - max(EVICTION_SHIFT, EVICTION_SHIFT_ANON));
 
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	lrugen = &lruvec->lrugen;
@@ -254,7 +258,7 @@ static void *lru_gen_eviction(struct folio *folio)
 	hist = lru_hist_from_seq(min_seq);
 	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
 
-	return pack_shadow(mem_cgroup_private_id(memcg), pgdat, token, workingset);
+	return pack_shadow(mem_cgroup_private_id(memcg), pgdat, token, workingset, type);
 }
 
 /*
@@ -262,7 +266,7 @@ static void *lru_gen_eviction(struct folio *folio)
 * Fills in @lruvec, @token, @workingset with the values unpacked from shadow.
 */
 static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
-				unsigned long *token, bool *workingset)
+				unsigned long *token, bool *workingset, bool file)
 {
 	int memcg_id;
 	unsigned long max_seq;
@@ -275,7 +279,7 @@ static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
 	*lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
 	max_seq = READ_ONCE((*lruvec)->lrugen.max_seq);
-	max_seq &= EVICTION_MASK >> LRU_REFS_WIDTH;
+	max_seq &= (file ? EVICTION_MASK : EVICTION_MASK_ANON) >> LRU_REFS_WIDTH;
 
 	return abs_diff(max_seq, *token >> LRU_REFS_WIDTH) < MAX_NR_GENS;
 }
@@ -293,7 +297,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 
 	rcu_read_lock();
 
-	recent = lru_gen_test_recent(shadow, &lruvec, &token, &workingset);
+	recent = lru_gen_test_recent(shadow, &lruvec, &token, &workingset, type);
 	if (lruvec != folio_lruvec(folio))
 		goto unlock;
 
@@ -331,7 +335,7 @@ static void *lru_gen_eviction(struct folio *folio)
 }
 
 static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
-				unsigned long *token, bool *workingset)
+				unsigned long *token, bool *workingset, bool file)
 {
 	return false;
 }
@@ -381,6 +385,7 @@ void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages)
 void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 {
 	struct pglist_data *pgdat = folio_pgdat(folio);
+	int file = folio_is_file_lru(folio);
 	unsigned long eviction;
 	struct lruvec *lruvec;
 	int memcgid;
@@ -397,10 +402,10 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 	/* XXX: target_memcg can be NULL, go through lruvec */
 	memcgid = mem_cgroup_private_id(lruvec_memcg(lruvec));
 	eviction = atomic_long_read(&lruvec->nonresident_age);
-	eviction >>= bucket_order;
+	eviction >>= bucket_order[file];
 	workingset_age_nonresident(lruvec, folio_nr_pages(folio));
 	return pack_shadow(memcgid, pgdat, eviction,
-			   folio_test_workingset(folio));
+			   folio_test_workingset(folio), file);
 }
 
 /**
@@ -431,14 +436,15 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 		bool recent;
 
 		rcu_read_lock();
-		recent = lru_gen_test_recent(shadow, &eviction_lruvec, &eviction, workingset);
+		recent = lru_gen_test_recent(shadow, &eviction_lruvec, &eviction,
+					     workingset, file);
 		rcu_read_unlock();
 		return recent;
 	}
 
 	rcu_read_lock();
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
-	eviction <<= bucket_order;
+	eviction <<= bucket_order[file];
 
 	/*
 	 * Look up the memcg associated with the stored ID. It might
@@ -495,7 +501,8 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 	 * longest time, so the occasional inappropriate activation
 	 * leading to pressure on the active list is not a problem.
 	 */
-	refault_distance = (refault - eviction) & EVICTION_MASK;
+	refault_distance = ((refault - eviction) &
+			    (file ? EVICTION_MASK : EVICTION_MASK_ANON));
 
 	/*
 	 * Compare the distance to the existing workingset size. We
@@ -780,8 +787,8 @@ static struct lock_class_key shadow_nodes_key;
 
 static int __init workingset_init(void)
 {
+	unsigned int timestamp_bits, timestamp_bits_anon;
 	struct shrinker *workingset_shadow_shrinker;
-	unsigned int timestamp_bits;
 	unsigned int max_order;
 	int ret = -ENOMEM;
 
@@ -794,11 +801,15 @@ static int __init workingset_init(void)
 	 * double the initial memory by using totalram_pages as-is.
 	 */
 	timestamp_bits = BITS_PER_LONG - EVICTION_SHIFT;
+	timestamp_bits_anon = BITS_PER_LONG - EVICTION_SHIFT_ANON;
 	max_order = fls_long(totalram_pages() - 1);
-	if (max_order > timestamp_bits)
-		bucket_order = max_order - timestamp_bits;
-	pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
-		timestamp_bits, max_order, bucket_order);
+	if (max_order > (BITS_PER_LONG - EVICTION_SHIFT))
+		bucket_order[WORKINGSET_FILE] = max_order - timestamp_bits;
+	if (max_order > timestamp_bits_anon)
+		bucket_order[WORKINGSET_ANON] = max_order - timestamp_bits_anon;
+	pr_info("workingset: timestamp_bits=%d (anon: %d) max_order=%d bucket_order=%u (anon: %d)\n",
+		timestamp_bits, timestamp_bits_anon, max_order,
+		bucket_order[WORKINGSET_FILE], bucket_order[WORKINGSET_ANON]);
 
 	workingset_shadow_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
						    SHRINKER_MEMCG_AWARE,
-- 
2.52.0

From nobody Sun Feb 8 10:21:58 2026
Received: from mail-pf1-f177.google.com
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:29 +0800
Subject: [PATCH 06/12] mm, swap: implement helpers for reserving data in the swap table
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20260126-swap-table-p3-v1-6-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

To prepare for using the swap table as the unified swap layer, introduce
macros and helpers for storing multiple kinds of data in a swap table
entry. From now on, we store the PFN in the swap table to make space for
extra counting bits (SWAP_COUNT). Shadows are still stored as they are,
since SWAP_COUNT is not used yet.

Also, rename shadow_swp_to_tb to shadow_to_swp_tb; that was a spelling
error, not really worth a separate fix.
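[Editorial illustration, not part of the patch: a hedged Python model of the PFN packing this patch introduces. The mask names mirror the patch's macros; the 64-bit word size and the example PFN are assumptions.]

```python
# Model of packing a PFN plus a small swap count into one word, in the
# spirit of the pfn_to_swp_tb() helper added by this patch.
BITS_PER_LONG = 64
SWP_TB_COUNT_BITS = 4
SWP_TB_COUNT_SHIFT = BITS_PER_LONG - SWP_TB_COUNT_BITS
# Top SWP_TB_COUNT_BITS of the word hold the swap count.
SWP_TB_COUNT_MASK = ((1 << BITS_PER_LONG) - 1) & ~((1 << SWP_TB_COUNT_SHIFT) - 1)
SWP_TB_PFN_MARK = 0b10       # low-bit marker for "cached: PFN" entries
SWP_TB_PFN_MARK_BITS = 2

def pfn_to_swp_tb(pfn, count):
    # PFN sits in the middle; the low bits carry the type marker,
    # the top bits carry the swap count.
    tb = (pfn << SWP_TB_PFN_MARK_BITS) | SWP_TB_PFN_MARK
    assert tb & SWP_TB_COUNT_MASK == 0  # PFN must not spill into count bits
    return tb | (count << SWP_TB_COUNT_SHIFT)

def swp_tb_to_pfn(tb):
    return (tb & ~SWP_TB_COUNT_MASK) >> SWP_TB_PFN_MARK_BITS

def swp_tb_to_count(tb):
    return tb >> SWP_TB_COUNT_SHIFT

tb = pfn_to_swp_tb(0x12345, 3)
assert swp_tb_to_pfn(tb) == 0x12345 and swp_tb_to_count(tb) == 3
```

The round-trip assertion shows why the patch's BUILD_BUG_ON matters: the PFN field must fit below the count bits for the unpacking to be lossless.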
No behaviour change yet, just prepare the API. Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swap_state.c | 6 +-- mm/swap_table.h | 124 +++++++++++++++++++++++++++++++++++++++++++++++++++-= ---- 2 files changed, 117 insertions(+), 13 deletions(-) diff --git a/mm/swap_state.c b/mm/swap_state.c index 6d0eef7470be..e213ee35c1d2 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -148,7 +148,7 @@ void __swap_cache_add_folio(struct swap_cluster_info *c= i, VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio); VM_WARN_ON_ONCE_FOLIO(!folio_test_swapbacked(folio), folio); =20 - new_tb =3D folio_to_swp_tb(folio); + new_tb =3D folio_to_swp_tb(folio, 0); ci_start =3D swp_cluster_offset(entry); ci_off =3D ci_start; ci_end =3D ci_start + nr_pages; @@ -249,7 +249,7 @@ void __swap_cache_del_folio(struct swap_cluster_info *c= i, struct folio *folio, VM_WARN_ON_ONCE_FOLIO(folio_test_writeback(folio), folio); =20 si =3D __swap_entry_to_info(entry); - new_tb =3D shadow_swp_to_tb(shadow); + new_tb =3D shadow_to_swp_tb(shadow, 0); ci_start =3D swp_cluster_offset(entry); ci_end =3D ci_start + nr_pages; ci_off =3D ci_start; @@ -331,7 +331,7 @@ void __swap_cache_replace_folio(struct swap_cluster_inf= o *ci, VM_WARN_ON_ONCE(!entry.val); =20 /* Swap cache still stores N entries instead of a high-order entry */ - new_tb =3D folio_to_swp_tb(new); + new_tb =3D folio_to_swp_tb(new, 0); do { old_tb =3D __swap_table_xchg(ci, ci_off, new_tb); WARN_ON_ONCE(!swp_tb_is_folio(old_tb) || swp_tb_to_folio(old_tb) !=3D ol= d); diff --git a/mm/swap_table.h b/mm/swap_table.h index 10e11d1f3b04..9c4083e4e4f2 100644 --- a/mm/swap_table.h +++ b/mm/swap_table.h @@ -12,17 +12,72 @@ struct swap_table { }; =20 #define SWP_TABLE_USE_PAGE (sizeof(struct swap_table) =3D=3D PAGE_SIZE) -#define SWP_TB_COUNT_BITS 4 =20 /* * A swap table entry represents the status of a swap slot on a swap * (physical or virtual) device. The swap table in each cluster is a * 1:1 map of the swap slots in this cluster. 
* - * Each swap table entry could be a pointer (folio), a XA_VALUE - * (shadow), or NULL. + * Swap table entry types and bit layouts: + * + * NULL: |---------------- 0 ---------------| - Free slot + * Shadow: | SWAP_COUNT |---- SHADOW_VAL ---|1| - Swapped out slot + * PFN: | SWAP_COUNT |------ PFN -------|10| - Cached slot + * Pointer: |----------- Pointer ----------|100| - (Unused) + * Bad: |------------- 1 -------------|1000| - Bad slot + * + * SWAP_COUNT is `SWP_TB_COUNT_BITS` bits wide; each entry is an atomic long. + * + * Usages: + * + * - NULL: Swap slot is unused, could be allocated. + * + * - Shadow: Swap slot is used and not cached (usually swapped out). It re= uses + * the XA_VALUE format to be compatible with working set shadows. SHADOW= _VAL + * part might be all 0 if the workingset shadow info is absent. In such a c= ase, + * we still want to keep the shadow format as a placeholder. + * + * Memcg ID is embedded in SHADOW_VAL. + * + * - PFN: Swap slot is in use, and cached. Memcg info is recorded on the p= age + * struct. + * + * - Pointer: Unused yet. `0b100` is reserved for potential pointer usage + * because only the lower three bits can be used as a marker for 8-byte + * aligned pointers. + * + * - Bad: Swap slot is reserved, protects swap header or holes on swap dev= ices.
*/ =20 +/* Common SWAP_COUNT part */ +#define SWP_TB_COUNT_BITS 4 /* This can be shrunk or extended if needed */ +#define SWP_TB_COUNT_MASK (~((~0UL) >> SWP_TB_COUNT_BITS)) +#define SWP_TB_COUNT_SHIFT (BITS_PER_LONG - SWP_TB_COUNT_BITS) +#define SWP_TB_COUNT_MAX ((1 << SWP_TB_COUNT_BITS) - 2) + +/* NULL Entry, all 0 */ +#define SWP_TB_NULL 0UL + +/* Swapped out: Shadow */ +#define SWP_TB_SHADOW_MARK 0b1UL + +/* Cached: PFN */ +#define SWP_TB_PFN_MASK ((~0UL) >> SWP_TB_COUNT_BITS) +#define SWP_TB_PFN_MARK 0b10UL +#define SWP_TB_PFN_MARK_BITS 2 +#define SWP_TB_PFN_MARK_MASK (BIT(SWP_TB_PFN_MARK_BITS) - 1) + +/* Bad slot, ends with 0b1000 and the rest of the bits are all 1 */ +#define SWP_TB_BAD ((~0UL) << 3) + +#if defined(MAX_POSSIBLE_PHYSMEM_BITS) +#define SWAP_CACHE_PFN_BITS (MAX_POSSIBLE_PHYSMEM_BITS - PAGE_SHIFT) +#elif defined(MAX_PHYSMEM_BITS) +#define SWAP_CACHE_PFN_BITS (MAX_PHYSMEM_BITS - PAGE_SHIFT) +#else +#define SWAP_CACHE_PFN_BITS (BITS_PER_LONG - PAGE_SHIFT) +#endif + /* Macro for shadow offset calculation */ #define SWAP_COUNT_SHIFT SWP_TB_COUNT_BITS =20 @@ -35,18 +90,41 @@ static inline unsigned long null_to_swp_tb(void) return 0; } =20 -static inline unsigned long folio_to_swp_tb(struct folio *folio) +static inline unsigned long __count_to_swp_tb(unsigned char count) { + VM_WARN_ON(count > SWP_TB_COUNT_MAX); + return ((unsigned long)count) << SWP_TB_COUNT_SHIFT; +} + +static inline unsigned long pfn_to_swp_tb(unsigned long pfn, unsigned int = count) +{ + unsigned long swp_tb; + BUILD_BUG_ON(sizeof(unsigned long) !=3D sizeof(void *)); - return (unsigned long)folio; + BUILD_BUG_ON(SWAP_CACHE_PFN_BITS > + (BITS_PER_LONG - SWP_TB_PFN_MARK_BITS - SWP_TB_COUNT_BITS)); + + swp_tb =3D (pfn << SWP_TB_PFN_MARK_BITS) | SWP_TB_PFN_MARK; + VM_WARN_ON_ONCE(swp_tb & SWP_TB_COUNT_MASK); + + return swp_tb | __count_to_swp_tb(count); } =20 -static inline unsigned long shadow_swp_to_tb(void *shadow) +static inline unsigned long folio_to_swp_tb(struct folio *folio, unsigned
= int count) +{ + return pfn_to_swp_tb(folio_pfn(folio), count); +} + +static inline unsigned long shadow_to_swp_tb(void *shadow, unsigned int co= unt) { BUILD_BUG_ON((BITS_PER_XA_VALUE + 1) !=3D BITS_PER_BYTE * sizeof(unsigned long)); + BUILD_BUG_ON((unsigned long)xa_mk_value(0) !=3D SWP_TB_SHADOW_MARK); + VM_WARN_ON_ONCE(shadow && !xa_is_value(shadow)); - return (unsigned long)shadow; + VM_WARN_ON_ONCE(shadow && ((unsigned long)shadow & SWP_TB_COUNT_MASK)); + + return (unsigned long)shadow | __count_to_swp_tb(count) | SWP_TB_SHADOW_M= ARK; } =20 /* @@ -59,7 +137,7 @@ static inline bool swp_tb_is_null(unsigned long swp_tb) =20 static inline bool swp_tb_is_folio(unsigned long swp_tb) { - return !xa_is_value((void *)swp_tb) && !swp_tb_is_null(swp_tb); + return ((swp_tb & SWP_TB_PFN_MARK_MASK) =3D=3D SWP_TB_PFN_MARK); } =20 static inline bool swp_tb_is_shadow(unsigned long swp_tb) @@ -67,19 +145,43 @@ static inline bool swp_tb_is_shadow(unsigned long swp_= tb) return xa_is_value((void *)swp_tb); } =20 +static inline bool swp_tb_is_bad(unsigned long swp_tb) +{ + return swp_tb =3D=3D SWP_TB_BAD; +} + +static inline bool swp_tb_is_countable(unsigned long swp_tb) +{ + return (swp_tb_is_shadow(swp_tb) || swp_tb_is_folio(swp_tb) || + swp_tb_is_null(swp_tb)); +} + /* * Helpers for retrieving info from swap table. 
*/ static inline struct folio *swp_tb_to_folio(unsigned long swp_tb) { VM_WARN_ON(!swp_tb_is_folio(swp_tb)); - return (void *)swp_tb; + return pfn_folio((swp_tb & SWP_TB_PFN_MASK) >> SWP_TB_PFN_MARK_BITS); } =20 static inline void *swp_tb_to_shadow(unsigned long swp_tb) { VM_WARN_ON(!swp_tb_is_shadow(swp_tb)); - return (void *)swp_tb; + return (void *)(swp_tb & ~SWP_TB_COUNT_MASK); +} + +static inline unsigned char __swp_tb_get_count(unsigned long swp_tb) +{ + VM_WARN_ON(!swp_tb_is_countable(swp_tb)); + return ((swp_tb & SWP_TB_COUNT_MASK) >> SWP_TB_COUNT_SHIFT); +} + +static inline int swp_tb_get_count(unsigned long swp_tb) +{ + if (swp_tb_is_countable(swp_tb)) + return __swp_tb_get_count(swp_tb); + return -EINVAL; } =20 /* @@ -124,6 +226,8 @@ static inline unsigned long swap_table_get(struct swap_= cluster_info *ci, atomic_long_t *table; unsigned long swp_tb; =20 + VM_WARN_ON_ONCE(off >=3D SWAPFILE_CLUSTER); + rcu_read_lock(); table =3D rcu_dereference(ci->table); swp_tb =3D table ? atomic_long_read(&table[off]) : null_to_swp_tb(); --=20 2.52.0 From nobody Sun Feb 8 10:21:58 2026 Received: from mail-pf1-f170.google.com (mail-pf1-f170.google.com [209.85.210.170]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4E4502EE5FC for ; Sun, 25 Jan 2026 17:58:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.170 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1769363915; cv=none; b=TYII/wRUu4Utnrut6sRxGRaJPFpNk7Yn8HNn1k1ca0btg0k2DDMzc3bjg4MpWyng813qFezZR3VeCoj5qLFD8HZR6Den6cgOItLUVZVQje1CjCtCJkHKlePo0gutwAh5NOrx2pT8AuV85sHe9fhn/an7qiFhDjaZyJOaP37TOow= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1769363915; c=relaxed/simple; bh=Er02vhxaEVbwJu/z/eFj3qEl2mARw7tjDYxAvy8TGt0=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: 
From: Kairui Song Date: Mon, 26 Jan 2026 01:57:30 +0800 Subject: [PATCH 07/12] mm, swap: mark bad slots in swap table directly Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260126-swap-table-p3-v1-7-a74155fab9b0@tencent.com> References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com> In-Reply-To:
<20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com> To: linux-mm@kvack.org Cc: Andrew Morton , Kemeng Shi , Nhat Pham , Baoquan He , Barry Song , Johannes Weiner , David Hildenbrand , Lorenzo Stoakes , linux-kernel@vger.kernel.org, Chris Li , Kairui Song X-Mailer: b4 0.14.3 X-Developer-Signature: v=1; a=ed25519-sha256; t=1769363877; l=4470; i=kasong@tencent.com; s=kasong-sign-tencent; h=from:subject:message-id; bh=8m9COjsQHMkFcnP8SRLKraCOnV7wwqbxj9agW6bIMxQ=; b=SpWzpfa4WqhuNM3Pe4jfaTwdbufTKN2DF7Uh2F/b+DvUK0vdZtXufwCirOKUf4gLbdtTA582W nhqSZ5cuQ0KCPQShNbDFuSBFTHpqPcHDcRoLKJH+OQU4wOUZsjuiCEd X-Developer-Key: i=kasong@tencent.com; a=ed25519; pk=kCdoBuwrYph+KrkJnrr7Sm1pwwhGDdZKcKrqiK8Y1mI= From: Kairui Song In preparation for deprecating swap_map, mark bad slots in the swap table too when setting SWAP_MAP_BAD in swap_map. Also, refine the swap table sanity check on freeing to adapt to the bad slot change. For swapoff, the bad slot count must match the cluster usage count, since nothing should touch bad slots, and they contribute to the cluster usage count at swapon. For ordinary swap table freeing, the swap table of clusters with bad slots should never be freed, since the cluster usage count never reaches zero. Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swapfile.c | 56 +++++++++++++++++++++++++++++++++++++++++--------------- 1 file changed, 41 insertions(+), 15 deletions(-) diff --git a/mm/swapfile.c b/mm/swapfile.c index df8b13eecab1..bdce2abd9135 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -454,16 +454,37 @@ static void swap_table_free(struct swap_table *table) swap_table_free_folio_rcu_cb); } =20 +/* + * Sanity check to ensure nothing leaked, and the specified range is empty. + * One special case is that bad slots can't be freed, so check the number = of + * bad slots for swapoff, and non-swapoff path must never free bad slots.
+ */ +static void swap_cluster_assert_empty(struct swap_cluster_info *ci, bool s= wapoff) +{ + unsigned int ci_off =3D 0, ci_end =3D SWAPFILE_CLUSTER; + unsigned long swp_tb; + int bad_slots =3D 0; + + if (!IS_ENABLED(CONFIG_DEBUG_VM) && !swapoff) + return; + + do { + swp_tb =3D __swap_table_get(ci, ci_off); + if (swp_tb_is_bad(swp_tb)) + bad_slots++; + else + WARN_ON_ONCE(!swp_tb_is_null(swp_tb)); + } while (++ci_off < ci_end); + + WARN_ON_ONCE(bad_slots !=3D (swapoff ? ci->count : 0)); +} + static void swap_cluster_free_table(struct swap_cluster_info *ci) { - unsigned int ci_off; struct swap_table *table; =20 /* Only an empty cluster's table is allowed to be freed */ lockdep_assert_held(&ci->lock); - VM_WARN_ON_ONCE(!cluster_is_empty(ci)); - for (ci_off =3D 0; ci_off < SWAPFILE_CLUSTER; ci_off++) - VM_WARN_ON_ONCE(!swp_tb_is_null(__swap_table_get(ci, ci_off))); table =3D (void *)rcu_dereference_protected(ci->table, true); rcu_assign_pointer(ci->table, NULL); =20 @@ -567,6 +588,7 @@ static void swap_cluster_schedule_discard(struct swap_i= nfo_struct *si, =20 static void __free_cluster(struct swap_info_struct *si, struct swap_cluste= r_info *ci) { + swap_cluster_assert_empty(ci, false); swap_cluster_free_table(ci); move_cluster(si, ci, &si->free_clusters, CLUSTER_FLAG_FREE); ci->order =3D 0; @@ -747,9 +769,11 @@ static int swap_cluster_setup_bad_slot(struct swap_inf= o_struct *si, struct swap_cluster_info *cluster_info, unsigned int offset, bool mask) { + unsigned int ci_off =3D offset % SWAPFILE_CLUSTER; unsigned long idx =3D offset / SWAPFILE_CLUSTER; - struct swap_table *table; struct swap_cluster_info *ci; + struct swap_table *table; + int ret =3D 0; =20 /* si->max may have been shrunk by swap_activate() */ if (offset >=3D si->max && !mask) { @@ -767,13 +791,7 @@ static int swap_cluster_setup_bad_slot(struct swap_inf= o_struct *si, pr_warn("Empty swap-file\n"); return -EINVAL; } - /* Check for duplicated bad swap slots.
*/ - if (si->swap_map[offset]) { - pr_warn("Duplicated bad slot offset %d\n", offset); - return -EINVAL; - } =20 - si->swap_map[offset] =3D SWAP_MAP_BAD; ci =3D cluster_info + idx; if (!ci->table) { table =3D swap_table_alloc(GFP_KERNEL); @@ -781,13 +799,21 @@ static int swap_cluster_setup_bad_slot(struct swap_in= fo_struct *si, return -ENOMEM; rcu_assign_pointer(ci->table, table); } - - ci->count++; + spin_lock(&ci->lock); + /* Check for duplicated bad swap slots. */ + if (__swap_table_xchg(ci, ci_off, SWP_TB_BAD) !=3D SWP_TB_NULL) { + pr_warn("Duplicated bad slot offset %d\n", offset); + ret =3D -EINVAL; + } else { + si->swap_map[offset] =3D SWAP_MAP_BAD; + ci->count++; + } + spin_unlock(&ci->lock); =20 WARN_ON(ci->count > SWAPFILE_CLUSTER); WARN_ON(ci->flags); =20 - return 0; + return ret; } =20 /* @@ -2743,7 +2769,7 @@ static void free_swap_cluster_info(struct swap_cluste= r_info *cluster_info, /* Cluster with bad marks count will have a remaining table */ spin_lock(&ci->lock); if (rcu_dereference_protected(ci->table, true)) { - ci->count =3D 0; + swap_cluster_assert_empty(ci, true); swap_cluster_free_table(ci); } spin_unlock(&ci->lock); --=20 2.52.0 From nobody Sun Feb 8 10:21:58 2026 Received: from mail-pj1-f51.google.com (mail-pj1-f51.google.com [209.85.216.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0C9C5238C15 for ; Sun, 25 Jan 2026 17:58:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.51 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1769363921; cv=none; b=lTltfZpswkyTtF1p3ScWUAK/APDj04U6ALeq2T7grawitNn2ML2eiIuNUmTU3rqPN5QV9P01LsVgkU/XSZXpM0xudDaHGLqTm6kSThsCVMHpaIlTzp/xz7dm9UY2bVgXAVQozqIE2QHrJ+nP3/9kjUMmDGCVzHM87LUg+R93LpA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1769363921; c=relaxed/simple; 
From: Kairui Song Date: Mon, 26 Jan 2026 01:57:31 +0800 Subject: [PATCH 08/12] mm, swap: simplify swap table sanity range check Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260126-swap-table-p3-v1-8-a74155fab9b0@tencent.com> References:
<20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com> In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com> To: linux-mm@kvack.org Cc: Andrew Morton , Kemeng Shi , Nhat Pham , Baoquan He , Barry Song , Johannes Weiner , David Hildenbrand , Lorenzo Stoakes , linux-kernel@vger.kernel.org, Chris Li , Kairui Song X-Mailer: b4 0.14.3 X-Developer-Signature: v=1; a=ed25519-sha256; t=1769363877; l=3805; i=kasong@tencent.com; s=kasong-sign-tencent; h=from:subject:message-id; bh=xEm9vmKYs9I8cGuUNSJYWeygrYtNmSS9LIdq7v6ecsc=; b=FCG72J3dYPeMjzlN2+p8N1F/IqZer4WAcNKJ/kLqN0BHkotDArPvXlR/hYqlVNWS4glF2dlTS 4NngxeKIwoFAIxg4FABZHBfwqiOh6oBbVULHlPkJ/sDZnYU4wNVFokZ X-Developer-Key: i=kasong@tencent.com; a=ed25519; pk=kCdoBuwrYph+KrkJnrr7Sm1pwwhGDdZKcKrqiK8Y1mI= From: Kairui Song The newly introduced helper, which checks bad slots and emptiness of a cluster, can cover the older sanity check just fine, with a more rigorous condition check. So merge them. Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swapfile.c | 35 +++++++++-------------------------- 1 file changed, 9 insertions(+), 26 deletions(-) diff --git a/mm/swapfile.c b/mm/swapfile.c index bdce2abd9135..968153691fc4 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -459,9 +459,11 @@ static void swap_table_free(struct swap_table *table) * One special case is that bad slots can't be freed, so check the number = of * bad slots for swapoff, and non-swapoff path must never free bad slots. 
*/ -static void swap_cluster_assert_empty(struct swap_cluster_info *ci, bool s= wapoff) +static void swap_cluster_assert_empty(struct swap_cluster_info *ci, + unsigned int ci_off, unsigned int nr, + bool swapoff) { - unsigned int ci_off =3D 0, ci_end =3D SWAPFILE_CLUSTER; + unsigned int ci_end =3D ci_off + nr; unsigned long swp_tb; int bad_slots =3D 0; =20 @@ -588,7 +590,7 @@ static void swap_cluster_schedule_discard(struct swap_i= nfo_struct *si, =20 static void __free_cluster(struct swap_info_struct *si, struct swap_cluste= r_info *ci) { - swap_cluster_assert_empty(ci, false); + swap_cluster_assert_empty(ci, 0, SWAPFILE_CLUSTER, false); swap_cluster_free_table(ci); move_cluster(si, ci, &si->free_clusters, CLUSTER_FLAG_FREE); ci->order =3D 0; @@ -898,26 +900,6 @@ static bool cluster_scan_range(struct swap_info_struct= *si, return true; } =20 -/* - * Currently, the swap table is not used for count tracking, just - * do a sanity check here to ensure nothing leaked, so the swap - * table should be empty upon freeing. 
- */ -static void swap_cluster_assert_table_empty(struct swap_cluster_info *ci, - unsigned int start, unsigned int nr) -{ - unsigned int ci_off =3D start % SWAPFILE_CLUSTER; - unsigned int ci_end =3D ci_off + nr; - unsigned long swp_tb; - - if (IS_ENABLED(CONFIG_DEBUG_VM)) { - do { - swp_tb =3D __swap_table_get(ci, ci_off); - VM_WARN_ON_ONCE(!swp_tb_is_null(swp_tb)); - } while (++ci_off < ci_end); - } -} - static bool cluster_alloc_range(struct swap_info_struct *si, struct swap_cluster_info *ci, struct folio *folio, @@ -943,13 +925,14 @@ static bool cluster_alloc_range(struct swap_info_stru= ct *si, if (likely(folio)) { order =3D folio_order(folio); nr_pages =3D 1 << order; + swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false= ); __swap_cache_add_folio(ci, folio, swp_entry(si->type, offset)); } else if (IS_ENABLED(CONFIG_HIBERNATION)) { order =3D 0; nr_pages =3D 1; WARN_ON_ONCE(si->swap_map[offset]); si->swap_map[offset] =3D 1; - swap_cluster_assert_table_empty(ci, offset, 1); + swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, 1, false); } else { /* Allocation without folio is only possible with hibernation */ WARN_ON_ONCE(1); @@ -1768,7 +1751,7 @@ void swap_entries_free(struct swap_info_struct *si, =20 mem_cgroup_uncharge_swap(entry, nr_pages); swap_range_free(si, offset, nr_pages); - swap_cluster_assert_table_empty(ci, offset, nr_pages); + swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false); =20 if (!ci->count) free_cluster(si, ci); @@ -2769,7 +2752,7 @@ static void free_swap_cluster_info(struct swap_cluste= r_info *cluster_info, /* Cluster with bad marks count will have a remaining table */ spin_lock(&ci->lock); if (rcu_dereference_protected(ci->table, true)) { - swap_cluster_assert_empty(ci, true); + swap_cluster_assert_empty(ci, 0, SWAPFILE_CLUSTER, true); swap_cluster_free_table(ci); } spin_unlock(&ci->lock); --=20 2.52.0 From nobody Sun Feb 8 10:21:58 2026 Received: from mail-pf1-f176.google.com 
SMTP id d2e1a72fcca58-8234129f6e8mr1894972b3a.44.1769363924930; Sun, 25 Jan 2026 09:58:44 -0800 (PST) Received: from [127.0.0.1] ([101.32.222.185]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-8231876e718sm7405963b3a.62.2026.01.25.09.58.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 25 Jan 2026 09:58:44 -0800 (PST) From: Kairui Song Date: Mon, 26 Jan 2026 01:57:32 +0800 Subject: [PATCH 09/12] mm, swap: use the swap table to track the swap count Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260126-swap-table-p3-v1-9-a74155fab9b0@tencent.com> References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com> In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com> To: linux-mm@kvack.org Cc: Andrew Morton , Kemeng Shi , Nhat Pham , Baoquan He , Barry Song , Johannes Weiner , David Hildenbrand , Lorenzo Stoakes , linux-kernel@vger.kernel.org, Chris Li , Kairui Song X-Mailer: b4 0.14.3 X-Developer-Signature: v=1; a=ed25519-sha256; t=1769363877; l=51000; i=kasong@tencent.com; s=kasong-sign-tencent; h=from:subject:message-id; bh=1RyAkhAcxYrri53TokcENEpCbb83sm6IY+yjwo+FPdw=; b=uz/j4PcwsPQ8Acfy44Sc08C33nstocYVMnAmgVbqAZryXl9QMj4jMSZ0sTegJtyiUzmiTbXrN SjpHNjrrdPtAgVFv5q6kjYzJ2tUgJqAtZKZ6S2DfBStwkeq5/sjarle X-Developer-Key: i=kasong@tencent.com; a=ed25519; pk=kCdoBuwrYph+KrkJnrr7Sm1pwwhGDdZKcKrqiK8Y1mI= From: Kairui Song Now that all the infrastructure is ready, switch to using the swap table only. This is unfortunately a large patch, because the whole old counting mechanism, especially SWP_CONTINUED, has to be removed and replaced by the new mechanism in one step, with no intermediate step available. The swap table is capable of holding up to SWP_TB_COUNT_MAX - 1 counts in the higher bits of each table entry, so with that, the swap_map can be completely dropped.
swap_map also had a count limit of SWAP_MAP_MAX; any value beyond that limit required a COUNT_CONTINUED page. COUNT_CONTINUED is a bit complex to maintain, so the swap table uses a simpler approach: when the count goes beyond SWP_TB_COUNT_MAX - 1, the cluster has an extend_table allocated, which is a swap cluster-sized array of unsigned long. The counting is offloaded there until the count drops back below SWP_TB_COUNT_MAX. Both the swap table and the extend table are cluster-based, so they exhibit good performance and sparsity. To make the switch from swap_map to the swap table clean, this commit cleans up and introduces a new set of functions, based on the swap table design, for manipulating swap counts: - __swap_cluster_dup_entry, __swap_cluster_put_entry, __swap_cluster_alloc_entry, __swap_cluster_free_entry: Increase/decrease the count of a swap slot, or allocate/free a swap slot. These are the internal routines that do the counting work based on the swap table and handle all the complexities; callers must lock the cluster before calling them. All swap count-related update operations are wrapped by these four helpers. - swap_dup_entries_cluster, swap_put_entries_cluster: Increase/decrease the swap count of one or a set of swap slots in the same cluster range. These two helpers serve as the common routines for folio_dup_swap & swap_dup_entry_direct, and for folio_put_swap & swap_put_entries_direct. Use these helpers to replace all existing callers. This simplifies count tracking considerably, and the swap_map is gone.
Suggested-by: Chris Li Signed-off-by: Kairui Song --- include/linux/swap.h | 28 +- mm/memory.c | 2 +- mm/swap.h | 14 +- mm/swap_state.c | 53 ++-- mm/swap_table.h | 5 + mm/swapfile.c | 773 +++++++++++++++++++----------------------------= ---- 6 files changed, 328 insertions(+), 547 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 62fc7499b408..0effe3cc50f5 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -208,7 +208,6 @@ enum { SWP_DISCARDABLE =3D (1 << 2), /* blkdev support discard */ SWP_DISCARDING =3D (1 << 3), /* now discarding a free cluster */ SWP_SOLIDSTATE =3D (1 << 4), /* blkdev seeks are cheap */ - SWP_CONTINUED =3D (1 << 5), /* swap_map has count continuation */ SWP_BLKDEV =3D (1 << 6), /* its a block device */ SWP_ACTIVATED =3D (1 << 7), /* set after swap_activate success */ SWP_FS_OPS =3D (1 << 8), /* swapfile operations go through fs */ @@ -223,16 +222,6 @@ enum { #define SWAP_CLUSTER_MAX_SKIPPED (SWAP_CLUSTER_MAX << 10) #define COMPACT_CLUSTER_MAX SWAP_CLUSTER_MAX =20 -/* Bit flag in swap_map */ -#define COUNT_CONTINUED 0x80 /* Flag swap_map continuation for full count = */ - -/* Special value in first swap_map */ -#define SWAP_MAP_MAX 0x3e /* Max count */ -#define SWAP_MAP_BAD 0x3f /* Note page is bad */ - -/* Special value in each swap_map continuation */ -#define SWAP_CONT_MAX 0x7f /* Max count */ - /* * The first page in the swap file is the swap header, which is always mar= ked * bad to prevent it from being allocated as an entry. 
This also prevents = the @@ -264,8 +253,7 @@ struct swap_info_struct { signed short prio; /* swap priority of this type */ struct plist_node list; /* entry in swap_active_head */ signed char type; /* strange name for an index */ - unsigned int max; /* extent of the swap_map */ - unsigned char *swap_map; /* vmalloc'ed array of usage counts */ + unsigned int max; /* size of this swap device */ unsigned long *zeromap; /* kvmalloc'ed bitmap to track zero pages */ struct swap_cluster_info *cluster_info; /* cluster info. Only for SSD */ struct list_head free_clusters; /* free clusters list */ @@ -284,18 +272,14 @@ struct swap_info_struct { struct completion comp; /* seldom referenced */ spinlock_t lock; /* * protect map scan related fields like - * swap_map, inuse_pages and all cluster - * lists. other fields are only changed + * inuse_pages and all cluster lists. + * Other fields are only changed * at swapon/swapoff, so are protected * by swap_lock. changing flags need * hold this lock and swap_lock. If * both locks need hold, hold swap_lock * first. */ - spinlock_t cont_lock; /* - * protect swap count continuation page - * list. 
- */ struct work_struct discard_work; /* discard worker */ struct work_struct reclaim_work; /* reclaim worker */ struct list_head discard_clusters; /* discard clusters list */ @@ -451,7 +435,6 @@ static inline long get_nr_swap_pages(void) } =20 extern void si_swapinfo(struct sysinfo *); -extern int add_swap_count_continuation(swp_entry_t, gfp_t); int swap_type_of(dev_t device, sector_t offset); int find_first_swap(dev_t *device); extern unsigned int count_swap_pages(int, int); @@ -517,11 +500,6 @@ static inline void free_swap_cache(struct folio *folio) { } =20 -static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_m= ask) -{ - return 0; -} - static inline int swap_dup_entry_direct(swp_entry_t ent) { return 0; diff --git a/mm/memory.c b/mm/memory.c index 7238dd9dd629..a5f471d79507 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1348,7 +1348,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct= vm_area_struct *src_vma, =20 if (ret =3D=3D -EIO) { VM_WARN_ON_ONCE(!entry.val); - if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) { + if (swap_retry_table_alloc(entry, GFP_KERNEL) < 0) { ret =3D -ENOMEM; goto out; } diff --git a/mm/swap.h b/mm/swap.h index bfafa637c458..751430e2d2a5 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -37,6 +37,7 @@ struct swap_cluster_info { u8 flags; u8 order; atomic_long_t __rcu *table; /* Swap table entries, see mm/swap_table.h */ + unsigned long *extend_table; /* For large swap count, protected by ci->lo= ck */ struct list_head list; }; =20 @@ -183,6 +184,8 @@ static inline void swap_cluster_unlock_irq(struct swap_= cluster_info *ci) spin_unlock_irq(&ci->lock); } =20 +extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp); + /* * Below are the core routines for doing swap for a folio. 
* All helpers requires the folio to be locked, and a locked folio @@ -206,9 +209,9 @@ int folio_dup_swap(struct folio *folio, struct page *su= bpage); void folio_put_swap(struct folio *folio, struct page *subpage); =20 /* For internal use */ -extern void swap_entries_free(struct swap_info_struct *si, - struct swap_cluster_info *ci, - unsigned long offset, unsigned int nr_pages); +extern void __swap_cluster_free_entries(struct swap_info_struct *si, + struct swap_cluster_info *ci, + unsigned int ci_off, unsigned int nr_pages); =20 /* linux/mm/page_io.c */ int sio_pool_init(void); @@ -446,6 +449,11 @@ static inline int swap_writeout(struct folio *folio, return 0; } =20 +static inline int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp) +{ + return -EINVAL; +} + static inline bool swap_cache_has_folio(swp_entry_t entry) { return false; diff --git a/mm/swap_state.c b/mm/swap_state.c index e213ee35c1d2..c808f0948b10 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -140,21 +140,20 @@ void *swap_cache_get_shadow(swp_entry_t entry) void __swap_cache_add_folio(struct swap_cluster_info *ci, struct folio *folio, swp_entry_t entry) { - unsigned long new_tb; - unsigned int ci_start, ci_off, ci_end; + unsigned int ci_off =3D swp_cluster_offset(entry), ci_end; unsigned long nr_pages =3D folio_nr_pages(folio); + unsigned long pfn =3D folio_pfn(folio); + unsigned long old_tb; =20 VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio); VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio); VM_WARN_ON_ONCE_FOLIO(!folio_test_swapbacked(folio), folio); =20 - new_tb =3D folio_to_swp_tb(folio, 0); - ci_start =3D swp_cluster_offset(entry); - ci_off =3D ci_start; - ci_end =3D ci_start + nr_pages; + ci_end =3D ci_off + nr_pages; do { - VM_WARN_ON_ONCE(swp_tb_is_folio(__swap_table_get(ci, ci_off))); - __swap_table_set(ci, ci_off, new_tb); + old_tb =3D __swap_table_get(ci, ci_off); + VM_WARN_ON_ONCE(swp_tb_is_folio(old_tb)); + __swap_table_set(ci, ci_off, pfn_to_swp_tb(pfn, 
__swp_tb_get_count(old_t= b))); } while (++ci_off < ci_end); =20 folio_ref_add(folio, nr_pages); @@ -183,14 +182,13 @@ static int swap_cache_add_folio(struct folio *folio, = swp_entry_t entry, unsigned long old_tb; struct swap_info_struct *si; struct swap_cluster_info *ci; - unsigned int ci_start, ci_off, ci_end, offset; + unsigned int ci_start, ci_off, ci_end; unsigned long nr_pages =3D folio_nr_pages(folio); =20 si =3D __swap_entry_to_info(entry); ci_start =3D swp_cluster_offset(entry); ci_end =3D ci_start + nr_pages; ci_off =3D ci_start; - offset =3D swp_offset(entry); ci =3D swap_cluster_lock(si, swp_offset(entry)); if (unlikely(!ci->table)) { err =3D -ENOENT; @@ -202,13 +200,12 @@ static int swap_cache_add_folio(struct folio *folio, = swp_entry_t entry, err =3D -EEXIST; goto failed; } - if (unlikely(!__swap_count(swp_entry(swp_type(entry), offset)))) { + if (unlikely(!__swp_tb_get_count(old_tb))) { err =3D -ENOENT; goto failed; } if (swp_tb_is_shadow(old_tb)) shadow =3D swp_tb_to_shadow(old_tb); - offset++; } while (++ci_off < ci_end); __swap_cache_add_folio(ci, folio, entry); swap_cluster_unlock(ci); @@ -237,8 +234,9 @@ static int swap_cache_add_folio(struct folio *folio, sw= p_entry_t entry, void __swap_cache_del_folio(struct swap_cluster_info *ci, struct folio *fo= lio, swp_entry_t entry, void *shadow) { + int count; + unsigned long old_tb; struct swap_info_struct *si; - unsigned long old_tb, new_tb; unsigned int ci_start, ci_off, ci_end; bool folio_swapped =3D false, need_free =3D false; unsigned long nr_pages =3D folio_nr_pages(folio); @@ -249,20 +247,20 @@ void __swap_cache_del_folio(struct swap_cluster_info = *ci, struct folio *folio, VM_WARN_ON_ONCE_FOLIO(folio_test_writeback(folio), folio); =20 si =3D __swap_entry_to_info(entry); - new_tb =3D shadow_to_swp_tb(shadow, 0); ci_start =3D swp_cluster_offset(entry); ci_end =3D ci_start + nr_pages; ci_off =3D ci_start; do { - /* If shadow is NULL, we sets an empty shadow */ - old_tb =3D __swap_table_xchg(ci, 
ci_off, new_tb); + old_tb =3D __swap_table_get(ci, ci_off); WARN_ON_ONCE(!swp_tb_is_folio(old_tb) || swp_tb_to_folio(old_tb) !=3D folio); - if (__swap_count(swp_entry(si->type, - swp_offset(entry) + ci_off - ci_start))) + count =3D __swp_tb_get_count(old_tb); + if (count) folio_swapped =3D true; else need_free =3D true; + /* If shadow is NULL, we set an empty shadow. */ + __swap_table_set(ci, ci_off, shadow_to_swp_tb(shadow, count)); } while (++ci_off < ci_end); =20 folio->swap.val =3D 0; @@ -271,13 +269,13 @@ void __swap_cache_del_folio(struct swap_cluster_info = *ci, struct folio *folio, lruvec_stat_mod_folio(folio, NR_SWAPCACHE, -nr_pages); =20 if (!folio_swapped) { - swap_entries_free(si, ci, swp_offset(entry), nr_pages); + __swap_cluster_free_entries(si, ci, ci_start, nr_pages); } else if (need_free) { + ci_off =3D ci_start; do { - if (!__swap_count(entry)) - swap_entries_free(si, ci, swp_offset(entry), 1); - entry.val++; - } while (--nr_pages); + if (!__swp_tb_get_count(__swap_table_get(ci, ci_off))) + __swap_cluster_free_entries(si, ci, ci_off, 1); + } while (++ci_off < ci_end); } } =20 @@ -324,17 +322,18 @@ void __swap_cache_replace_folio(struct swap_cluster_i= nfo *ci, unsigned long nr_pages =3D folio_nr_pages(new); unsigned int ci_off =3D swp_cluster_offset(entry); unsigned int ci_end =3D ci_off + nr_pages; - unsigned long old_tb, new_tb; + unsigned long pfn =3D folio_pfn(new); + unsigned long old_tb; =20 VM_WARN_ON_ONCE(!folio_test_swapcache(old) || !folio_test_swapcache(new)); VM_WARN_ON_ONCE(!folio_test_locked(old) || !folio_test_locked(new)); VM_WARN_ON_ONCE(!entry.val); =20 /* Swap cache still stores N entries instead of a high-order entry */ - new_tb =3D folio_to_swp_tb(new, 0); do { - old_tb =3D __swap_table_xchg(ci, ci_off, new_tb); + old_tb =3D __swap_table_get(ci, ci_off); WARN_ON_ONCE(!swp_tb_is_folio(old_tb) || swp_tb_to_folio(old_tb) !=3D ol= d); + __swap_table_set(ci, ci_off, pfn_to_swp_tb(pfn, __swp_tb_get_count(old_t= b))); } while
(++ci_off < ci_end); =20 /* @@ -368,7 +367,7 @@ void __swap_cache_clear_shadow(swp_entry_t entry, int n= r_ents) ci_end =3D ci_off + nr_ents; do { old =3D __swap_table_xchg(ci, ci_off, null_to_swp_tb()); - WARN_ON_ONCE(swp_tb_is_folio(old)); + WARN_ON_ONCE(swp_tb_is_folio(old) || swp_tb_get_count(old)); } while (++ci_off < ci_end); } =20 diff --git a/mm/swap_table.h b/mm/swap_table.h index 9c4083e4e4f2..dce55b48fb74 100644 --- a/mm/swap_table.h +++ b/mm/swap_table.h @@ -184,6 +184,11 @@ static inline int swp_tb_get_count(unsigned long swp_t= b) return -EINVAL; } =20 +static inline unsigned long __swp_tb_mk_count(unsigned long swp_tb, int co= unt) +{ + return ((swp_tb & ~SWP_TB_COUNT_MASK) | __count_to_swp_tb(count)); +} + /* * Helpers for accessing or modifying the swap table of a cluster, * the swap cluster must be locked. diff --git a/mm/swapfile.c b/mm/swapfile.c index 968153691fc4..2febe868986e 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -51,15 +51,8 @@ #include "swap_table.h" #include "swap.h" =20 -static bool swap_count_continued(struct swap_info_struct *, pgoff_t, - unsigned char); -static void free_swap_count_continuations(struct swap_info_struct *); static void swap_range_alloc(struct swap_info_struct *si, unsigned int nr_entries); -static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr= ); -static void swap_put_entry_locked(struct swap_info_struct *si, - struct swap_cluster_info *ci, - unsigned long offset); static bool folio_swapcache_freeable(struct folio *folio); static void move_cluster(struct swap_info_struct *si, struct swap_cluster_info *ci, struct list_head *list, @@ -182,22 +175,19 @@ static long swap_usage_in_pages(struct swap_info_stru= ct *si) /* Reclaim the swap entry if swap is getting full */ #define TTRS_FULL 0x4 =20 -static bool swap_only_has_cache(struct swap_info_struct *si, - struct swap_cluster_info *ci, +static bool swap_only_has_cache(struct swap_cluster_info *ci, unsigned long offset, int nr_pages) { 
unsigned int ci_off =3D offset % SWAPFILE_CLUSTER; - unsigned char *map =3D si->swap_map + offset; - unsigned char *map_end =3D map + nr_pages; + unsigned int ci_end =3D ci_off + nr_pages; unsigned long swp_tb; =20 do { swp_tb =3D __swap_table_get(ci, ci_off); VM_WARN_ON_ONCE(!swp_tb_is_folio(swp_tb)); - if (*map) + if (swp_tb_get_count(swp_tb)) return false; - ++ci_off; - } while (++map < map_end); + } while (++ci_off < ci_end); =20 return true; } @@ -256,7 +246,7 @@ static int __try_to_reclaim_swap(struct swap_info_struc= t *si, * reference or pending writeback, and can't be allocated to others. */ ci =3D swap_cluster_lock(si, offset); - need_reclaim =3D swap_only_has_cache(si, ci, offset, nr_pages); + need_reclaim =3D swap_only_has_cache(ci, offset, nr_pages); swap_cluster_unlock(ci); if (!need_reclaim) goto out_unlock; @@ -479,6 +469,7 @@ static void swap_cluster_assert_empty(struct swap_clust= er_info *ci, } while (++ci_off < ci_end); =20 WARN_ON_ONCE(bad_slots !=3D (swapoff ? ci->count : 0)); + WARN_ON_ONCE(nr =3D=3D SWAPFILE_CLUSTER && ci->extend_table); } =20 static void swap_cluster_free_table(struct swap_cluster_info *ci) @@ -529,7 +520,7 @@ swap_cluster_alloc_table(struct swap_info_struct *si, spin_unlock(&si->global_cluster_lock); local_unlock(&percpu_swap_cluster.lock); =20 - table =3D swap_table_alloc(__GFP_HIGH | __GFP_NOMEMALLOC | GFP_KERNEL); + table =3D swap_table_alloc(__GFP_HIGH | __GFP_NOMEMALLOC | GFP_KERNEL | _= _GFP_NOWARN); =20 /* * Back to atomic context. 
We might have migrated to a new CPU with a @@ -807,7 +798,6 @@ static int swap_cluster_setup_bad_slot(struct swap_info= _struct *si, pr_warn("Duplicated bad slot offset %d\n", offset); ret =3D -EINVAL; } else { - si->swap_map[offset] =3D SWAP_MAP_BAD; ci->count++; } spin_unlock(&ci->lock); @@ -829,18 +819,16 @@ static bool cluster_reclaim_range(struct swap_info_st= ruct *si, { unsigned int nr_pages =3D 1 << order; unsigned long offset =3D start, end =3D start + nr_pages; - unsigned char *map =3D si->swap_map; unsigned long swp_tb; =20 spin_unlock(&ci->lock); do { - if (READ_ONCE(map[offset])) - break; swp_tb =3D swap_table_get(ci, offset % SWAPFILE_CLUSTER); - if (swp_tb_is_folio(swp_tb)) { + if (swp_tb_get_count(swp_tb)) + break; + if (swp_tb_is_folio(swp_tb)) if (__try_to_reclaim_swap(si, offset, TTRS_ANYWAY) < 0) break; - } } while (++offset < end); spin_lock(&ci->lock); =20 @@ -864,7 +852,7 @@ static bool cluster_reclaim_range(struct swap_info_stru= ct *si, */ for (offset =3D start; offset < end; offset++) { swp_tb =3D __swap_table_get(ci, offset % SWAPFILE_CLUSTER); - if (map[offset] || !swp_tb_is_null(swp_tb)) + if (!swp_tb_is_null(swp_tb)) return false; } =20 @@ -876,37 +864,35 @@ static bool cluster_scan_range(struct swap_info_struc= t *si, unsigned long offset, unsigned int nr_pages, bool *need_reclaim) { - unsigned long end =3D offset + nr_pages; - unsigned char *map =3D si->swap_map; + unsigned int ci_off =3D offset % SWAPFILE_CLUSTER; + unsigned int ci_end =3D ci_off + nr_pages; unsigned long swp_tb; =20 - if (cluster_is_empty(ci)) - return true; - do { - if (map[offset]) - return false; - swp_tb =3D __swap_table_get(ci, offset % SWAPFILE_CLUSTER); - if (swp_tb_is_folio(swp_tb)) { + swp_tb =3D __swap_table_get(ci, ci_off); + if (swp_tb_is_null(swp_tb)) + continue; + if (swp_tb_is_folio(swp_tb) && !__swp_tb_get_count(swp_tb)) { if (!vm_swap_full()) return false; *need_reclaim =3D true; - } else { - /* A entry with no count and no cache must be null */ - 
VM_WARN_ON_ONCE(!swp_tb_is_null(swp_tb)); + continue; } - } while (++offset < end); + /* Slot with zero count can only be NULL or folio */ + VM_WARN_ON(!swp_tb_get_count(swp_tb)); + return false; + } while (++ci_off < ci_end); =20 return true; } =20 -static bool cluster_alloc_range(struct swap_info_struct *si, - struct swap_cluster_info *ci, - struct folio *folio, - unsigned int offset) +static bool __swap_cluster_alloc_entries(struct swap_info_struct *si, + struct swap_cluster_info *ci, + struct folio *folio, + unsigned int ci_off) { - unsigned long nr_pages; unsigned int order; + unsigned long nr_pages; =20 lockdep_assert_held(&ci->lock); =20 @@ -925,14 +911,15 @@ static bool cluster_alloc_range(struct swap_info_stru= ct *si, if (likely(folio)) { order =3D folio_order(folio); nr_pages =3D 1 << order; - swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false= ); - __swap_cache_add_folio(ci, folio, swp_entry(si->type, offset)); + swap_cluster_assert_empty(ci, ci_off, nr_pages, false); + __swap_cache_add_folio(ci, folio, swp_entry(si->type, + ci_off + cluster_offset(si, ci))); } else if (IS_ENABLED(CONFIG_HIBERNATION)) { order =3D 0; nr_pages =3D 1; - WARN_ON_ONCE(si->swap_map[offset]); - si->swap_map[offset] =3D 1; - swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, 1, false); + swap_cluster_assert_empty(ci, ci_off, 1, false); + /* Sets a fake shadow as placeholder */ + __swap_table_set(ci, ci_off, shadow_to_swp_tb(NULL, 1)); } else { /* Allocation without folio is only possible with hibernation */ WARN_ON_ONCE(1); @@ -983,7 +970,7 @@ static unsigned int alloc_swap_scan_cluster(struct swap= _info_struct *si, if (!ret) continue; } - if (!cluster_alloc_range(si, ci, folio, offset)) + if (!__swap_cluster_alloc_entries(si, ci, folio, offset % SWAPFILE_CLUST= ER)) break; found =3D offset; offset +=3D nr_pages; @@ -1030,7 +1017,7 @@ static void swap_reclaim_full_clusters(struct swap_in= fo_struct *si, bool force) long to_scan =3D 1; unsigned long 
offset, end; struct swap_cluster_info *ci; - unsigned char *map =3D si->swap_map; + unsigned long swp_tb; int nr_reclaim; =20 if (force) @@ -1042,8 +1029,8 @@ static void swap_reclaim_full_clusters(struct swap_in= fo_struct *si, bool force) to_scan--; =20 while (offset < end) { - if (!READ_ONCE(map[offset]) && - swp_tb_is_folio(swap_table_get(ci, offset % SWAPFILE_CLUSTER))) { + swp_tb =3D swap_table_get(ci, offset % SWAPFILE_CLUSTER); + if (swp_tb_is_folio(swp_tb) && !__swp_tb_get_count(swp_tb)) { spin_unlock(&ci->lock); nr_reclaim =3D __try_to_reclaim_swap(si, offset, TTRS_ANYWAY); @@ -1452,40 +1439,126 @@ static bool swap_sync_discard(void) return false; } =20 +/* + * Allocate an array of unsigned long to contain counts above SWP_TB_COUNT= _MAX. + */ +static int swap_extend_table_alloc(struct swap_info_struct *si, + struct swap_cluster_info *ci, gfp_t gfp) +{ + void *table =3D kzalloc(sizeof(unsigned long) * SWAPFILE_CLUSTER, gfp); + + if (!table) + return -ENOMEM; + + spin_lock(&ci->lock); + if (!ci->extend_table) + ci->extend_table =3D table; + else + kfree(table); + spin_unlock(&ci->lock); + return 0; +} + +int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp) +{ + int ret; + struct swap_info_struct *si; + struct swap_cluster_info *ci; + unsigned long offset =3D swp_offset(entry); + + si =3D get_swap_device(entry); + if (!si) + return 0; + + ci =3D __swap_offset_to_cluster(si, offset); + ret =3D swap_extend_table_alloc(si, ci, gfp); + + put_swap_device(si); + return ret; +} + +static void swap_extend_table_try_free(struct swap_cluster_info *ci) +{ + unsigned long i; + bool can_free =3D true; + + for (i =3D 0; i < SWAPFILE_CLUSTER; i++) { + if (ci->extend_table[i]) + can_free =3D false; + } + + if (can_free) { + kfree(ci->extend_table); + ci->extend_table =3D NULL; + } +} + +/* Decrease the swap count of one slot, without freeing it */ +static void __swap_cluster_put_entry(struct swap_cluster_info *ci, + unsigned int ci_off) +{ + int count; + unsigned long 
swp_tb; + + lockdep_assert_held(&ci->lock); + swp_tb =3D __swap_table_get(ci, ci_off); + count =3D __swp_tb_get_count(swp_tb); + + VM_WARN_ON_ONCE(count <=3D 0); + VM_WARN_ON_ONCE(count > SWP_TB_COUNT_MAX); + + if (count =3D=3D SWP_TB_COUNT_MAX) { + count =3D ci->extend_table[ci_off]; + /* Overflow starts with SWP_TB_COUNT_MAX */ + VM_WARN_ON_ONCE(count < SWP_TB_COUNT_MAX); + count--; + if (count =3D=3D (SWP_TB_COUNT_MAX - 1)) { + ci->extend_table[ci_off] =3D 0; + __swap_table_set(ci, ci_off, __swp_tb_mk_count(swp_tb, count)); + swap_extend_table_try_free(ci); + } else { + ci->extend_table[ci_off] =3D count; + } + } else { + __swap_table_set(ci, ci_off, __swp_tb_mk_count(swp_tb, --count)); + } +} + /** - * swap_put_entries_cluster - Decrease the swap count of a set of slots. + * swap_put_entries_cluster - Decrease the swap count of slots within one = cluster * @si: The swap device. - * @start: start offset of slots. + * @offset: start offset of slots. * @nr: number of slots. - * @reclaim_cache: if true, also reclaim the swap cache. + * @reclaim_cache: if true, also reclaim the swap cache if slots are freed. * * This helper decreases the swap count of a set of slots and tries to * batch free them. Also reclaims the swap cache if @reclaim_cache is true. - * Context: The caller must ensure that all slots belong to the same - * cluster and their swap count doesn't go underflow. + * + * Context: The specified slots must be pinned by existing swap count or s= wap + * cache reference, so they won't be released until this helper returns. 
*/ static void swap_put_entries_cluster(struct swap_info_struct *si, - unsigned long start, int nr, + pgoff_t offset, int nr, bool reclaim_cache) { - unsigned long offset =3D start, end =3D start + nr; - unsigned long batch_start =3D SWAP_ENTRY_INVALID; struct swap_cluster_info *ci; + unsigned int ci_off, ci_end; + pgoff_t end =3D offset + nr; bool need_reclaim =3D false; unsigned int nr_reclaimed; unsigned long swp_tb; - unsigned int count; + int ci_batch =3D -1; =20 ci =3D swap_cluster_lock(si, offset); + ci_off =3D offset % SWAPFILE_CLUSTER; + ci_end =3D ci_off + nr; do { - swp_tb =3D __swap_table_get(ci, offset % SWAPFILE_CLUSTER); - count =3D si->swap_map[offset]; - VM_WARN_ON(count < 1 || count =3D=3D SWAP_MAP_BAD); - if (count =3D=3D 1) { + swp_tb =3D __swap_table_get(ci, ci_off); + if (swp_tb_get_count(swp_tb) =3D=3D 1) { /* count =3D=3D 1 and non-cached slots will be batch freed. */ if (!swp_tb_is_folio(swp_tb)) { - if (!batch_start) - batch_start =3D offset; + if (ci_batch =3D=3D -1) + ci_batch =3D ci_off; continue; } /* count will be 0 after put, slot can be reclaimed */ @@ -1497,21 +1570,20 @@ static void swap_put_entries_cluster(struct swap_in= fo_struct *si, * slots will be freed when folio is removed from swap cache * (__swap_cache_del_folio). 
*/ - swap_put_entry_locked(si, ci, offset); - if (batch_start) { - swap_entries_free(si, ci, batch_start, offset - batch_start); - batch_start =3D SWAP_ENTRY_INVALID; + __swap_cluster_put_entry(ci, ci_off); + if (ci_batch !=3D -1) { + __swap_cluster_free_entries(si, ci, ci_batch, ci_off - ci_batch); + ci_batch =3D -1; } - } while (++offset < end); + } while (++ci_off < ci_end); =20 - if (batch_start) - swap_entries_free(si, ci, batch_start, offset - batch_start); + if (ci_batch !=3D -1) + __swap_cluster_free_entries(si, ci, ci_batch, ci_off - ci_batch); swap_cluster_unlock(ci); =20 if (!need_reclaim || !reclaim_cache) return; =20 - offset =3D start; do { nr_reclaimed =3D __try_to_reclaim_swap(si, offset, TTRS_UNMAPPED | TTRS_FULL); @@ -1521,6 +1593,90 @@ static void swap_put_entries_cluster(struct swap_inf= o_struct *si, } while (offset < end); } =20 +/* Increase the swap count of one slot. */ +static int __swap_cluster_dup_entry(struct swap_cluster_info *ci, + unsigned int ci_off) +{ + int count; + unsigned long swp_tb; + + lockdep_assert_held(&ci->lock); + swp_tb =3D __swap_table_get(ci, ci_off); + /* Bad or special slots can't be handled */ + if (WARN_ON_ONCE(swp_tb_is_bad(swp_tb))) + return -EINVAL; + count =3D __swp_tb_get_count(swp_tb); + /* Must be either cached or have a count already */ + if (WARN_ON_ONCE(!count && !swp_tb_is_folio(swp_tb))) + return -ENOENT; + + if (likely(count < (SWP_TB_COUNT_MAX - 1))) { + __swap_table_set(ci, ci_off, __swp_tb_mk_count(swp_tb, count + 1)); + VM_WARN_ON_ONCE(ci->extend_table && ci->extend_table[ci_off]); + } else if (count =3D=3D (SWP_TB_COUNT_MAX - 1)) { + if (ci->extend_table) { + VM_WARN_ON_ONCE(ci->extend_table[ci_off]); + ci->extend_table[ci_off] =3D SWP_TB_COUNT_MAX; + __swap_table_set(ci, ci_off, __swp_tb_mk_count(swp_tb, SWP_TB_COUNT_MAX= )); + } else { + return -ENOMEM; + } + } else if (count =3D=3D SWP_TB_COUNT_MAX) { + ++ci->extend_table[ci_off]; + } else { + /* Never happens unless counting went wrong */ + 
WARN_ON_ONCE(1); + } + + return 0; +} + +/** + * swap_dup_entries_cluster: Increase the swap count of slots within one c= luster. + * @si: The swap device. + * @offset: start offset of slots. + * @nr: number of slots. + * + * Context: The specified slots must be pinned by existing swap count or s= wap + * cache reference, so they won't be released until this helper returns. + * Return: 0 on success. -ENOMEM if the swap count maxed out (SWP_TB_COUNT= _MAX) + * and failed to allocate an extended table. + */ +static int swap_dup_entries_cluster(struct swap_info_struct *si, + pgoff_t offset, int nr) +{ + int err; + struct swap_cluster_info *ci; + unsigned int ci_start, ci_off, ci_end; + + ci_start =3D offset % SWAPFILE_CLUSTER; + ci_end =3D ci_start + nr; + ci_off =3D ci_start; + ci =3D swap_cluster_lock(si, offset); +restart: + do { + err =3D __swap_cluster_dup_entry(ci, ci_off); + if (unlikely(err)) { + if (err =3D=3D -ENOMEM) { + spin_unlock(&ci->lock); + err =3D swap_extend_table_alloc(si, ci, GFP_ATOMIC); + spin_lock(&ci->lock); + if (!err) + goto restart; + } + goto failed; + } + } while (++ci_off < ci_end); + swap_cluster_unlock(ci); + return 0; +failed: + while (ci_off-- > ci_start) + __swap_cluster_put_entry(ci, ci_off); + swap_extend_table_try_free(ci); + swap_cluster_unlock(ci); + return err; +} + /** * folio_alloc_swap - allocate swap space for a folio * @folio: folio we want to move to swap @@ -1595,7 +1751,6 @@ int folio_alloc_swap(struct folio *folio) */ int folio_dup_swap(struct folio *folio, struct page *subpage) { - int err =3D 0; swp_entry_t entry =3D folio->swap; unsigned long nr_pages =3D folio_nr_pages(folio); =20 @@ -1607,10 +1762,8 @@ int folio_dup_swap(struct folio *folio, struct page = *subpage) nr_pages =3D 1; } =20 - while (!err && __swap_duplicate(entry, 1, nr_pages) =3D=3D -ENOMEM) - err =3D add_swap_count_continuation(entry, GFP_ATOMIC); - - return err; + return swap_dup_entries_cluster(swap_entry_to_info(entry), + swp_offset(entry), 
nr_pages); } =20 /** @@ -1639,28 +1792,6 @@ void folio_put_swap(struct folio *folio, struct page= *subpage) swap_put_entries_cluster(si, swp_offset(entry), nr_pages, false); } =20 -static void swap_put_entry_locked(struct swap_info_struct *si, - struct swap_cluster_info *ci, - unsigned long offset) -{ - unsigned char count; - - count =3D si->swap_map[offset]; - if ((count & ~COUNT_CONTINUED) <=3D SWAP_MAP_MAX) { - if (count =3D=3D COUNT_CONTINUED) { - if (swap_count_continued(si, offset, count)) - count =3D SWAP_MAP_MAX | COUNT_CONTINUED; - else - count =3D SWAP_MAP_MAX; - } else - count--; - } - - WRITE_ONCE(si->swap_map[offset], count); - if (!count && !swp_tb_is_folio(__swap_table_get(ci, offset % SWAPFILE_CLU= STER))) - swap_entries_free(si, ci, offset, 1); -} - /* * When we get a swap entry, if there aren't some other ways to * prevent swapoff, such as the folio in swap cache is locked, RCU @@ -1727,31 +1858,30 @@ struct swap_info_struct *get_swap_device(swp_entry_= t entry) } =20 /* - * Drop the last ref of swap entries, caller have to ensure all entries - * belong to the same cgroup and cluster. + * Free a set of swap slots after their swap count dropped to zero, or wil= l be + * zero after putting the last ref (saves one __swap_cluster_put_entry cal= l). 
*/ -void swap_entries_free(struct swap_info_struct *si, - struct swap_cluster_info *ci, - unsigned long offset, unsigned int nr_pages) +void __swap_cluster_free_entries(struct swap_info_struct *si, + struct swap_cluster_info *ci, + unsigned int ci_start, unsigned int nr_pages) { - swp_entry_t entry =3D swp_entry(si->type, offset); - unsigned char *map =3D si->swap_map + offset; - unsigned char *map_end =3D map + nr_pages; + unsigned long old_tb; + unsigned int ci_off =3D ci_start, ci_end =3D ci_start + nr_pages; + unsigned long offset =3D cluster_offset(si, ci) + ci_start; =20 - /* It should never free entries across different clusters */ - VM_BUG_ON(ci !=3D __swap_offset_to_cluster(si, offset + nr_pages - 1)); - VM_BUG_ON(cluster_is_empty(ci)); - VM_BUG_ON(ci->count < nr_pages); + VM_WARN_ON(ci->count < nr_pages); =20 ci->count -=3D nr_pages; do { - VM_WARN_ON(*map > 1); - *map =3D 0; - } while (++map < map_end); + old_tb =3D __swap_table_get(ci, ci_off); + /* Release the last ref, or after swap cache is dropped */ + VM_WARN_ON(!swp_tb_is_shadow(old_tb) || __swp_tb_get_count(old_tb) > 1); + __swap_table_set(ci, ci_off, null_to_swp_tb()); + } while (++ci_off < ci_end); =20 - mem_cgroup_uncharge_swap(entry, nr_pages); + mem_cgroup_uncharge_swap(swp_entry(si->type, offset), nr_pages); swap_range_free(si, offset, nr_pages); - swap_cluster_assert_empty(ci, offset % SWAPFILE_CLUSTER, nr_pages, false); + swap_cluster_assert_empty(ci, ci_start, nr_pages, false); =20 if (!ci->count) free_cluster(si, ci); @@ -1761,10 +1891,10 @@ void swap_entries_free(struct swap_info_struct *si, =20 int __swap_count(swp_entry_t entry) { - struct swap_info_struct *si =3D __swap_entry_to_info(entry); - pgoff_t offset =3D swp_offset(entry); + struct swap_cluster_info *ci =3D __swap_entry_to_cluster(entry); + unsigned int ci_off =3D swp_cluster_offset(entry); =20 - return si->swap_map[offset]; + return swp_tb_get_count(__swap_table_get(ci, ci_off)); } =20 /** @@ -1776,81 +1906,62 @@ bool 
swap_entry_swapped(struct swap_info_struct *si, swp_entry_t entry)
 {
 	pgoff_t offset = swp_offset(entry);
 	struct swap_cluster_info *ci;
-	int count;
+	unsigned long swp_tb;
 
 	ci = swap_cluster_lock(si, offset);
-	count = si->swap_map[offset];
+	swp_tb = swap_table_get(ci, offset % SWAPFILE_CLUSTER);
 	swap_cluster_unlock(ci);
 
-	return count && count != SWAP_MAP_BAD;
+	return swp_tb_get_count(swp_tb) > 0;
 }
 
 /*
  * How many references to @entry are currently swapped out?
- * This considers COUNT_CONTINUED so it returns exact answer.
+ * This returns an exact answer.
  */
 int swp_swapcount(swp_entry_t entry)
 {
-	int count, tmp_count, n;
 	struct swap_info_struct *si;
 	struct swap_cluster_info *ci;
-	struct page *page;
-	pgoff_t offset;
-	unsigned char *map;
+	unsigned long swp_tb;
+	int count;
 
 	si = get_swap_device(entry);
 	if (!si)
 		return 0;
 
-	offset = swp_offset(entry);
-
-	ci = swap_cluster_lock(si, offset);
-
-	count = si->swap_map[offset];
-	if (!(count & COUNT_CONTINUED))
-		goto out;
-
-	count &= ~COUNT_CONTINUED;
-	n = SWAP_MAP_MAX + 1;
-
-	page = vmalloc_to_page(si->swap_map + offset);
-	offset &= ~PAGE_MASK;
-	VM_BUG_ON(page_private(page) != SWP_CONTINUED);
-
-	do {
-		page = list_next_entry(page, lru);
-		map = kmap_local_page(page);
-		tmp_count = map[offset];
-		kunmap_local(map);
-
-		count += (tmp_count & ~COUNT_CONTINUED) * n;
-		n *= (SWAP_CONT_MAX + 1);
-	} while (tmp_count & COUNT_CONTINUED);
-out:
+	ci = swap_cluster_lock(si, swp_offset(entry));
+	swp_tb = __swap_table_get(ci, swp_cluster_offset(entry));
+	count = swp_tb_get_count(swp_tb);
+	if (count == SWP_TB_COUNT_MAX)
+		count = ci->extend_table[swp_cluster_offset(entry)];
 	swap_cluster_unlock(ci);
 	put_swap_device(si);
-	return count;
+
+	return count < 0 ? 0 : count;
 }
 
 static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 					 swp_entry_t entry, int order)
 {
 	struct swap_cluster_info *ci;
-	unsigned char *map = si->swap_map;
 	unsigned int nr_pages = 1 << order;
 	unsigned long roffset = swp_offset(entry);
 	unsigned long offset = round_down(roffset, nr_pages);
+	unsigned int ci_off;
 	int i;
 	bool ret = false;
 
 	ci = swap_cluster_lock(si, offset);
 	if (nr_pages == 1) {
-		if (map[roffset])
+		ci_off = roffset % SWAPFILE_CLUSTER;
+		if (swp_tb_get_count(__swap_table_get(ci, ci_off)))
 			ret = true;
 		goto unlock_out;
 	}
 	for (i = 0; i < nr_pages; i++) {
-		if (map[offset + i]) {
+		ci_off = (offset + i) % SWAPFILE_CLUSTER;
+		if (swp_tb_get_count(__swap_table_get(ci, ci_off))) {
 			ret = true;
 			break;
 		}
@@ -2005,7 +2116,8 @@ void swap_free_hibernation_slot(swp_entry_t entry)
 		return;
 
 	ci = swap_cluster_lock(si, offset);
-	swap_put_entry_locked(si, ci, offset);
+	__swap_cluster_put_entry(ci, offset % SWAPFILE_CLUSTER);
+	__swap_cluster_free_entries(si, ci, offset % SWAPFILE_CLUSTER, 1);
 	swap_cluster_unlock(ci);
 
 	/* In theory readahead might add it to the swap cache by accident */
@@ -2237,6 +2349,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	do {
 		struct folio *folio;
 		unsigned long offset;
+		unsigned long swp_tb;
 		unsigned char swp_count;
 		softleaf_t entry;
 		int ret;
@@ -2273,8 +2386,10 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 						&vmf);
 		}
 		if (!folio) {
-			swp_count = READ_ONCE(si->swap_map[offset]);
-			if (swp_count == 0 || swp_count == SWAP_MAP_BAD)
+			swp_tb = swap_table_get(__swap_entry_to_cluster(entry),
+						swp_cluster_offset(entry));
+			swp_count = swp_tb_get_count(swp_tb);
+			if (swp_count <= 0)
 				continue;
 			return -ENOMEM;
 		}
@@ -2402,7 +2517,7 @@ static int unuse_mm(struct mm_struct *mm, unsigned int type)
 	}
 
 /*
- * Scan swap_map from current position to next entry still in use.
+ * Scan swap table from current position to next entry still in use.
  * Return 0 if there are no inuse entries after prev till end of
  * the map.
  */
@@ -2411,7 +2526,6 @@ static unsigned int find_next_to_unuse(struct swap_info_struct *si,
 {
 	unsigned int i;
 	unsigned long swp_tb;
-	unsigned char count;
 
 	/*
 	 * No need for swap_lock here: we're just looking
@@ -2420,12 +2534,9 @@ static unsigned int find_next_to_unuse(struct swap_info_struct *si,
 	 * allocations from this area (while holding swap_lock).
 	 */
 	for (i = prev + 1; i < si->max; i++) {
-		count = READ_ONCE(si->swap_map[i]);
 		swp_tb = swap_table_get(__swap_offset_to_cluster(si, i),
 					i % SWAPFILE_CLUSTER);
-		if (count == SWAP_MAP_BAD)
-			continue;
-		if (count || swp_tb_is_folio(swp_tb))
+		if (!swp_tb_is_null(swp_tb) && !swp_tb_is_bad(swp_tb))
 			break;
 		if ((i % LATENCY_LIMIT) == 0)
 			cond_resched();
@@ -2785,7 +2896,6 @@ static void flush_percpu_swap_cluster(struct swap_info_struct *si)
 SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 {
 	struct swap_info_struct *p = NULL;
-	unsigned char *swap_map;
 	unsigned long *zeromap;
 	struct swap_cluster_info *cluster_info;
 	struct file *swap_file, *victim;
@@ -2868,8 +2978,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	flush_percpu_swap_cluster(p);
 
 	destroy_swap_extents(p, p->swap_file);
-	if (p->flags & SWP_CONTINUED)
-		free_swap_count_continuations(p);
 
 	if (!(p->flags & SWP_SOLIDSTATE))
 		atomic_dec(&nr_rotate_swap);
@@ -2881,8 +2989,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 
 	swap_file = p->swap_file;
 	p->swap_file = NULL;
-	swap_map = p->swap_map;
-	p->swap_map = NULL;
 	zeromap = p->zeromap;
 	p->zeromap = NULL;
 	maxpages = p->max;
@@ -2896,7 +3002,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	mutex_unlock(&swapon_mutex);
 	kfree(p->global_cluster);
 	p->global_cluster = NULL;
-	vfree(swap_map);
 	kvfree(zeromap);
 	free_swap_cluster_info(cluster_info, maxpages);
 	/* Destroy swap account information */
@@ -3118,7 +3223,6 @@ static struct swap_info_struct *alloc_swap_info(void)
 		kvfree(defer);
 	}
 	spin_lock_init(&p->lock);
-	spin_lock_init(&p->cont_lock);
 	atomic_long_set(&p->inuse_pages, SWAP_USAGE_OFFLIST_BIT);
 	init_completion(&p->comp);
 
@@ -3245,19 +3349,6 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
 	return maxpages;
 }
 
-static int setup_swap_map(struct swap_info_struct *si,
-			  union swap_header *swap_header,
-			  unsigned long maxpages)
-{
-	unsigned char *swap_map;
-
-	swap_map = vzalloc(maxpages);
-	si->swap_map = swap_map;
-	if (!swap_map)
-		return -ENOMEM;
-	return 0;
-}
-
 static int setup_swap_clusters_info(struct swap_info_struct *si,
 				    union swap_header *swap_header,
 				    unsigned long maxpages)
@@ -3449,11 +3540,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 
 	maxpages = si->max;
 
-	/* Setup the swap map and apply bad block */
-	error = setup_swap_map(si, swap_header, maxpages);
-	if (error)
-		goto bad_swap_unlock_inode;
-
 	/* Set up the swap cluster info */
 	error = setup_swap_clusters_info(si, swap_header, maxpages);
 	if (error)
@@ -3574,8 +3660,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	inode = NULL;
 	destroy_swap_extents(si, swap_file);
 	swap_cgroup_swapoff(si->type);
-	vfree(si->swap_map);
-	si->swap_map = NULL;
 	free_swap_cluster_info(si->cluster_info, si->max);
 	si->cluster_info = NULL;
 	kvfree(si->zeromap);
@@ -3618,82 +3702,6 @@ void si_swapinfo(struct sysinfo *val)
 	spin_unlock(&swap_lock);
 }
 
-/*
- * Verify that nr swap entries are valid and increment their swap map counts.
- *
- * Returns error code in following case.
- * - success -> 0
- * - swp_entry is invalid -> EINVAL
- * - swap-mapped reference is requested but the entry is not used. -> ENOENT
- * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
- */
-static int swap_dup_entries(struct swap_info_struct *si,
-			    struct swap_cluster_info *ci,
-			    unsigned long offset,
-			    unsigned char usage, int nr)
-{
-	int i;
-	unsigned char count;
-
-	for (i = 0; i < nr; i++) {
-		count = si->swap_map[offset + i];
-		/*
-		 * For swapin out, allocator never allocates bad slots. for
-		 * swapin, readahead is guarded by swap_entry_swapped.
-		 */
-		if (WARN_ON(count == SWAP_MAP_BAD))
-			return -ENOENT;
-		/*
-		 * Swap count duplication must be guarded by either swap cache folio (from
-		 * folio_dup_swap) or external lock of existing entry (from swap_dup_entry_direct).
-		 */
-		if (WARN_ON(!count &&
-			    !swp_tb_is_folio(__swap_table_get(ci, offset % SWAPFILE_CLUSTER))))
-			return -ENOENT;
-		if (WARN_ON((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX))
-			return -EINVAL;
-	}
-
-	for (i = 0; i < nr; i++) {
-		count = si->swap_map[offset + i];
-		if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
-			count += usage;
-		else if (swap_count_continued(si, offset + i, count))
-			count = COUNT_CONTINUED;
-		else {
-			/*
-			 * Don't need to rollback changes, because if
-			 * usage == 1, there must be nr == 1.
-			 */
-			return -ENOMEM;
-		}
-
-		WRITE_ONCE(si->swap_map[offset + i], count);
-	}
-
-	return 0;
-}
-
-static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
-{
-	int err;
-	struct swap_info_struct *si;
-	struct swap_cluster_info *ci;
-	unsigned long offset = swp_offset(entry);
-
-	si = swap_entry_to_info(entry);
-	if (WARN_ON_ONCE(!si)) {
-		pr_err("%s%08lx\n", Bad_file, entry.val);
-		return -EINVAL;
-	}
-
-	VM_WARN_ON(nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
-	ci = swap_cluster_lock(si, offset);
-	err = swap_dup_entries(si, ci, offset, usage, nr);
-	swap_cluster_unlock(ci);
-	return err;
-}
-
 /*
  * swap_dup_entry_direct() - Increase reference count of a swap entry by one.
  * @entry: first swap entry from which we want to increase the refcount.
@@ -3707,233 +3715,16 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
  * owner. e.g., locking the PTL of a PTE containing the entry being increased.
  */
 int swap_dup_entry_direct(swp_entry_t entry)
-{
-	int err = 0;
-	while (!err && __swap_duplicate(entry, 1, 1) == -ENOMEM)
-		err = add_swap_count_continuation(entry, GFP_ATOMIC);
-	return err;
-}
-
-/*
- * add_swap_count_continuation - called when a swap count is duplicated
- * beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's
- * page of the original vmalloc'ed swap_map, to hold the continuation count
- * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
- * again when count is duplicated beyond SWAP_MAP_MAX * SWAP_CONT_MAX, etc.
- *
- * These continuation pages are seldom referenced: the common paths all work
- * on the original swap_map, only referring to a continuation page when the
- * low "digit" of a count is incremented or decremented through SWAP_MAP_MAX.
- *
- * add_swap_count_continuation(, GFP_ATOMIC) can be called while holding
- * page table locks; if it fails, add_swap_count_continuation(, GFP_KERNEL)
- * can be called after dropping locks.
- */
-int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
 {
 	struct swap_info_struct *si;
-	struct swap_cluster_info *ci;
-	struct page *head;
-	struct page *page;
-	struct page *list_page;
-	pgoff_t offset;
-	unsigned char count;
-	int ret = 0;
-
-	/*
-	 * When debugging, it's easier to use __GFP_ZERO here; but it's better
-	 * for latency not to zero a page while GFP_ATOMIC and holding locks.
-	 */
-	page = alloc_page(gfp_mask | __GFP_HIGHMEM);
-
-	si = get_swap_device(entry);
-	if (!si) {
-		/*
-		 * An acceptable race has occurred since the failing
-		 * __swap_duplicate(): the swap device may be swapoff
-		 */
-		goto outer;
-	}
 
-	offset = swp_offset(entry);
-
-	ci = swap_cluster_lock(si, offset);
-
-	count = si->swap_map[offset];
-
-	if ((count & ~COUNT_CONTINUED) != SWAP_MAP_MAX) {
-		/*
-		 * The higher the swap count, the more likely it is that tasks
-		 * will race to add swap count continuation: we need to avoid
-		 * over-provisioning.
-		 */
-		goto out;
-	}
-
-	if (!page) {
-		ret = -ENOMEM;
-		goto out;
-	}
-
-	head = vmalloc_to_page(si->swap_map + offset);
-	offset &= ~PAGE_MASK;
-
-	spin_lock(&si->cont_lock);
-	/*
-	 * Page allocation does not initialize the page's lru field,
-	 * but it does always reset its private field.
-	 */
-	if (!page_private(head)) {
-		BUG_ON(count & COUNT_CONTINUED);
-		INIT_LIST_HEAD(&head->lru);
-		set_page_private(head, SWP_CONTINUED);
-		si->flags |= SWP_CONTINUED;
-	}
-
-	list_for_each_entry(list_page, &head->lru, lru) {
-		unsigned char *map;
-
-		/*
-		 * If the previous map said no continuation, but we've found
-		 * a continuation page, free our allocation and use this one.
-		 */
-		if (!(count & COUNT_CONTINUED))
-			goto out_unlock_cont;
-
-		map = kmap_local_page(list_page) + offset;
-		count = *map;
-		kunmap_local(map);
-
-		/*
-		 * If this continuation count now has some space in it,
-		 * free our allocation and use this one.
-		 */
-		if ((count & ~COUNT_CONTINUED) != SWAP_CONT_MAX)
-			goto out_unlock_cont;
-	}
-
-	list_add_tail(&page->lru, &head->lru);
-	page = NULL;			/* now it's attached, don't free it */
-out_unlock_cont:
-	spin_unlock(&si->cont_lock);
-out:
-	swap_cluster_unlock(ci);
-	put_swap_device(si);
-outer:
-	if (page)
-		__free_page(page);
-	return ret;
-}
-
-/*
- * swap_count_continued - when the original swap_map count is incremented
- * from SWAP_MAP_MAX, check if there is already a continuation page to carry
- * into, carry if so, or else fail until a new continuation page is allocated;
- * when the original swap_map count is decremented from 0 with continuation,
- * borrow from the continuation and report whether it still holds more.
- * Called while __swap_duplicate() or caller of swap_put_entry_locked()
- * holds cluster lock.
- */
-static bool swap_count_continued(struct swap_info_struct *si,
-				 pgoff_t offset, unsigned char count)
-{
-	struct page *head;
-	struct page *page;
-	unsigned char *map;
-	bool ret;
-
-	head = vmalloc_to_page(si->swap_map + offset);
-	if (page_private(head) != SWP_CONTINUED) {
-		BUG_ON(count & COUNT_CONTINUED);
-		return false;		/* need to add count continuation */
-	}
-
-	spin_lock(&si->cont_lock);
-	offset &= ~PAGE_MASK;
-	page = list_next_entry(head, lru);
-	map = kmap_local_page(page) + offset;
-
-	if (count == SWAP_MAP_MAX)	/* initial increment from swap_map */
-		goto init_map;		/* jump over SWAP_CONT_MAX checks */
-
-	if (count == (SWAP_MAP_MAX | COUNT_CONTINUED)) { /* incrementing */
-		/*
-		 * Think of how you add 1 to 999
-		 */
-		while (*map == (SWAP_CONT_MAX | COUNT_CONTINUED)) {
-			kunmap_local(map);
-			page = list_next_entry(page, lru);
-			BUG_ON(page == head);
-			map = kmap_local_page(page) + offset;
-		}
-		if (*map == SWAP_CONT_MAX) {
-			kunmap_local(map);
-			page = list_next_entry(page, lru);
-			if (page == head) {
-				ret = false;	/* add count continuation */
-				goto out;
-			}
-			map = kmap_local_page(page) + offset;
-init_map:		*map = 0;	/* we didn't zero the page */
-		}
-		*map += 1;
-		kunmap_local(map);
-		while ((page = list_prev_entry(page, lru)) != head) {
-			map = kmap_local_page(page) + offset;
-			*map = COUNT_CONTINUED;
-			kunmap_local(map);
-		}
-		ret = true;			/* incremented */
-
-	} else {				/* decrementing */
-		/*
-		 * Think of how you subtract 1 from 1000
-		 */
-		BUG_ON(count != COUNT_CONTINUED);
-		while (*map == COUNT_CONTINUED) {
-			kunmap_local(map);
-			page = list_next_entry(page, lru);
-			BUG_ON(page == head);
-			map = kmap_local_page(page) + offset;
-		}
-		BUG_ON(*map == 0);
-		*map -= 1;
-		if (*map == 0)
-			count = 0;
-		kunmap_local(map);
-		while ((page = list_prev_entry(page, lru)) != head) {
-			map = kmap_local_page(page) + offset;
-			*map = SWAP_CONT_MAX | count;
-			count = COUNT_CONTINUED;
-			kunmap_local(map);
-		}
-		ret = count == COUNT_CONTINUED;
+	si = swap_entry_to_info(entry);
+	if (WARN_ON_ONCE(!si)) {
+		pr_err("%s%08lx\n", Bad_file, entry.val);
+		return -EINVAL;
 	}
-out:
-	spin_unlock(&si->cont_lock);
-	return ret;
-}
 
-/*
- * free_swap_count_continuations - swapoff free all the continuation pages
- * appended to the swap_map, after swap_map is quiesced, before vfree'ing it.
- */
-static void free_swap_count_continuations(struct swap_info_struct *si)
-{
-	pgoff_t offset;
-
-	for (offset = 0; offset < si->max; offset += PAGE_SIZE) {
-		struct page *head;
-		head = vmalloc_to_page(si->swap_map + offset);
-		if (page_private(head)) {
-			struct page *page, *next;
-
-			list_for_each_entry_safe(page, next, &head->lru, lru) {
-				list_del(&page->lru);
-				__free_page(page);
-			}
-		}
-	}
+	return swap_dup_entries_cluster(si, swp_offset(entry), 1);
 }
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-- 
2.52.0

From nobody Sun Feb 8 10:21:58 2026
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:33 +0800
Subject: [PATCH 10/12] mm, swap: no need to truncate the scan border
Message-Id: <20260126-swap-table-p3-v1-10-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, linux-kernel@vger.kernel.org, Chris Li, Kairui Song

swap_map had a flexible size, so the last cluster might not be fully
covered, and hence the allocator needed to check the scan border to
avoid going out of bounds. But each cluster now has a fixed-size swap
table, and the slots beyond the device size are marked as bad slots.
The allocator can simply scan all slots as usual, and any bad slots
will be skipped.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swapfile.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 2febe868986e..1cf18761c0fd 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -945,8 +945,8 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si,
 {
 	unsigned int next = SWAP_ENTRY_INVALID, found = SWAP_ENTRY_INVALID;
 	unsigned long start = ALIGN_DOWN(offset, SWAPFILE_CLUSTER);
-	unsigned long end = min(start + SWAPFILE_CLUSTER, si->max);
 	unsigned int order = likely(folio) ? folio_order(folio) : 0;
+	unsigned long end = start + SWAPFILE_CLUSTER;
 	unsigned int nr_pages = 1 << order;
 	bool need_reclaim, ret, usable;
 
-- 
2.52.0

From nobody Sun Feb 8 10:21:58 2026
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:34 +0800
Subject: [PATCH 11/12] mm, swap: simplify checking if a folio is swapped
Message-Id: <20260126-swap-table-p3-v1-11-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, linux-kernel@vger.kernel.org, Chris Li

Clean up and simplify how we check whether a folio is swapped. The
helper already requires the folio to be in the swap cache and locked.
That's enough to pin the swap cluster from being freed, so there is no need to lock anything else to avoid UAF. And besides, we have cleaned up and defined the swap operation to be mostly folio based, and now the only place a folio will have any of its swap slots' count increased from 0 to 1 is folio_dup_swap, which also requires the folio lock. So as we are holding the folio lock here, a folio can't change its swap status from not swapped (all swap slots have a count of 0) to swapped (any slot has a swap count larger than 0). So there won't be any false negatives of this helper if we simply depend on the folio lock to stabilize the cluster. We are only using this helper to determine if we can and should release the swap cache. So false positives are completely harmless, and also already exist before. Depending on the timing, previously, it's also possible that a racing thread releases the swap count right after releasing the ci lock and before this helper returns. In any case, the worst that could happen is we leave a clean swap cache. It will still be reclaimed when under pressure just fine. So, in conclusion, we can simplify and make the check much simpler and lockless. Also, rename it to folio_maybe_swapped to reflect the design. Signed-off-by: Kairui Song Suggested-by: Chris Li --- mm/swap.h | 5 ++-- mm/swapfile.c | 82 ++++++++++++++++++++++++++++++++-----------------------= ---- 2 files changed, 48 insertions(+), 39 deletions(-) diff --git a/mm/swap.h b/mm/swap.h index 751430e2d2a5..3395c2aa1956 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -195,12 +195,13 @@ extern int swap_retry_table_alloc(swp_entry_t entry, = gfp_t gfp); * * folio_alloc_swap(): the entry point for a folio to be swapped * out. It allocates swap slots and pins the slots with swap cache. - * The slots start with a swap count of zero. + * The slots start with a swap count of zero. The slots are pinned + * by swap cache reference which doesn't contribute to swap count. 
* * folio_dup_swap(): increases the swap count of a folio, usually * during it gets unmapped and a swap entry is installed to replace * it (e.g., swap entry in page table). A swap slot with swap - * count =3D=3D 0 should only be increasd by this helper. + * count =3D=3D 0 can only be increased by this helper. * * folio_put_swap(): does the opposite thing of folio_dup_swap(). */ diff --git a/mm/swapfile.c b/mm/swapfile.c index 1cf18761c0fd..f2b5013c7692 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1740,7 +1740,11 @@ int folio_alloc_swap(struct folio *folio) * @subpage: if not NULL, only increase the swap count of this subpage. * * Typically called when the folio is unmapped and have its swap entry to - * take its palce. + * take its place: Swap entries allocated to a folio has count =3D=3D 0 an= d pinned + * by swap cache. The swap cache pin doesn't increase the swap count. This + * helper sets the initial count =3D=3D 1 and increases the count as the f= olio is + * unmapped and swap entries referencing the slots are generated to replace + * the folio. * * Context: Caller must ensure the folio is locked and in the swap cache. * NOTE: The caller also has to ensure there is no raced call to @@ -1941,49 +1945,44 @@ int swp_swapcount(swp_entry_t entry) return count < 0 ? 0 : count; } =20 -static bool swap_page_trans_huge_swapped(struct swap_info_struct *si, - swp_entry_t entry, int order) +/* + * folio_maybe_swapped - Test if a folio covers any swap slot with count >= 0. + * + * Check if a folio is swapped. Holding the folio lock ensures the folio w= on't + * go from not-swapped to swapped because the initial swap count increment= can + * only be done by folio_dup_swap, which also locks the folio. But a concu= rrent + * decrease of swap count is possible through swap_put_entries_direct, so = this + * may return a false positive. + * + * Context: Caller must ensure the folio is locked and in the swap cache. 
+ */ +static bool folio_maybe_swapped(struct folio *folio) { + swp_entry_t entry =3D folio->swap; struct swap_cluster_info *ci; - unsigned int nr_pages =3D 1 << order; - unsigned long roffset =3D swp_offset(entry); - unsigned long offset =3D round_down(roffset, nr_pages); - unsigned int ci_off; - int i; + unsigned int ci_off, ci_end; bool ret =3D false; =20 - ci =3D swap_cluster_lock(si, offset); - if (nr_pages =3D=3D 1) { - ci_off =3D roffset % SWAPFILE_CLUSTER; - if (swp_tb_get_count(__swap_table_get(ci, ci_off))) - ret =3D true; - goto unlock_out; - } - for (i =3D 0; i < nr_pages; i++) { - ci_off =3D (offset + i) % SWAPFILE_CLUSTER; - if (swp_tb_get_count(__swap_table_get(ci, ci_off))) { - ret =3D true; - break; - } - } -unlock_out: - swap_cluster_unlock(ci); - return ret; -} - -static bool folio_swapped(struct folio *folio) -{ - swp_entry_t entry =3D folio->swap; - struct swap_info_struct *si; - VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio); VM_WARN_ON_ONCE_FOLIO(!folio_test_swapcache(folio), folio); =20 - si =3D __swap_entry_to_info(entry); - if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!folio_test_large(folio))) - return swap_entry_swapped(si, entry); + ci =3D __swap_entry_to_cluster(entry); + ci_off =3D swp_cluster_offset(entry); + ci_end =3D ci_off + folio_nr_pages(folio); + /* + * Extra locking not needed, folio lock ensures its swap entries + * won't be released, the backing data won't be gone either. 
+ */ + rcu_read_lock(); + do { + if (__swp_tb_get_count(__swap_table_get(ci, ci_off))) { + ret =3D true; + break; + } + } while (++ci_off < ci_end); + rcu_read_unlock(); =20 - return swap_page_trans_huge_swapped(si, entry, folio_order(folio)); + return ret; } =20 static bool folio_swapcache_freeable(struct folio *folio) @@ -2029,7 +2028,7 @@ bool folio_free_swap(struct folio *folio) { if (!folio_swapcache_freeable(folio)) return false; - if (folio_swapped(folio)) + if (folio_maybe_swapped(folio)) return false; =20 swap_cache_del_folio(folio); @@ -3713,6 +3712,8 @@ void si_swapinfo(struct sysinfo *val) * * Context: Caller must ensure there is no race condition on the reference * owner. e.g., locking the PTL of a PTE containing the entry being increa= sed. + * Also the swap entry must have a count >=3D 1. Otherwise folio_dup_swap = should + * be used. */ int swap_dup_entry_direct(swp_entry_t entry) { @@ -3724,6 +3725,13 @@ int swap_dup_entry_direct(swp_entry_t entry) return -EINVAL; } =20 + /* + * The caller must be increasing the swap count from a direct + * reference of the swap slot (e.g. a swap entry in page table). + * So the swap count must be >=3D 1. 
+ */ + VM_WARN_ON_ONCE(!swap_entry_swapped(si, entry)); + return swap_dup_entries_cluster(si, swp_offset(entry), 1); } =20 --=20 2.52.0 From nobody Sun Feb 8 10:21:58 2026 Received: from mail-pj1-f49.google.com (mail-pj1-f49.google.com [209.85.216.49]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E34F32F0685 for ; Sun, 25 Jan 2026 17:58:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.49 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1769363940; cv=none; b=FTmUeePw1zo9i7APU1y8vOEybtKw3izbgqe9fpJ2vwIHd4zaYsY6aD7/QTgM3zd+wwP5qSLVHKb0gsK1Zk3PIJ/fdcuW1gosrXppy+z37gva7zAPak1nG7VXElTM86YCjV30jYTsqQyzlRqf00LG5iYIsl7Nde0Sd0+xWY1jX4U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1769363940; c=relaxed/simple; bh=IJVFHWTeAE64r9NlkLITh4dLk1+AofYxHk5oUGHO5b4=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=iQHLzKE/rOdW5496TuvMnSTsRhKoAE9jlU4yutZvwPzvCM5qHXgw3xvb9lIgzhonT09js9nyWOpQdjYp4yDdtA4C9u9yLB2BT3hShFiNGTRpK3a52N/qTtPmzQ9h+vOyNsX/0bg1m2B79HpDHnDh1Vfhseytl9oT3dEJtnjcccU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=ed1iuAa+; arc=none smtp.client-ip=209.85.216.49 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="ed1iuAa+" Received: by mail-pj1-f49.google.com with SMTP id 98e67ed59e1d1-352ccc61658so1687962a91.0 for ; Sun, 25 Jan 2026 09:58:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:35 +0800
Subject: [PATCH 12/12] mm, swap: no need to clear the shadow explicitly
Message-Id: <20260126-swap-table-p3-v1-12-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song

From: Kairui Song

Since we no longer bypass the swap cache, every swap-in will clear the
swap shadow by inserting the folio into the swap table.
The only place we may seem to need to free the swap shadow explicitly is
when swap slots are freed directly without a folio
(swap_put_entries_direct). But with the swap table, that is not needed
either: freeing a slot sets the table entry to NULL, which erases the
shadow just fine.

So delete all explicit shadow clearing; it's no longer needed. Also,
rearrange the freeing code accordingly.

Signed-off-by: Kairui Song
Suggested-by: Chris Li
---
 mm/swap.h       |  1 -
 mm/swap_state.c | 21 ---------------------
 mm/swapfile.c   |  2 --
 3 files changed, 24 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 3395c2aa1956..087cef49cf69 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -290,7 +290,6 @@ void __swap_cache_del_folio(struct swap_cluster_info *ci, struct folio *folio,
 			   swp_entry_t entry, void *shadow);
 void __swap_cache_replace_folio(struct swap_cluster_info *ci,
 				struct folio *old, struct folio *new);
-void __swap_cache_clear_shadow(swp_entry_t entry, int nr_ents);

 void show_swap_cache_info(void);
 void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index c808f0948b10..20c4c2414db3 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -350,27 +350,6 @@ void __swap_cache_replace_folio(struct swap_cluster_info *ci,
 	}
 }

-/**
- * __swap_cache_clear_shadow - Clears a set of shadows in the swap cache.
- * @entry: The starting index entry.
- * @nr_ents: How many slots need to be cleared.
- *
- * Context: Caller must ensure the range is valid, all in one single cluster,
- * not occupied by any folio, and lock the cluster.
- */
-void __swap_cache_clear_shadow(swp_entry_t entry, int nr_ents)
-{
-	struct swap_cluster_info *ci = __swap_entry_to_cluster(entry);
-	unsigned int ci_off = swp_cluster_offset(entry), ci_end;
-	unsigned long old;
-
-	ci_end = ci_off + nr_ents;
-	do {
-		old = __swap_table_xchg(ci, ci_off, null_to_swp_tb());
-		WARN_ON_ONCE(swp_tb_is_folio(old) || swp_tb_get_count(old));
-	} while (++ci_off < ci_end);
-}
-
 /*
  * If we are the only user, then try to free up the swap cache.
  *
diff --git a/mm/swapfile.c b/mm/swapfile.c
index f2b5013c7692..82b152d7c4c5 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1287,7 +1287,6 @@ static void swap_range_alloc(struct swap_info_struct *si,
 static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 			    unsigned int nr_entries)
 {
-	unsigned long begin = offset;
 	unsigned long end = offset + nr_entries - 1;
 	void (*swap_slot_free_notify)(struct block_device *, unsigned long);
 	unsigned int i;
@@ -1312,7 +1311,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 		swap_slot_free_notify(si->bdev, offset);
 		offset++;
 	}
-	__swap_cache_clear_shadow(swp_entry(si->type, begin), nr_entries);

 	/*
 	 * Make sure that try_to_unuse() observes si->inuse_pages reaching 0
-- 
2.52.0