From: Kairui Song
To: linux-mm@kvack.org
Cc: "Huang, Ying", Chris Li, Minchan Kim, Barry Song, Ryan Roberts, Yu Zhao,
    SeongJae Park, David Hildenbrand, Yosry Ahmed, Johannes Weiner,
    Matthew Wilcox, Nhat Pham, Chengming Zhou, Andrew Morton,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [RFC PATCH 01/10] mm/filemap: split filemap storing logic into a standalone helper
Date: Wed, 27 Mar 2024 02:50:23 +0800
Message-ID: <20240326185032.72159-2-ryncsn@gmail.com>
In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com>
References: <20240326185032.72159-1-ryncsn@gmail.com>

The swap cache can reuse this part for multi-index support. There is no
performance change on the page cache side beyond noise.

Tested in an 8G memory cgroup with a 16G brd ramdisk:
  echo 3 > /proc/sys/vm/drop_caches
  fio -name=cached --numjobs=16 --filename=/mnt/test.img \
    --buffered=1 --ioengine=mmap --rw=randread --time_based \
    --ramp_time=30s --runtime=5m --group_reporting

Before:
  bw ( MiB/s): min=  493, max= 3947, per=100.00%, avg=2625.56, stdev=25.74, samples=8651
  iops       : min=126454, max=1010681, avg=672142.61, stdev=6590.48, samples=8651

After:
  bw ( MiB/s): min=  298, max= 3840, per=100.00%, avg=2614.34, stdev=23.77, samples=8689
  iops       : min=76464, max=983045, avg=669270.35, stdev=6084.31, samples=8689

Test result with THP (do a THP randread, then switch to 4K pages, in the
hope that it triggers a lot of splitting):

  echo 3 > /proc/sys/vm/drop_caches
  fio -name=cached --numjobs=16 --filename=/mnt/test.img \
    --buffered=1 --ioengine=mmap -thp=1 --readonly \
    --rw=randread --time_based --ramp_time=30s --runtime=10m \
    --group_reporting
  fio -name=cached --numjobs=16 --filename=/mnt/test.img \
    --buffered=1 --ioengine=mmap \
    --rw=randread --time_based --runtime=5s --group_reporting

Before:
  bw ( KiB/s): min= 4611, max=15370, per=100.00%, avg=8928.74, stdev=105.17, samples=19146
  iops       : min= 1151, max= 3842, avg=2231.27, stdev=26.29, samples=19146
  READ: bw=4635B/s (4635B/s), 4635B/s-4635B/s (4635B/s-4635B/s), io=64.0KiB (65.5kB), run=14137-14137msec

After:
  bw ( KiB/s): min= 4691, max=15666, per=100.00%, avg=8890.30, stdev=104.53, samples=19056
  iops       : min= 1167, max= 3913, avg=2218.68, stdev=26.15, samples=19056
  READ: bw=4590B/s (4590B/s), 4590B/s-4590B/s (4590B/s-4590B/s), io=64.0KiB (65.5kB), run=14275-14275msec

Signed-off-by: Kairui Song
---
 mm/filemap.c | 124 +++++++++++++++++++++++++++------------------------
 1 file changed, 65 insertions(+), 59 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 90b86f22a9df..0ccdc9e92764 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -848,38 +848,23 @@ void replace_page_cache_folio(struct folio *old, struct folio *new)
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_folio);
 
-noinline int __filemap_add_folio(struct address_space *mapping,
-		struct folio *folio, pgoff_t index, gfp_t gfp, void **shadowp)
+static int __filemap_lock_store(struct xa_state *xas, struct folio *folio,
+		pgoff_t index, gfp_t gfp, void **shadowp)
 {
-	XA_STATE(xas, &mapping->i_pages, index);
-	void *alloced_shadow = NULL;
-	int alloced_order = 0;
-	bool huge;
-	long nr;
-
-	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
-	VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
-	mapping_set_update(&xas, mapping);
-
-	VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
-	xas_set_order(&xas, index, folio_order(folio));
-	huge = folio_test_hugetlb(folio);
-	nr = folio_nr_pages(folio);
-
+	void *entry, *old, *alloced_shadow = NULL;
+	int order, split_order, alloced_order = 0;
 	gfp &= GFP_RECLAIM_MASK;
-	folio_ref_add(folio, nr);
-	folio->mapping = mapping;
-	folio->index = xas.xa_index;
 
 	for (;;) {
-		int order = -1, split_order = 0;
-		void *entry, *old = NULL;
+		order = -1;
+		split_order = 0;
+		old = NULL;
 
-		xas_lock_irq(&xas);
-		xas_for_each_conflict(&xas, entry) {
+		xas_lock_irq(xas);
+		xas_for_each_conflict(xas, entry) {
 			old = entry;
 			if (!xa_is_value(entry)) {
-				xas_set_err(&xas, -EEXIST);
+				xas_set_err(xas, -EEXIST);
 				goto unlock;
 			}
 			/*
@@ -887,72 +872,93 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 			 * it will be the first and only entry iterated.
 			 */
 			if (order == -1)
-				order = xas_get_order(&xas);
+				order = xas_get_order(xas);
 		}
 
 		/* entry may have changed before we re-acquire the lock */
 		if (alloced_order && (old != alloced_shadow || order != alloced_order)) {
-			xas_destroy(&xas);
+			xas_destroy(xas);
 			alloced_order = 0;
 		}
 
 		if (old) {
 			if (order > 0 && order > folio_order(folio)) {
-				/* How to handle large swap entries? */
-				BUG_ON(shmem_mapping(mapping));
 				if (!alloced_order) {
 					split_order = order;
 					goto unlock;
 				}
-				xas_split(&xas, old, order);
-				xas_reset(&xas);
+				xas_split(xas, old, order);
+				xas_reset(xas);
 			}
 			if (shadowp)
 				*shadowp = old;
 		}
 
-		xas_store(&xas, folio);
-		if (xas_error(&xas))
-			goto unlock;
-
-		mapping->nrpages += nr;
-
-		/* hugetlb pages do not participate in page cache accounting */
-		if (!huge) {
-			__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
-			if (folio_test_pmd_mappable(folio))
-				__lruvec_stat_mod_folio(folio,
-						NR_FILE_THPS, nr);
-		}
-
+		xas_store(xas, folio);
+		if (!xas_error(xas))
+			return 0;
 unlock:
-		xas_unlock_irq(&xas);
+		xas_unlock_irq(xas);
 
 		/* split needed, alloc here and retry. */
 		if (split_order) {
-			xas_split_alloc(&xas, old, split_order, gfp);
-			if (xas_error(&xas))
+			xas_split_alloc(xas, old, split_order, gfp);
+			if (xas_error(xas))
 				goto error;
 			alloced_shadow = old;
 			alloced_order = split_order;
-			xas_reset(&xas);
+			xas_reset(xas);
 			continue;
 		}
 
-		if (!xas_nomem(&xas, gfp))
+		if (!xas_nomem(xas, gfp))
 			break;
 	}
 
-	if (xas_error(&xas))
-		goto error;
-
-	trace_mm_filemap_add_to_page_cache(folio);
-	return 0;
 error:
-	folio->mapping = NULL;
-	/* Leave page->index set: truncation relies upon it */
-	folio_put_refs(folio, nr);
-	return xas_error(&xas);
+	return xas_error(xas);
+}
+
+noinline int __filemap_add_folio(struct address_space *mapping,
+		struct folio *folio, pgoff_t index, gfp_t gfp, void **shadowp)
+{
+	XA_STATE(xas, &mapping->i_pages, index);
+	bool huge;
+	long nr;
+	int ret;
+
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
+	mapping_set_update(&xas, mapping);
+
+	VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
+	xas_set_order(&xas, index, folio_order(folio));
+	huge = folio_test_hugetlb(folio);
+	nr = folio_nr_pages(folio);
+
+	folio_ref_add(folio, nr);
+	folio->mapping = mapping;
+	folio->index = xas.xa_index;
+
+	ret = __filemap_lock_store(&xas, folio, index, gfp, shadowp);
+	if (!ret) {
+		mapping->nrpages += nr;
+		/* hugetlb pages do not participate in page cache accounting */
+		if (!huge) {
+			__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+			if (folio_test_pmd_mappable(folio))
+				__lruvec_stat_mod_folio(folio,
+						NR_FILE_THPS, nr);
+		}
+		xas_unlock_irq(&xas);
+		trace_mm_filemap_add_to_page_cache(folio);
+	} else {
+		folio->mapping = NULL;
+		/* Leave page->index set: truncation relies upon it */
+		folio_put_refs(folio, nr);
+	}
+
+	return ret;
 }
 ALLOW_ERROR_INJECTION(__filemap_add_folio, ERRNO);
 
-- 
2.43.0
From: Kairui Song
Subject: [RFC PATCH 02/10] mm/swap: move no readahead swapin code to a stand-alone helper
Date: Wed, 27 Mar 2024 02:50:24 +0800
Message-ID: <20240326185032.72159-3-ryncsn@gmail.com>
In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com>
References: <20240326185032.72159-1-ryncsn@gmail.com>

Move the routine to a standalone function for a cleaner split, and to
avoid helpers being referenced across multiple files.

There is basically no functional change, but the error path differs
slightly: previously, a mem_cgroup_swapin_charge_folio() failure caused
a direct OOM. Now we go through the error-checking path in
do_swap_page(), and if the page is already there, we just return, since
the page fault has been handled.
Signed-off-by: Kairui Song
---
 mm/memory.c     | 42 ++++-------------------------------
 mm/swap.h       |  8 +++++++
 mm/swap_state.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+), 39 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f2bc6dd15eb8..e42fadc25268 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3937,7 +3937,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	swp_entry_t entry;
 	pte_t pte;
 	vm_fault_t ret = 0;
-	void *shadow = NULL;
 
 	if (!pte_unmap_same(vmf))
 		goto out;
@@ -4001,47 +4000,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
-			/*
-			 * Prevent parallel swapin from proceeding with
-			 * the cache flag. Otherwise, another thread may
-			 * finish swapin first, free the entry, and swapout
-			 * reusing the same entry. It's undetectable as
-			 * pte_same() returns true due to entry reuse.
-			 */
-			if (swapcache_prepare(entry)) {
-				/* Relax a bit to prevent rapid repeated page faults */
-				schedule_timeout_uninterruptible(1);
+			/* skip swapcache and readahead */
+			folio = swapin_direct(entry, GFP_HIGHUSER_MOVABLE, vmf);
+			if (PTR_ERR(folio) == -EBUSY)
 				goto out;
-			}
 			need_clear_cache = true;
-
-			/* skip swapcache */
-			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
-						vma, vmf->address, false);
 			page = &folio->page;
-			if (folio) {
-				__folio_set_locked(folio);
-				__folio_set_swapbacked(folio);
-
-				if (mem_cgroup_swapin_charge_folio(folio,
-							vma->vm_mm, GFP_KERNEL,
-							entry)) {
-					ret = VM_FAULT_OOM;
-					goto out_page;
-				}
-				mem_cgroup_swapin_uncharge_swap(entry);
-
-				shadow = get_shadow_from_swap_cache(entry);
-				if (shadow)
-					workingset_refault(folio, shadow);
-
-				folio_add_lru(folio);
-
-				/* To provide entry to swap_read_folio() */
-				folio->swap = entry;
-				swap_read_folio(folio, true, NULL);
-				folio->private = NULL;
-			}
 		} else {
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						vmf);
diff --git a/mm/swap.h b/mm/swap.h
index fc2f6ade7f80..40e902812cc5 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -55,6 +55,8 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
 		bool skip_if_exists);
 struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 		struct mempolicy *mpol, pgoff_t ilx);
+struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
+			    struct vm_fault *vmf);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
 			      struct vm_fault *vmf);
 
@@ -87,6 +89,12 @@ static inline struct folio *swap_cluster_readahead(swp_entry_t entry,
 	return NULL;
 }
 
+static inline struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
+			struct vm_fault *vmf)
+{
+	return NULL;
+}
+
 static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 			struct vm_fault *vmf)
 {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index bfc7e8c58a6d..0a3fa48b3893 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -879,6 +879,66 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	return folio;
 }
 
+/**
+ * swapin_direct - swap in folios skipping swap cache and readahead
+ * @entry: swap entry of this memory
+ * @gfp_mask: memory allocation flags
+ * @vmf: fault information
+ *
+ * Returns the struct folio for entry and addr after the swap entry is read
+ * in.
+ */
+struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
+			    struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
+	void *shadow = NULL;
+
+	/*
+	 * Prevent parallel swapin from proceeding with
+	 * the cache flag. Otherwise, another thread may
+	 * finish swapin first, free the entry, and swapout
+	 * reusing the same entry. It's undetectable as
+	 * pte_same() returns true due to entry reuse.
+	 */
+	if (swapcache_prepare(entry)) {
+		/* Relax a bit to prevent rapid repeated page faults */
+		schedule_timeout_uninterruptible(1);
+		return ERR_PTR(-EBUSY);
+	}
+
+	/* skip swapcache */
+	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
+				vma, vmf->address, false);
+	if (folio) {
+		__folio_set_locked(folio);
+		__folio_set_swapbacked(folio);
+
+		if (mem_cgroup_swapin_charge_folio(folio,
+					vma->vm_mm, GFP_KERNEL,
+					entry)) {
+			folio_unlock(folio);
+			folio_put(folio);
+			return NULL;
+		}
+		mem_cgroup_swapin_uncharge_swap(entry);
+
+		shadow = get_shadow_from_swap_cache(entry);
+		if (shadow)
+			workingset_refault(folio, shadow);
+
+		folio_add_lru(folio);
+
+		/* To provide entry to swap_read_folio() */
+		folio->swap = entry;
+		swap_read_folio(folio, true, NULL);
+		folio->private = NULL;
+	}
+
+	return folio;
+}
+
 /**
  * swapin_readahead - swap in pages in hope we need them soon
  * @entry: swap entry of this memory
-- 
2.43.0
From: Kairui Song
Subject: [RFC PATCH 03/10] mm/swap: convert swapin_readahead to return a folio
Date: Wed, 27 Mar 2024 02:50:25 +0800
Message-ID: <20240326185032.72159-4-ryncsn@gmail.com>
In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com>
References: <20240326185032.72159-1-ryncsn@gmail.com>

Simplify the caller code logic.
Signed-off-by: Kairui Song
---
 mm/memory.c     | 8 +++-----
 mm/swap.h       | 4 ++--
 mm/swap_state.c | 6 ++----
 mm/swapfile.c   | 5 +----
 4 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e42fadc25268..dfdb620a9123 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4005,12 +4005,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			if (PTR_ERR(folio) == -EBUSY)
 				goto out;
 			need_clear_cache = true;
-			page = &folio->page;
 		} else {
-			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-						vmf);
-			if (page)
-				folio = page_folio(page);
+			folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
 			swapcache = folio;
 		}
 
@@ -4027,6 +4023,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto unlock;
 	}
 
+	page = folio_file_page(folio, swp_offset(entry));
+
 	/* Had to read the page from swap area: Major fault */
 	ret = VM_FAULT_MAJOR;
 	count_vm_event(PGMAJFAULT);
diff --git a/mm/swap.h b/mm/swap.h
index 40e902812cc5..aee134907a70 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -57,7 +57,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 		struct mempolicy *mpol, pgoff_t ilx);
 struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
 			    struct vm_fault *vmf);
-struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
+struct folio *swapin_readahead(swp_entry_t entry, gfp_t flag,
 			      struct vm_fault *vmf);
 
@@ -95,7 +95,7 @@ static inline struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
 	return NULL;
 }
 
-static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
+static inline struct folio *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 			struct vm_fault *vmf)
 {
 	return NULL;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0a3fa48b3893..2a9c6bdff5ea 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -951,7 +951,7 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
  * it will read ahead blocks by cluster-based(ie, physical disk based)
  * or vma-based(ie, virtual address based on faulty address) readahead.
  */
-struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
+struct folio *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 			      struct vm_fault *vmf)
 {
 	struct mempolicy *mpol;
@@ -964,9 +964,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
 	mpol_cond_put(mpol);
 
-	if (!folio)
-		return NULL;
-	return folio_file_page(folio, swp_offset(entry));
+	return folio;
 }
 
 #ifdef CONFIG_SYSFS
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 4919423cce76..4dd894395a0f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1883,7 +1883,6 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 	folio = swap_cache_get_folio(entry, vma, addr);
 	if (!folio) {
-		struct page *page;
 		struct vm_fault vmf = {
 			.vma = vma,
 			.address = addr,
@@ -1891,10 +1890,8 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			.pmd = pmd,
 		};
 
-		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
+		folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 					&vmf);
-		if (page)
-			folio = page_folio(page);
 	}
 	if (!folio) {
 		swp_count = READ_ONCE(si->swap_map[offset]);
-- 
2.43.0
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1711479876; c=relaxed/simple; bh=eyZTUAHh8OYDAQTYqNh8h4hZVIRE+gV03Q04b8TImjs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=oldYAw3H4Gj8qM63ajxY08VfG+0LTzykOoetUdMtS3v2KItxPFgzWBBROM3HAF86BCuC18rzfQymvr2wtNVeA0GBaGQqHxWp17ZEVYObGQ2T0Kv5KEk6qGwx+k1Y6sCEwnnRIRROqzdeYv4uS0K1/DoBd1yX/deXK3BuC/yVcsA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=fzVWRiSP; arc=none smtp.client-ip=209.85.210.169 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="fzVWRiSP" Received: by mail-pf1-f169.google.com with SMTP id d2e1a72fcca58-6e6b5432439so4576833b3a.1 for ; Tue, 26 Mar 2024 12:04:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1711479874; x=1712084674; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=AlnFQa5IuF1FU+kiw8JlfLUfSH7ltUVyYlgQl237ZM0=; b=fzVWRiSPHOleXdKyZUp8F/rVo5I12zVscicmOUR5AONpRolCHIV6+dZhe1IqUon9i5 N2oe/gn5XtzZbpDXHOud4ni91nIN70lY3dxoocWFmYReSr8fXdHP+jNZtWDAwTkAEzQd KHZHLZE1GSRLXAaVMos2YkGvSmUm49wUeBoq9wdcHbb52on40IiLVrs/X6k4Bd8S8vHk r/4fOWYSw/plBz5s/vObAi4XBaa9UN23OQWCq1N2RCJ5YmnuCjEbbpJ18LuaV5DGZ9F7 jKjo6fDYdQ4Pmd6k4ZrFKbAMykccF0wNrZbnxED5TnOWF11MdRxNn4t2JjBR+E8KuCaI o1AQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1711479874; x=1712084674; h=content-transfer-encoding:mime-version:reply-to:references 
:in-reply-to:message-id:date:subject:cc:to:from:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=AlnFQa5IuF1FU+kiw8JlfLUfSH7ltUVyYlgQl237ZM0=; b=m+fqH1+QXrzNqESXCcV9daeeQZm6IuVI5i06dfan3JE32BREBU7SpWm/d7f9utaFw4 fVWWrLJcbgohGkpZCFRDa9sfQJ9lFeqaDGmAwwVlaLEmv+Edj14xLP4VcHVY1XjqTJsA VlEAFp/bBsVy6ThrdsQPHllOC4u3bWDt0QD1jt2GapXyZbRBvG+WEcsMXenH+bKHwPEs IJVELHdLPzxnYt/h4usrJAInaPNw4wGITnEjnXaGRtkINeuO3rZnIxJaFHs2m/JI9GEJ jvoZd6Sd6AU8Qi7J3EhcrfRKPFsLsO3mCqJg1ujAlGFUNLto+owYvoNsY8y5n4AaquwQ 1S0w== X-Forwarded-Encrypted: i=1; AJvYcCV9iTRU0qO+r89AD1bHHEGIN7Uj7oRbj7nAvSHLwd3MW7afeI1kT79AKRwwMJPVJgNvUtgycoPKH3f/VG77G57DfiiahQEci7/7L7DE X-Gm-Message-State: AOJu0Yz9uw0YuFxWEENEBL58O0MqionGKZwkCSnz2IQvd7lJAbzG+5DQ Oqu19CEK9ZYSBcog8WkrsNhK+79xPcQ1R6oe2FAMU5kj0q+KIMrmxG5VPQHt93uoVzZP X-Google-Smtp-Source: AGHT+IG6+WnJXSjAIBqkKDWHn5ts/KmlVMWEdeN8j0WCY8uGY9blEEX6n3YjmXe+budCoU47FIqAOg== X-Received: by 2002:a05:6a00:3d49:b0:6e8:f8a9:490e with SMTP id lp9-20020a056a003d4900b006e8f8a9490emr2484291pfb.5.1711479874359; Tue, 26 Mar 2024 12:04:34 -0700 (PDT) Received: from KASONG-MB2.tencent.com ([115.171.40.106]) by smtp.gmail.com with ESMTPSA id j14-20020aa783ce000000b006ea790c2232sm6298350pfn.79.2024.03.26.12.04.30 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Tue, 26 Mar 2024 12:04:33 -0700 (PDT) From: Kairui Song To: linux-mm@kvack.org Cc: "Huang, Ying" , Chris Li , Minchan Kim , Barry Song , Ryan Roberts , Yu Zhao , SeongJae Park , David Hildenbrand , Yosry Ahmed , Johannes Weiner , Matthew Wilcox , Nhat Pham , Chengming Zhou , Andrew Morton , linux-kernel@vger.kernel.org, Kairui Song Subject: [RFC PATCH 04/10] mm/swap: remove cache bypass swapin Date: Wed, 27 Mar 2024 02:50:26 +0800 Message-ID: <20240326185032.72159-5-ryncsn@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com> References: <20240326185032.72159-1-ryncsn@gmail.com> Reply-To: Kairui Song Precedence: bulk X-Mailing-List: 
linux-kernel@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Kairui Song

We used to have the cache bypass swapin path for better performance, but
removing it allows more optimizations to be applied, giving an even
better overall performance with less hackish code. These optimizations
are hard or impossible to do while the bypass path exists.

This patch simply removes the path. Performance drops heavily for simple
swapin; real workloads won't regress this badly, but the slowdown is
still observable. Following commits will fix this and achieve better
performance.

Swapout/in 30G zero pages from ZRAM (this mostly measures the overhead
of the swap path itself, because zero pages are not compressed but
simply recorded in ZRAM, and performance drops more as the SWAP device
gets full):

Test result of sequential swapin/out:

               Before (us)  After (us)
Swapout:         33619409    33624641
Swapin:          32393771    41614858 (-28.4%)
Swapout (THP):    7817909     7795530
Swapin (THP):    32452387    41708471 (-28.4%)

Signed-off-by: Kairui Song
---
 mm/memory.c     | 18 ++++-------------
 mm/swap.h       | 10 +++++-----
 mm/swap_state.c | 53 ++++++++++---------------------------------------
 mm/swapfile.c   | 13 ------------
 4 files changed, 19 insertions(+), 75 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index dfdb620a9123..357d239ee2f6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3932,7 +3932,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
-	bool need_clear_cache = false;
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
@@ -4000,14 +3999,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
-			/* skip swapcache and readahead */
 			folio = swapin_direct(entry, GFP_HIGHUSER_MOVABLE, vmf);
-			if (PTR_ERR(folio) == -EBUSY)
-				goto out;
-			need_clear_cache = true;
 		} else {
 			folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
-			swapcache = folio;
 		}

 		if (!folio) {
@@ -4023,6 +4017,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			goto unlock;
 		}

+	swapcache = folio;
 	page = folio_file_page(folio, swp_offset(entry));

 	/* Had to read the page from swap area: Major fault */
@@ -4187,7 +4182,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	vmf->orig_pte = pte;

 	/* ksm created a completely new copy */
-	if (unlikely(folio != swapcache && swapcache)) {
+	if (unlikely(folio != swapcache)) {
 		folio_add_new_anon_rmap(folio, vma, vmf->address);
 		folio_add_lru_vma(folio, vma);
 	} else {
@@ -4201,7 +4196,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);

 	folio_unlock(folio);
-	if (folio != swapcache && swapcache) {
+	if (folio != swapcache) {
 		/*
 		 * Hold the lock to avoid the swap entry to be reused
 		 * until we take the PT lock for the pte_same() check
@@ -4227,9 +4222,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
out:
-	/* Clear the swap cache pin for direct swapin after PTL unlock */
-	if (need_clear_cache)
-		swapcache_clear(si, entry);
 	if (si)
 		put_swap_device(si);
 	return ret;
@@ -4240,12 +4232,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	folio_unlock(folio);
out_release:
 	folio_put(folio);
-	if (folio != swapcache && swapcache) {
+	if (folio != swapcache) {
 		folio_unlock(swapcache);
 		folio_put(swapcache);
 	}
-	if (need_clear_cache)
-		swapcache_clear(si, entry);
 	if (si)
 		put_swap_device(si);
 	return ret;
diff --git a/mm/swap.h b/mm/swap.h
index aee134907a70..ac9573b03432 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -41,7 +41,6 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
-void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
@@ -100,14 +99,15 @@ static inline struct folio *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 {
 	return NULL;
 }
-
-static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
+static inline struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
+		struct vm_fault *vmf)
 {
-	return 0;
+	return NULL;
 }

-static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 {
+	return 0;
 }

 static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 2a9c6bdff5ea..49ef6250f676 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -880,61 +880,28 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 }

 /**
- * swapin_direct - swap in folios skipping swap cache and readahead
+ * swapin_direct - swap in folios skipping readahead
  * @entry: swap entry of this memory
  * @gfp_mask: memory allocation flags
  * @vmf: fault information
  *
- * Returns the struct folio for entry and addr after the swap entry is read
- * in.
+ * Returns the folio for entry after it is read in.
  */
 struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
 			    struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mempolicy *mpol;
 	struct folio *folio;
-	void *shadow = NULL;
-
-	/*
-	 * Prevent parallel swapin from proceeding with
-	 * the cache flag. Otherwise, another thread may
-	 * finish swapin first, free the entry, and swapout
-	 * reusing the same entry. It's undetectable as
-	 * pte_same() returns true due to entry reuse.
-	 */
-	if (swapcache_prepare(entry)) {
-		/* Relax a bit to prevent rapid repeated page faults */
-		schedule_timeout_uninterruptible(1);
-		return ERR_PTR(-EBUSY);
-	}
-
-	/* skip swapcache */
-	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
-				vma, vmf->address, false);
-	if (folio) {
-		__folio_set_locked(folio);
-		__folio_set_swapbacked(folio);
-
-		if (mem_cgroup_swapin_charge_folio(folio,
-					vma->vm_mm, GFP_KERNEL,
-					entry)) {
-			folio_unlock(folio);
-			folio_put(folio);
-			return NULL;
-		}
-		mem_cgroup_swapin_uncharge_swap(entry);
-
-		shadow = get_shadow_from_swap_cache(entry);
-		if (shadow)
-			workingset_refault(folio, shadow);
+	bool page_allocated;
+	pgoff_t ilx;

-		folio_add_lru(folio);
+	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
+	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+					&page_allocated, false);
+	mpol_cond_put(mpol);

-		/* To provide entry to swap_read_folio() */
-		folio->swap = entry;
+	if (page_allocated)
 		swap_read_folio(folio, true, NULL);
-		folio->private = NULL;
-	}

 	return folio;
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 4dd894395a0f..ae8d3aa05df7 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3389,19 +3389,6 @@ int swapcache_prepare(swp_entry_t entry)
 	return __swap_duplicate(entry, SWAP_HAS_CACHE);
 }

-void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
-{
-	struct swap_cluster_info *ci;
-	unsigned long offset = swp_offset(entry);
-	unsigned char usage;
-
-	ci = lock_cluster_or_swap_info(si, offset);
-	usage = __swap_entry_free_locked(si, offset, SWAP_HAS_CACHE);
-	unlock_cluster_or_swap_info(si, ci);
-	if (!usage)
-		free_swap_slot(entry);
-}
-
 struct swap_info_struct *swp_swap_info(swp_entry_t entry)
 {
 	return swap_type_to_swap_info(swp_type(entry));
-- 
2.43.0

From nobody Mon Feb 9 07:19:27 2026
From: Kairui Song
To: linux-mm@kvack.org
Cc: "Huang, Ying", Chris Li, Minchan Kim, Barry Song, Ryan Roberts,
    Yu Zhao, SeongJae Park, David Hildenbrand, Yosry Ahmed,
    Johannes Weiner, Matthew Wilcox, Nhat Pham, Chengming Zhou,
    Andrew Morton, linux-kernel@vger.kernel.org, Kairui Song
Subject: [RFC PATCH 05/10] mm/swap: clean shadow only in unmap path
Date: Wed, 27 Mar 2024 02:50:27 +0800
Message-ID: <20240326185032.72159-6-ryncsn@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com>
References: <20240326185032.72159-1-ryncsn@gmail.com>
Reply-To: Kairui Song
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Kairui Song

After removing the cache bypass swapin, the first thing that can go is
most of the clear_shadow_from_swap_cache calls.

clear_shadow_from_swap_cache is currently called in many paths. It is
invoked by swap_range_free, which has two direct callers:

- swap_free_cluster, only called by put_swap_folio, to free the shadows
  of a slot cluster.
- swap_entry_free, only called by swapcache_free_entries, to free the
  shadow of a slot.

And these two are used very commonly throughout the swap code. Note that
the shadow is only written by __delete_from_swap_cache after a
successful swapout, so clearly we only want to clear a shadow after
swapin (the shadow has been consumed and is no longer needed) or on
unmap/MADV_FREE.

Now that all swapin uses the cached swapin path,
clear_shadow_from_swap_cache is not needed for swapin anymore: the folio
has to be inserted into the swap cache first, and that insertion already
removes the shadow. So we only need to clear the shadow for
unmap/MADV_FREE.
All direct/indirect callers of swap_free_cluster and swap_entry_free
are listed below:

- swap_free_cluster:
  -> put_swap_folio (clears the cache flag and tries to delete the
     shadow, after removing the cache or on error handling)
     -> delete_from_swap_cache
     -> __remove_mapping
     -> shmem_writepage
     -> folio_alloc_swap
     -> add_to_swap
     -> __read_swap_cache_async

- swap_entry_free
  -> swapcache_free_entries
     -> drain_slots_cache_cpu
     -> free_swap_slot
        -> put_swap_folio (already covered above)
        -> __swap_entry_free / swap_free
           -> free_swap_and_cache (called by unmap/zap/MADV_FREE)
              -> madvise_free_single_vma
              -> unmap_page_range
              -> shmem_undo_range
           -> swap_free (called by the swapin path)
              -> do_swap_page (swapin path)
              -> alloc_swapdev_block/free_all_swap_pages ()
              -> try_to_unmap_one (error handling, no shadow)
              -> shmem_set_folio_swapin_error (shadow just gone)
              -> shmem_swapin_folio (shmem's do_swap_page)
              -> unuse_pte (swapoff, which always uses the swap cache)

So now we only need to call clear_shadow_from_swap_cache in
free_swap_and_cache, because all swapin/out now goes through the swap
cache. Previously, any of the functions above could invoke
clear_shadow_from_swap_cache in case a cache bypass swapin left an
entry with an uncleared shadow.

Also make clear_shadow_from_swap_cache clear only one entry, for
simplicity.
Test result of sequential swapin/out:

               Before (us)  After (us)
Swapout:         33624641    33648529
Swapin:          41614858    40667696 (+2.3%)
Swapout (THP):    7795530     7658664
Swapin (THP):    41708471    40602278 (+2.7%)

Signed-off-by: Kairui Song
---
 mm/swap.h       |  6 ++----
 mm/swap_state.c | 33 ++++++++-------------------------
 mm/swapfile.c   |  6 ++++--
 3 files changed, 14 insertions(+), 31 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index ac9573b03432..7721ddb3bdbc 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -39,8 +39,7 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 void __delete_from_swap_cache(struct folio *folio,
 			      swp_entry_t entry, void *shadow);
 void delete_from_swap_cache(struct folio *folio);
-void clear_shadow_from_swap_cache(int type, unsigned long begin,
-				  unsigned long end);
+void clear_shadow_from_swap_cache(swp_entry_t entry);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
@@ -148,8 +147,7 @@ static inline void delete_from_swap_cache(struct folio *folio)
 {
 }

-static inline void clear_shadow_from_swap_cache(int type, unsigned long begin,
-						unsigned long end)
+static inline void clear_shadow_from_swap_cache(swp_entry_t entry)
 {
 }

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 49ef6250f676..b84e7b0ea4a5 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -245,34 +245,17 @@ void delete_from_swap_cache(struct folio *folio)
 	folio_ref_sub(folio, folio_nr_pages(folio));
 }

-void clear_shadow_from_swap_cache(int type, unsigned long begin,
-				  unsigned long end)
+void clear_shadow_from_swap_cache(swp_entry_t entry)
 {
-	unsigned long curr = begin;
-	void *old;
-
-	for (;;) {
-		swp_entry_t entry = swp_entry(type, curr);
-		struct address_space *address_space = swap_address_space(entry);
-		XA_STATE(xas, &address_space->i_pages, curr);
-
-		xas_set_update(&xas, workingset_update_node);
+	struct address_space *address_space = swap_address_space(entry);
+	XA_STATE(xas, &address_space->i_pages, swp_offset(entry));

-		xa_lock_irq(&address_space->i_pages);
-		xas_for_each(&xas, old, end) {
-			if (!xa_is_value(old))
-				continue;
-			xas_store(&xas, NULL);
-		}
-		xa_unlock_irq(&address_space->i_pages);
+	xas_set_update(&xas, workingset_update_node);

-		/* search the next swapcache until we meet end */
-		curr >>= SWAP_ADDRESS_SPACE_SHIFT;
-		curr++;
-		curr <<= SWAP_ADDRESS_SPACE_SHIFT;
-		if (curr > end)
-			break;
-	}
+	xa_lock_irq(&address_space->i_pages);
+	if (xa_is_value(xas_load(&xas)))
+		xas_store(&xas, NULL);
+	xa_unlock_irq(&address_space->i_pages);
 }

 /*
diff --git a/mm/swapfile.c b/mm/swapfile.c
index ae8d3aa05df7..bafae23c0f26 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -724,7 +724,6 @@ static void add_to_avail_list(struct swap_info_struct *p)
 static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 			    unsigned int nr_entries)
 {
-	unsigned long begin = offset;
 	unsigned long end = offset + nr_entries - 1;
 	void (*swap_slot_free_notify)(struct block_device *, unsigned long);

@@ -748,7 +747,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 			swap_slot_free_notify(si->bdev, offset);
 		offset++;
 	}
-	clear_shadow_from_swap_cache(si->type, begin, end);

 	/*
	 * Make sure that try_to_unuse() observes si->inuse_pages reaching 0
@@ -1605,6 +1603,8 @@ bool folio_free_swap(struct folio *folio)
 /*
  * Free the swap entry like above, but also try to
  * free the page cache entry if it is the last user.
+ * Useful when clearing the swap map and swap cache
+ * without reading swap content (eg. unmap, MADV_FREE)
  */
 int free_swap_and_cache(swp_entry_t entry)
 {
@@ -1626,6 +1626,8 @@ int free_swap_and_cache(swp_entry_t entry)
 		    !swap_page_trans_huge_swapped(p, entry))
 			__try_to_reclaim_swap(p, swp_offset(entry),
 					      TTRS_UNMAPPED | TTRS_FULL);
+		if (!count)
+			clear_shadow_from_swap_cache(entry);
 		put_swap_device(p);
 	}
 	return p != NULL;
-- 
2.43.0

From nobody Mon Feb 9 07:19:27 2026
From: Kairui Song
To: linux-mm@kvack.org
Cc: "Huang, Ying", Chris Li, Minchan Kim, Barry Song, Ryan Roberts,
    Yu Zhao, SeongJae Park, David Hildenbrand, Yosry Ahmed,
    Johannes Weiner, Matthew Wilcox, Nhat Pham, Chengming Zhou,
    Andrew Morton, linux-kernel@vger.kernel.org, Kairui Song
Subject: [RFC PATCH 06/10] mm/swap: switch to use multi index entries
Date: Wed, 27 Mar 2024 02:50:28 +0800
Message-ID: <20240326185032.72159-7-ryncsn@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com>
References: <20240326185032.72159-1-ryncsn@gmail.com>
Reply-To: Kairui Song
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

From: Kairui Song

Now that all explicit shadow clearing is gone and every swapin/swapout
path goes through the swap cache, switch the swap cache to multi-index
entries, so swapping out THP will be faster, while also using less
memory.
Test result of sequential swapin/out of 30G zero page on ZRAM:

               Before (us)  After (us)
Swapout:         33648529    33713283
Swapin:          40667696    40954646
Swapout (THP):    7658664     6921176 (+9.7%)
Swapin (THP):    40602278    40891953

And after swapping out 30G with THP, the radix node usage dropped by a lot:

Before: radix_tree_node 73728K
After:  radix_tree_node  7056K (-94%)

Signed-off-by: Kairui Song
---
 mm/filemap.c     | 27 +++++++++++++++++
 mm/huge_memory.c | 77 +++++++++++++++++++-----------------------------
 mm/internal.h    |  2 ++
 mm/swap_state.c  | 54 ++++++++++-----------------------
 4 files changed, 75 insertions(+), 85 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 0ccdc9e92764..5e8e3fd26b8d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -919,6 +919,33 @@ static int __filemap_lock_store(struct xa_state *xas, struct folio *folio,
 	return xas_error(xas);
 }

+int __filemap_add_swapcache(struct address_space *mapping, struct folio *folio,
+			    pgoff_t index, gfp_t gfp, void **shadowp)
+{
+	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
+	long nr;
+	int ret;
+
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_swapcache(folio), folio);
+	mapping_set_update(&xas, mapping);
+
+	nr = folio_nr_pages(folio);
+	folio_ref_add(folio, nr);
+
+	ret = __filemap_lock_store(&xas, folio, index, gfp, shadowp);
+	if (likely(!ret)) {
+		mapping->nrpages += nr;
+		__node_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+		__lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr);
+		xas_unlock_irq(&xas);
+	} else {
+		folio_put_refs(folio, nr);
+	}
+
+	return ret;
+}
+
 noinline int __filemap_add_folio(struct address_space *mapping,
 		struct folio *folio, pgoff_t index, gfp_t gfp, void **shadowp)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9859aa4f7553..4fd2f74b94a9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2886,14 +2886,12 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	lru_add_page_tail(head, page_tail,
lruvec, list); } =20 -static void __split_huge_page(struct page *page, struct list_head *list, - pgoff_t end, unsigned int new_order) +static void __split_huge_page(struct address_space *mapping, struct page *= page, + struct list_head *list, pgoff_t end, unsigned int new_order) { struct folio *folio =3D page_folio(page); struct page *head =3D &folio->page; struct lruvec *lruvec; - struct address_space *swap_cache =3D NULL; - unsigned long offset =3D 0; int i, nr_dropped =3D 0; unsigned int new_nr =3D 1 << new_order; int order =3D folio_order(folio); @@ -2902,12 +2900,6 @@ static void __split_huge_page(struct page *page, str= uct list_head *list, /* complete memcg works before add pages to LRU */ split_page_memcg(head, order, new_order); =20 - if (folio_test_anon(folio) && folio_test_swapcache(folio)) { - offset =3D swp_offset(folio->swap); - swap_cache =3D swap_address_space(folio->swap); - xa_lock(&swap_cache->i_pages); - } - /* lock lru list/PageCompound, ref frozen by page_ref_freeze */ lruvec =3D folio_lruvec_lock(folio); =20 @@ -2919,18 +2911,18 @@ static void __split_huge_page(struct page *page, st= ruct list_head *list, if (head[i].index >=3D end) { struct folio *tail =3D page_folio(head + i); =20 - if (shmem_mapping(folio->mapping)) + if (shmem_mapping(mapping)) nr_dropped++; else if (folio_test_clear_dirty(tail)) folio_account_cleaned(tail, - inode_to_wb(folio->mapping->host)); + inode_to_wb(mapping->host)); __filemap_remove_folio(tail, NULL); folio_put(tail); } else if (!PageAnon(page)) { - __xa_store(&folio->mapping->i_pages, head[i].index, + __xa_store(&mapping->i_pages, head[i].index, head + i, 0); - } else if (swap_cache) { - __xa_store(&swap_cache->i_pages, offset + i, + } else if (folio_test_swapcache(folio)) { + __xa_store(&mapping->i_pages, swp_offset(folio->swap) + i, head + i, 0); } } @@ -2948,23 +2940,17 @@ static void __split_huge_page(struct page *page, st= ruct list_head *list, split_page_owner(head, order, new_order); =20 /* See comment in 
__split_huge_page_tail() */ - if (folio_test_anon(folio)) { + if (mapping) { /* Additional pin to swap cache */ - if (folio_test_swapcache(folio)) { - folio_ref_add(folio, 1 + new_nr); - xa_unlock(&swap_cache->i_pages); - } else { - folio_ref_inc(folio); - } - } else { - /* Additional pin to page cache */ folio_ref_add(folio, 1 + new_nr); - xa_unlock(&folio->mapping->i_pages); + xa_unlock(&mapping->i_pages); + } else { + folio_ref_inc(folio); } local_irq_enable(); =20 if (nr_dropped) - shmem_uncharge(folio->mapping->host, nr_dropped); + shmem_uncharge(mapping->host, nr_dropped); remap_page(folio, nr); =20 if (folio_test_swapcache(folio)) @@ -3043,11 +3029,12 @@ int split_huge_page_to_list_to_order(struct page *p= age, struct list_head *list, struct deferred_split *ds_queue =3D get_deferred_split_queue(folio); /* reset xarray order to new order after split */ XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order); + struct address_space *mapping =3D folio_mapping(folio);; struct anon_vma *anon_vma =3D NULL; - struct address_space *mapping =3D NULL; int extra_pins, ret; pgoff_t end; bool is_hzp; + gfp_t gfp; =20 VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); VM_BUG_ON_FOLIO(!folio_test_large(folio), folio); @@ -3079,7 +3066,6 @@ int split_huge_page_to_list_to_order(struct page *pag= e, struct list_head *list, } } =20 - is_hzp =3D is_huge_zero_page(&folio->page); if (is_hzp) { pr_warn_ratelimited("Called split_huge_page for huge zero page\n"); @@ -3089,6 +3075,17 @@ int split_huge_page_to_list_to_order(struct page *pa= ge, struct list_head *list, if (folio_test_writeback(folio)) return -EBUSY; =20 + if (mapping) { + gfp =3D current_gfp_context(mapping_gfp_mask(mapping) & + GFP_RECLAIM_MASK); + + xas_split_alloc(&xas, folio, folio_order(folio), gfp); + if (xas_error(&xas)) { + ret =3D xas_error(&xas); + goto out; + } + } + if (folio_test_anon(folio)) { /* * The caller does not necessarily hold an mmap_lock that would @@ -3104,33 +3101,19 @@ int 
split_huge_page_to_list_to_order(struct page *p= age, struct list_head *list, goto out; } end =3D -1; - mapping =3D NULL; anon_vma_lock_write(anon_vma); } else { - gfp_t gfp; - - mapping =3D folio->mapping; - /* Truncated ? */ if (!mapping) { ret =3D -EBUSY; goto out; } =20 - gfp =3D current_gfp_context(mapping_gfp_mask(mapping) & - GFP_RECLAIM_MASK); - if (!filemap_release_folio(folio, gfp)) { ret =3D -EBUSY; goto out; } =20 - xas_split_alloc(&xas, folio, folio_order(folio), gfp); - if (xas_error(&xas)) { - ret =3D xas_error(&xas); - goto out; - } - anon_vma =3D NULL; i_mmap_lock_read(mapping); =20 @@ -3189,7 +3172,9 @@ int split_huge_page_to_list_to_order(struct page *pag= e, struct list_head *list, int nr =3D folio_nr_pages(folio); =20 xas_split(&xas, folio, folio_order(folio)); - if (folio_test_pmd_mappable(folio) && + + if (!folio_test_anon(folio) && + folio_test_pmd_mappable(folio) && new_order < HPAGE_PMD_ORDER) { if (folio_test_swapbacked(folio)) { __lruvec_stat_mod_folio(folio, @@ -3202,7 +3187,7 @@ int split_huge_page_to_list_to_order(struct page *pag= e, struct list_head *list, } } =20 - __split_huge_page(page, list, end, new_order); + __split_huge_page(mapping, page, list, end, new_order); ret =3D 0; } else { spin_unlock(&ds_queue->split_queue_lock); @@ -3218,9 +3203,9 @@ int split_huge_page_to_list_to_order(struct page *pag= e, struct list_head *list, if (anon_vma) { anon_vma_unlock_write(anon_vma); put_anon_vma(anon_vma); - } - if (mapping) + } else { i_mmap_unlock_read(mapping); + } out: xas_destroy(&xas); count_vm_event(!ret ? 
THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED); diff --git a/mm/internal.h b/mm/internal.h index 7e486f2c502c..b2bbfd3c2b50 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1059,6 +1059,8 @@ struct migration_target_control { */ size_t splice_folio_into_pipe(struct pipe_inode_info *pipe, struct folio *folio, loff_t fpos, size_t size); +int __filemap_add_swapcache(struct address_space *mapping, struct folio *f= olio, + pgoff_t index, gfp_t gfp, void **shadowp); =20 /* * mm/vmalloc.c diff --git a/mm/swap_state.c b/mm/swap_state.c index b84e7b0ea4a5..caf69696f47c 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -90,48 +90,22 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t = entry, { struct address_space *address_space =3D swap_address_space(entry); pgoff_t idx =3D swp_offset(entry); - XA_STATE_ORDER(xas, &address_space->i_pages, idx, folio_order(folio)); - unsigned long i, nr =3D folio_nr_pages(folio); - void *old; - - xas_set_update(&xas, workingset_update_node); + int ret; =20 VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio); VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio); =20 - folio_ref_add(folio, nr); folio_set_swapcache(folio); folio->swap =3D entry; =20 - do { - xas_lock_irq(&xas); - xas_create_range(&xas); - if (xas_error(&xas)) - goto unlock; - for (i =3D 0; i < nr; i++) { - VM_BUG_ON_FOLIO(xas.xa_index !=3D idx + i, folio); - if (shadowp) { - old =3D xas_load(&xas); - if (xa_is_value(old)) - *shadowp =3D old; - } - xas_store(&xas, folio); - xas_next(&xas); - } - address_space->nrpages +=3D nr; - __node_stat_mod_folio(folio, NR_FILE_PAGES, nr); - __lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr); -unlock: - xas_unlock_irq(&xas); - } while (xas_nomem(&xas, gfp)); - - if (!xas_error(&xas)) - return 0; + ret =3D __filemap_add_swapcache(address_space, folio, idx, gfp, shadowp); + if (ret) { + folio_clear_swapcache(folio); + folio->swap.val =3D 0; + } =20 - folio_clear_swapcache(folio); - 
folio_ref_sub(folio, nr); - return xas_error(&xas); + return ret; } =20 /* @@ -142,7 +116,6 @@ void __delete_from_swap_cache(struct folio *folio, swp_entry_t entry, void *shadow) { struct address_space *address_space =3D swap_address_space(entry); - int i; long nr =3D folio_nr_pages(folio); pgoff_t idx =3D swp_offset(entry); XA_STATE(xas, &address_space->i_pages, idx); @@ -153,11 +126,9 @@ void __delete_from_swap_cache(struct folio *folio, VM_BUG_ON_FOLIO(!folio_test_swapcache(folio), folio); VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio); =20 - for (i =3D 0; i < nr; i++) { - void *entry =3D xas_store(&xas, shadow); - VM_BUG_ON_PAGE(entry !=3D folio, entry); - xas_next(&xas); - } + xas_set_order(&xas, idx, folio_order(folio)); + xas_store(&xas, shadow); + folio->swap.val =3D 0; folio_clear_swapcache(folio); address_space->nrpages -=3D nr; @@ -252,6 +223,11 @@ void clear_shadow_from_swap_cache(swp_entry_t entry) =20 xas_set_update(&xas, workingset_update_node); =20 + /* + * On unmap, it may delete a larger order shadow here. This is mostly + * fine since partially mapped folios are split on swap out + * and leave shadows with order 0. 
+ */ xa_lock_irq(&address_space->i_pages); if (xa_is_value(xas_load(&xas))) xas_store(&xas, NULL); --=20 2.43.0 From nobody Mon Feb 9 07:19:27 2026 From: Kairui Song To: linux-mm@kvack.org Cc: "Huang, Ying" , Chris Li , Minchan Kim , Barry Song , Ryan Roberts , Yu Zhao , SeongJae Park , David Hildenbrand , Yosry Ahmed , Johannes Weiner , Matthew Wilcox , Nhat Pham , Chengming Zhou , Andrew Morton , linux-kernel@vger.kernel.org, Kairui Song Subject: [RFC PATCH 07/10] mm/swap: rename __read_swap_cache_async to swap_cache_alloc_or_get Date: Wed, 27 Mar 2024 02:50:29 +0800 Message-ID: <20240326185032.72159-8-ryncsn@gmail.com> In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com> References: <20240326185032.72159-1-ryncsn@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Kairui Song __read_swap_cache_async() is widely used to allocate a folio and ensure it is in the swap cache, or to get the existing folio if one is already there. Rename it to better reflect this usage. 
Signed-off-by: Kairui Song --- mm/swap.h | 2 +- mm/swap_state.c | 22 +++++++++++----------- mm/swapfile.c | 2 +- mm/zswap.c | 2 +- 4 files changed, 14 insertions(+), 14 deletions(-) diff --git a/mm/swap.h b/mm/swap.h index 7721ddb3bdbc..5fbbc4a42787 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -48,7 +48,7 @@ struct folio *filemap_get_incore_folio(struct address_spa= ce *mapping, struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct vm_area_struct *vma, unsigned long addr, struct swap_iocb **plug); -struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags, +struct folio *swap_cache_alloc_or_get(swp_entry_t entry, gfp_t gfp_flags, struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated, bool skip_if_exists); struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag, diff --git a/mm/swap_state.c b/mm/swap_state.c index caf69696f47c..cd1a16afcd9f 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -385,7 +385,7 @@ struct folio *filemap_get_incore_folio(struct address_s= pace *mapping, return folio; } =20 -struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, +struct folio *swap_cache_alloc_or_get(swp_entry_t entry, gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated, bool skip_if_exists) { @@ -443,12 +443,12 @@ struct folio *__read_swap_cache_async(swp_entry_t ent= ry, gfp_t gfp_mask, goto fail_put_swap; =20 /* - * Protect against a recursive call to __read_swap_cache_async() + * Protect against a recursive call to swap_cache_alloc_or_get() * on the same entry waiting forever here because SWAP_HAS_CACHE * is set but the folio is not the swap cache yet. This can * happen today if mem_cgroup_swapin_charge_folio() below * triggers reclaim through zswap, which may call - * __read_swap_cache_async() in the writeback path. + * swap_cache_alloc_or_get() in the writeback path. 
*/ if (skip_if_exists) goto fail_put_swap; @@ -457,7 +457,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry= , gfp_t gfp_mask, * We might race against __delete_from_swap_cache(), and * stumble across a swap_map entry whose SWAP_HAS_CACHE * has not yet been cleared. Or race against another - * __read_swap_cache_async(), which has set SWAP_HAS_CACHE + * swap_cache_alloc_or_get(), which has set SWAP_HAS_CACHE * in swap_map, but not yet added its folio to swap cache. */ schedule_timeout_uninterruptible(1); @@ -505,7 +505,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry= , gfp_t gfp_mask, * the swap entry is no longer in use. * * get/put_swap_device() aren't needed to call this function, because - * __read_swap_cache_async() call them and swap_read_folio() holds the + * swap_cache_alloc_or_get() call them and swap_read_folio() holds the * swap cache folio lock. */ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, @@ -518,7 +518,7 @@ struct folio *read_swap_cache_async(swp_entry_t entry, = gfp_t gfp_mask, struct folio *folio; =20 mpol =3D get_vma_policy(vma, addr, 0, &ilx); - folio =3D __read_swap_cache_async(entry, gfp_mask, mpol, ilx, + folio =3D swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx, &page_allocated, false); mpol_cond_put(mpol); =20 @@ -634,7 +634,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry,= gfp_t gfp_mask, blk_start_plug(&plug); for (offset =3D start_offset; offset <=3D end_offset ; offset++) { /* Ok, do the async read-ahead now */ - folio =3D __read_swap_cache_async( + folio =3D swap_cache_alloc_or_get( swp_entry(swp_type(entry), offset), gfp_mask, mpol, ilx, &page_allocated, false); if (!folio) @@ -653,7 +653,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry,= gfp_t gfp_mask, lru_add_drain(); /* Push any new pages onto the LRU now */ skip: /* The page was likely read above, so no need for plugging here */ - folio =3D __read_swap_cache_async(entry, gfp_mask, mpol, ilx, + folio =3D 
swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx, &page_allocated, false); if (unlikely(page_allocated)) { zswap_folio_swapin(folio); @@ -809,7 +809,7 @@ static struct folio *swap_vma_readahead(swp_entry_t tar= g_entry, gfp_t gfp_mask, continue; pte_unmap(pte); pte =3D NULL; - folio =3D __read_swap_cache_async(entry, gfp_mask, mpol, ilx, + folio =3D swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx, &page_allocated, false); if (!folio) continue; @@ -829,7 +829,7 @@ static struct folio *swap_vma_readahead(swp_entry_t tar= g_entry, gfp_t gfp_mask, lru_add_drain(); skip: /* The folio was likely read above, so no need for plugging here */ - folio =3D __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx, + folio =3D swap_cache_alloc_or_get(targ_entry, gfp_mask, mpol, targ_ilx, &page_allocated, false); if (unlikely(page_allocated)) { zswap_folio_swapin(folio); @@ -855,7 +855,7 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gf= p_mask, pgoff_t ilx; =20 mpol =3D get_vma_policy(vmf->vma, vmf->address, 0, &ilx); - folio =3D __read_swap_cache_async(entry, gfp_mask, mpol, ilx, + folio =3D swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx, &page_allocated, false); mpol_cond_put(mpol); =20 diff --git a/mm/swapfile.c b/mm/swapfile.c index bafae23c0f26..332ce4e578e8 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1249,7 +1249,7 @@ static unsigned char __swap_entry_free_locked(struct = swap_info_struct *p, * CPU1 CPU2 * do_swap_page() * ... 
swapoff+swapon - * __read_swap_cache_async() + * swap_cache_alloc_or_get() * swapcache_prepare() * __swap_duplicate() * // check swap_map diff --git a/mm/zswap.c b/mm/zswap.c index 9dec853647c8..e4d96816be70 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -1126,7 +1126,7 @@ static int zswap_writeback_entry(struct zswap_entry *= entry, =20 /* try to allocate swap cache folio */ mpol =3D get_task_policy(current); - folio =3D __read_swap_cache_async(swpentry, GFP_KERNEL, mpol, + folio =3D swap_cache_alloc_or_get(swpentry, GFP_KERNEL, mpol, NO_INTERLEAVE_INDEX, &folio_was_allocated, true); if (!folio) return -ENOMEM; --=20 2.43.0 From nobody Mon Feb 9 07:19:27 2026 From: Kairui Song To: linux-mm@kvack.org Cc: "Huang, Ying" , Chris Li , Minchan Kim , Barry Song , Ryan Roberts , Yu Zhao , SeongJae Park , David Hildenbrand , Yosry Ahmed , Johannes Weiner , Matthew Wilcox , Nhat Pham , Chengming Zhou , Andrew Morton , linux-kernel@vger.kernel.org, Kairui Song Subject: [RFC PATCH 08/10] mm/swap: use swap cache as a synchronization layer Date: Wed, 27 Mar 2024 02:50:30 +0800 Message-ID: <20240326185032.72159-9-ryncsn@gmail.com> In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com> References: <20240326185032.72159-1-ryncsn@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Kairui Song Rework how swapins are synchronized. Instead of spinning on the swap map, simply use swap cache insertion as the synchronization point: the winner inserts a new locked folio, and losers just get that locked folio and wait on unlock. This way no extra mechanism is needed, and there is a unified way to ensure all swapins are race free. 
The swap map is not removed: the HAS_CACHE bit is still set, but it is updated in a particular order so that it stays eventually consistent with the xarray state (this works fine with slot cache reservation), and it is mostly used for fast cache state lookup. Two helpers can now be used to add a folio to the swap cache: - swap_cache_add_or_get, for adding a folio whose entry is in use (swapin). - swap_cache_add_wait, for adding a folio with a freed entry (swapout). swap_cache_add_or_get adds a folio to the swap cache; it returns NULL if the folio was already swapped in or on OOM. It follows these steps: 1. The caller must ensure the folio is newly allocated; this helper locks the folio. 2. Try to add the folio to the Xarray (add_to_swap_cache). 3. If (2) succeeded, try to set SWAP_HAS_CACHE with swapcache_prepare. This step now only fails if the entry has been freed, which indicates the folio was swapped in by someone else; if so, revert the above steps and return NULL. 4. If (2) failed, try to look up and return the locked folio. If a folio is returned, the caller should try to lock the folio and check whether PG_swapcache is still set. If not, a racer is hitting OOM or the folio was already swapped in; this is easy to tell (by checking the page table). The caller can bail out or retry conditionally. 5. If (4) failed to get a folio, the folio should have been swapped in by someone else, or a racer is hitting OOM. swap_cache_add_wait is for adding a folio with a freed entry to the swap cache (the swapout path). Because swap_cache_add_or_get reverts quickly if it accidentally added a folio with a freed entry to the swap cache, swap_cache_add_wait simply waits out the race. To remove a folio from the swap cache, one has to follow these steps: 1. Start by acquiring the folio lock. 2. Check if PG_swapcache is still set; if not, this folio has already been removed. 3. Call put_swap_folio() to clear the SWAP_HAS_CACHE flag in the swap map first; do this before removing the folio from the Xarray to ensure insertions can successfully update the swap map. 4. 
Remove the folio from the Xarray with __delete_from_swap_cache. 5. Clear the folio flag PG_swapcache, unlock and put it. Or just call delete_from_swap_cache after checking that PG_swapcache is still set on the folio. Note that between step 3 and step 4 an entry may get loaded into the swap slot cache, but this is OK because swapout uses swap_cache_add_wait, which waits for step 4. By using the swap cache as the synchronization point for swapin/swapout, this helps remove a lot of hacks and fixes for the synchronization: schedule_timeout_uninterruptible(1) introduced by (just wait on the folio now): - commit 13ddaf26be32 ("mm/swap: fix race when skipping swapcache") - commit 029c4628b2eb ("mm: swap: get rid of livelock in swapin readahead") skip_if_exists introduced by (calls now always return, they never wait inside): - commit a65b0e7607cc ("zswap: make shrinking memcg-aware") and the swapoff workaround by (the swap map is now consistent with the xarray, and the slot cache is disabled, so we only need to check in swapoff now): - commit ba81f8384254 ("mm/swap: skip readahead only when swap slot cache is enabled") Test result of sequential swapin/out of 30G of zero pages on ZRAM: Before (us) After (us) Swapout: 33713283 33827215 Swapin: 40954646 39466754 (+3.7%) Swapout (THP): 6921176 6917709 Swapin (THP) : 40891953 39566916 (+3.3%) Signed-off-by: Kairui Song --- mm/shmem.c | 5 +- mm/swap.h | 18 ++-- mm/swap_state.c | 217 +++++++++++++++++++++++++----------------------- mm/swapfile.c | 13 ++- mm/vmscan.c | 2 +- mm/zswap.c | 2 +- 6 files changed, 132 insertions(+), 125 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index 0aad0d9a621b..51e4593f9e2e 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1512,9 +1512,8 @@ static int shmem_writepage(struct page *page, struct = writeback_control *wbc) if (list_empty(&info->swaplist)) list_add(&info->swaplist, &shmem_swaplist); =20 - if (add_to_swap_cache(folio, swap, - __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN, - NULL) =3D=3D 0) { + if (!swap_cache_add_wait(folio, swap, + __GFP_HIGH | 
__GFP_NOMEMALLOC | __GFP_NOWARN)) { shmem_recalc_inode(inode, 0, 1); swap_shmem_alloc(swap); shmem_delete_from_page_cache(folio, swp_to_radix_entry(swap)); diff --git a/mm/swap.h b/mm/swap.h index 5fbbc4a42787..be2d1642b5d9 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -34,23 +34,20 @@ extern struct address_space *swapper_spaces[]; void show_swap_cache_info(void); bool add_to_swap(struct folio *folio); void *get_shadow_from_swap_cache(swp_entry_t entry); -int add_to_swap_cache(struct folio *folio, swp_entry_t entry, - gfp_t gfp, void **shadowp); void __delete_from_swap_cache(struct folio *folio, swp_entry_t entry, void *shadow); void delete_from_swap_cache(struct folio *folio); void clear_shadow_from_swap_cache(swp_entry_t entry); +int swap_cache_add_wait(struct folio *folio, swp_entry_t entry, gfp_t gfp); struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_area_struct *vma, unsigned long addr); struct folio *filemap_get_incore_folio(struct address_space *mapping, pgoff_t index); - struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct vm_area_struct *vma, unsigned long addr, struct swap_iocb **plug); struct folio *swap_cache_alloc_or_get(swp_entry_t entry, gfp_t gfp_flags, - struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated, - bool skip_if_exists); + struct mempolicy *mpol, pgoff_t ilx, bool *folio_allocated); struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag, struct mempolicy *mpol, pgoff_t ilx); struct folio *swapin_direct(swp_entry_t entry, gfp_t flag, @@ -109,6 +106,11 @@ static inline int swap_writepage(struct page *p, struc= t writeback_control *wbc) return 0; } =20 +static inline int swap_cache_add_wait(struct folio *folio, swp_entry_t ent= ry, gfp_t gfp) +{ + return -1; +} + static inline struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_area_struct *vma, unsigned long addr) { @@ -132,12 +134,6 @@ static inline void *get_shadow_from_swap_cache(swp_ent= ry_t entry) return NULL; } =20 
-static inline int add_to_swap_cache(struct folio *folio, swp_entry_t entry, - gfp_t gfp_mask, void **shadowp) -{ - return -1; -} - static inline void __delete_from_swap_cache(struct folio *folio, swp_entry_t entry, void *shadow) { diff --git a/mm/swap_state.c b/mm/swap_state.c index cd1a16afcd9f..b5ea13295e17 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -85,8 +85,8 @@ void *get_shadow_from_swap_cache(swp_entry_t entry) * add_to_swap_cache resembles filemap_add_folio on swapper_space, * but sets SwapCache flag and private instead of mapping and index. */ -int add_to_swap_cache(struct folio *folio, swp_entry_t entry, - gfp_t gfp, void **shadowp) +static int add_to_swap_cache(struct folio *folio, swp_entry_t entry, + gfp_t gfp, void **shadowp) { struct address_space *address_space =3D swap_address_space(entry); pgoff_t idx =3D swp_offset(entry); @@ -169,14 +169,16 @@ bool add_to_swap(struct folio *folio) /* * Add it to the swap cache. */ - err =3D add_to_swap_cache(folio, entry, - __GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL); - if (err) + err =3D swap_cache_add_wait(folio, entry, + __GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN); + if (err) { /* - * add_to_swap_cache() doesn't return -EEXIST, so we can safely - * clear SWAP_HAS_CACHE flag. + * swap_cache_add_wait() doesn't return -EEXIST, so we can + * safely clear SWAP_HAS_CACHE flag. */ goto fail; + } + /* * Normally the folio will be dirtied in unmap because its * pte should be dirty. A special case is MADV_FREE page. 
The @@ -208,11 +210,12 @@ void delete_from_swap_cache(struct folio *folio) swp_entry_t entry =3D folio->swap; struct address_space *address_space =3D swap_address_space(entry); =20 + put_swap_folio(folio, entry); + xa_lock_irq(&address_space->i_pages); __delete_from_swap_cache(folio, entry, NULL); xa_unlock_irq(&address_space->i_pages); =20 - put_swap_folio(folio, entry); folio_ref_sub(folio, folio_nr_pages(folio)); } =20 @@ -385,119 +388,123 @@ struct folio *filemap_get_incore_folio(struct addre= ss_space *mapping, return folio; } =20 -struct folio *swap_cache_alloc_or_get(swp_entry_t entry, gfp_t gfp_mask, - struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated, - bool skip_if_exists) +/* + * Try add a new folio, return NULL if the entry is swapped in by someone + * else or hitting OOM. + */ +static struct folio *swap_cache_add_or_get(struct folio *folio, + swp_entry_t entry, gfp_t gfp_mask) { - struct swap_info_struct *si; - struct folio *folio; + int ret =3D 0; void *shadow =3D NULL; + struct address_space *address_space =3D swap_address_space(entry); =20 - *new_page_allocated =3D false; - si =3D get_swap_device(entry); - if (!si) - return NULL; - - for (;;) { - int err; - /* - * First check the swap cache. Since this is normally - * called after swap_cache_get_folio() failed, re-calling - * that would confuse statistics. - */ - folio =3D filemap_get_folio(swap_address_space(entry), - swp_offset(entry)); - if (!IS_ERR(folio)) - goto got_folio; - - /* - * Just skip read ahead for unused swap slot. - * During swap_off when swap_slot_cache is disabled, - * we have to handle the race between putting - * swap entry in swap cache and marking swap slot - * as SWAP_HAS_CACHE. That's done in later part of code or - * else swap_off will be aborted if we return NULL. - */ - if (!swap_swapcount(si, entry) && swap_slot_cache_enabled) - goto fail_put_swap; - - /* - * Get a new folio to read into from swap. 
Allocate it now, - * before marking swap_map SWAP_HAS_CACHE, when -EEXIST will - * cause any racers to loop around until we add it to cache. - */ - folio =3D (struct folio *)alloc_pages_mpol(gfp_mask, 0, - mpol, ilx, numa_node_id()); - if (!folio) - goto fail_put_swap; - - /* - * Swap entry may have been freed since our caller observed it. - */ - err =3D swapcache_prepare(entry); - if (!err) - break; - - folio_put(folio); - if (err !=3D -EEXIST) - goto fail_put_swap; - - /* - * Protect against a recursive call to swap_cache_alloc_or_get() - * on the same entry waiting forever here because SWAP_HAS_CACHE - * is set but the folio is not the swap cache yet. This can - * happen today if mem_cgroup_swapin_charge_folio() below - * triggers reclaim through zswap, which may call - * swap_cache_alloc_or_get() in the writeback path. - */ - if (skip_if_exists) - goto fail_put_swap; + /* If folio is NULL, simply go look up the swapcache */ + if (folio) { + __folio_set_locked(folio); + __folio_set_swapbacked(folio); + ret =3D add_to_swap_cache(folio, entry, gfp_mask, &shadow); + if (ret) + __folio_clear_locked(folio); + } =20 - /* - * We might race against __delete_from_swap_cache(), and - * stumble across a swap_map entry whose SWAP_HAS_CACHE - * has not yet been cleared. Or race against another - * swap_cache_alloc_or_get(), which has set SWAP_HAS_CACHE - * in swap_map, but not yet added its folio to swap cache. - */ - schedule_timeout_uninterruptible(1); + if (!folio || ret) { + /* If the folio is already added, return it untouched. */ + folio =3D filemap_get_folio(address_space, swp_offset(entry)); + /* If not, either the entry has been freed or we are OOM. */ + if (IS_ERR(folio)) + return NULL; + return folio; } =20 /* - * The swap entry is ours to swap in. Prepare the new folio. + * The folio is now added to the swap cache, try to update the swap map + * to ensure the entry is still valid. If we accidentally added + * a stale entry, undo the add. 
*/ + ret =3D swapcache_prepare(entry); + if (unlikely(ret)) + goto fail_delete_cache; =20 - __folio_set_locked(folio); - __folio_set_swapbacked(folio); - + /* Charge and shadow check */ if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry)) - goto fail_unlock; - - /* May fail (-ENOMEM) if XArray node allocation failed. */ - if (add_to_swap_cache(folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow)) - goto fail_unlock; - + goto fail_put_flag; mem_cgroup_swapin_uncharge_swap(entry); - if (shadow) workingset_refault(folio, shadow); =20 - /* Caller will initiate read into locked folio */ + /* Return the newly added folio locked */ folio_add_lru(folio); - *new_page_allocated =3D true; -got_folio: - put_swap_device(si); return folio; =20 -fail_unlock: +fail_put_flag: put_swap_folio(folio, entry); +fail_delete_cache: + xa_lock_irq(&address_space->i_pages); + __delete_from_swap_cache(folio, entry, shadow); + xa_unlock_irq(&address_space->i_pages); + folio_ref_sub(folio, folio_nr_pages(folio)); folio_unlock(folio); - folio_put(folio); -fail_put_swap: - put_swap_device(si); + return NULL; } =20 +/* + * Try to add a folio to the swap cache, the caller must ensure the entry is freed. + * May block if swap_cache_alloc_or_get accidentally loaded a freed entry + * that will be removed very soon, so just wait and retry. 
+ */
+int swap_cache_add_wait(struct folio *folio, swp_entry_t entry, gfp_t gfp)
+{
+	int ret;
+	struct folio *wait_folio;
+
+	for (;;) {
+		ret = add_to_swap_cache(folio, entry, gfp, NULL);
+		if (ret != -EEXIST)
+			break;
+		wait_folio = filemap_get_folio(swap_address_space(entry),
+				swp_offset(entry));
+		if (!IS_ERR(wait_folio)) {
+			folio_wait_locked(wait_folio);
+			folio_put(wait_folio);
+		}
+	}
+
+	return ret;
+}
+
+struct folio *swap_cache_alloc_or_get(swp_entry_t entry, gfp_t gfp_mask,
+		struct mempolicy *mpol, pgoff_t ilx, bool *folio_allocated)
+{
+	struct folio *folio, *swapcache = NULL;
+	struct swap_info_struct *si;
+
+	/* Prevent swapoff from happening to us */
+	si = get_swap_device(entry);
+	if (!si)
+		goto out_no_device;
+
+	/* We are very likely the first user, alloc and try to add to the swapcache. */
+	folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0, mpol, ilx,
+					numa_node_id());
+	swapcache = swap_cache_add_or_get(folio, entry, gfp_mask);
+	if (swapcache != folio) {
+		folio_put(folio);
+		goto out_no_alloc;
+	}
+
+	put_swap_device(si);
+	*folio_allocated = true;
+	return swapcache;
+
+out_no_alloc:
+	put_swap_device(si);
+out_no_device:
+	*folio_allocated = false;
+	return swapcache;
+}
+
 /*
  * Locate a page of swap in physical memory, reserving swap cache space
  * and reading the disk if it is not already cached.
@@ -519,7 +526,7 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,

 	mpol = get_vma_policy(vma, addr, 0, &ilx);
 	folio = swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx,
-					&page_allocated, false);
+					&page_allocated);
 	mpol_cond_put(mpol);

 	if (page_allocated)
@@ -636,7 +643,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		/* Ok, do the async read-ahead now */
 		folio = swap_cache_alloc_or_get(
 				swp_entry(swp_type(entry), offset),
-				gfp_mask, mpol, ilx, &page_allocated, false);
+				gfp_mask, mpol, ilx, &page_allocated);
 		if (!folio)
 			continue;
 		if (page_allocated) {
@@ -654,7 +661,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 skip:
 	/* The page was likely read above, so no need for plugging here */
 	folio = swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx,
-					&page_allocated, false);
+					&page_allocated);
 	if (unlikely(page_allocated)) {
 		zswap_folio_swapin(folio);
 		swap_read_folio(folio, false, NULL);
@@ -810,7 +817,7 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 		pte_unmap(pte);
 		pte = NULL;
 		folio = swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx,
-						&page_allocated, false);
+						&page_allocated);
 		if (!folio)
 			continue;
 		if (page_allocated) {
@@ -830,7 +837,7 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 skip:
 	/* The folio was likely read above, so no need for plugging here */
 	folio = swap_cache_alloc_or_get(targ_entry, gfp_mask, mpol, targ_ilx,
-					&page_allocated, false);
+					&page_allocated);
 	if (unlikely(page_allocated)) {
 		zswap_folio_swapin(folio);
 		swap_read_folio(folio, false, NULL);
@@ -856,7 +863,7 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,

 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
 	folio = swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx,
-					&page_allocated, false);
+					&page_allocated);
 	mpol_cond_put(mpol);

 	if (page_allocated)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 332ce4e578e8..8225091d42b6 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -149,9 +149,10 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
 	 * in usual operations.
 	 */
 	if (folio_trylock(folio)) {
-		if ((flags & TTRS_ANYWAY) ||
+		if (folio_test_swapcache(folio) &&
+		    ((flags & TTRS_ANYWAY) ||
 		     ((flags & TTRS_UNMAPPED) && !folio_mapped(folio)) ||
-		     ((flags & TTRS_FULL) && mem_cgroup_swap_full(folio)))
+		     ((flags & TTRS_FULL) && mem_cgroup_swap_full(folio))))
 			ret = folio_free_swap(folio);
 		folio_unlock(folio);
 	}
@@ -1344,7 +1345,8 @@ void swap_free(swp_entry_t entry)
 }

 /*
- * Called after dropping swapcache to decrease refcnt to swap entries.
+ * Called before dropping swapcache, free the entry and ensure
+ * new insertion will succeed.
  */
 void put_swap_folio(struct folio *folio, swp_entry_t entry)
 {
@@ -1897,13 +1899,15 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		}
 		if (!folio) {
 			swp_count = READ_ONCE(si->swap_map[offset]);
-			if (swp_count == 0 || swp_count == SWAP_MAP_BAD)
+			if (swap_count(swp_count) == 0 || swp_count == SWAP_MAP_BAD)
 				continue;
 			return -ENOMEM;
 		}

 		folio_lock(folio);
 		folio_wait_writeback(folio);
+		if (!folio_test_swapcache(folio))
+			goto free_folio;
 		ret = unuse_pte(vma, pmd, addr, entry, folio);
 		if (ret < 0) {
 			folio_unlock(folio);
@@ -1912,6 +1916,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		}

 		folio_free_swap(folio);
+free_folio:
 		folio_unlock(folio);
 		folio_put(folio);
 	} while (addr += PAGE_SIZE, addr != end);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3ef654addd44..c3db39393428 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -732,10 +732,10 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,

 		if (reclaimed && !mapping_exiting(mapping))
 			shadow = workingset_eviction(folio, target_memcg);
+		put_swap_folio(folio, swap);
 		__delete_from_swap_cache(folio, swap, shadow);
 		mem_cgroup_swapout(folio, swap);
 		xa_unlock_irq(&mapping->i_pages);
-		put_swap_folio(folio, swap);
 	} else {
 		void (*free_folio)(struct folio *);

diff --git a/mm/zswap.c b/mm/zswap.c
index e4d96816be70..c80e33c74235 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1127,7 +1127,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	/* try to allocate swap cache folio */
 	mpol = get_task_policy(current);
 	folio = swap_cache_alloc_or_get(swpentry, GFP_KERNEL, mpol,
-			NO_INTERLEAVE_INDEX, &folio_was_allocated, true);
+			NO_INTERLEAVE_INDEX, &folio_was_allocated);
 	if (!folio)
 		return -ENOMEM;

-- 
2.43.0

From nobody Mon Feb 9 07:19:27 2026
From: Kairui Song
To: linux-mm@kvack.org
Cc: "Huang, Ying", Chris Li, Minchan Kim, Barry Song, Ryan Roberts,
 Yu Zhao, SeongJae Park, David Hildenbrand, Yosry Ahmed,
 Johannes Weiner, Matthew Wilcox, Nhat Pham, Chengming Zhou,
 Andrew Morton, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 09/10] mm/swap: delay the swap cache lookup for swapin
Date: Wed, 27 Mar 2024 02:50:31 +0800
Message-ID: <20240326185032.72159-10-ryncsn@gmail.com>
In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com>
References: <20240326185032.72159-1-ryncsn@gmail.com>

Currently we do a swap cache lookup first, then call into the ordinary
swapin path. But every swapin path calls swap_cache_add_or_get, which
does a swap cache lookup again on race, because the first lookup is racy
and may miss the swap cache. When that race happens (which can be
frequent on a busy device), the caller has no way of knowing it, cannot
distinguish a minor from a major fault, and the first lookup is
redundant.
So do the swapcache lookup and readahead update late: defer them to
swap_cache_alloc_or_get, and make them faster by skipping the lookup
when the HAS_CACHE flag is not set. The early check is less accurate,
but the later lookup always ensures we never miss an existing swap
cache entry. This gives callers fully accurate swap cache usage info,
improves minor/major page fault accounting, and also improves
performance.

Test result of sequential swapin/out of 30G zero page on ZRAM:

               Before (us)  After (us)
Swapout:       33827215     33853883
Swapin:        39466754     38336519 (+2.9%)
Swapout (THP): 6917709      6814619
Swapin (THP):  39566916     38383367 (+3.0%)

Signed-off-by: Kairui Song
---
 mm/memory.c     |  45 ++++++++----------
 mm/shmem.c      |  39 +++++++---------
 mm/swap.h       |  16 +++++--
 mm/swap_state.c | 122 +++++++++++++++++++++++++++++-------------------
 mm/swapfile.c   |  32 +++++++------
 5 files changed, 141 insertions(+), 113 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 357d239ee2f6..774a912eb46d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3932,6 +3932,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
+	bool folio_allocated = false;
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
@@ -3991,35 +3992,29 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(!si))
 		goto out;

-	folio = swap_cache_get_folio(entry, vma, vmf->address);
-	if (folio)
-		page = folio_file_page(folio, swp_offset(entry));
-	swapcache = folio;
+	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1) {
+		folio = swapin_direct(entry, GFP_HIGHUSER_MOVABLE, vmf, &folio_allocated);
+	} else {
+		folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf, &folio_allocated);
+	}

 	if (!folio) {
-		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
-		    __swap_count(entry) == 1) {
-			folio = swapin_direct(entry, GFP_HIGHUSER_MOVABLE, vmf);
-		} else {
-			folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
-		}
-
-		if (!folio) {
-			/*
-			 * Back out if somebody else faulted in this pte
-			 * while we released the pte lock.
-			 */
-			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-					vmf->address, &vmf->ptl);
-			if (likely(vmf->pte &&
-				   pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
-				ret = VM_FAULT_OOM;
-			goto unlock;
-		}
+		/*
+		 * Back out if somebody else faulted in this pte
+		 * while we released the pte lock.
+		 */
+		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+				vmf->address, &vmf->ptl);
+		if (likely(vmf->pte &&
+			   pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+			ret = VM_FAULT_OOM;
+		goto unlock;
+	}

-		swapcache = folio;
-		page = folio_file_page(folio, swp_offset(entry));
+	swapcache = folio;
+	page = folio_file_page(folio, swp_offset(entry));

+	if (folio_allocated) {
 		/* Had to read the page from swap area: Major fault */
 		ret = VM_FAULT_MAJOR;
 		count_vm_event(PGMAJFAULT);
diff --git a/mm/shmem.c b/mm/shmem.c
index 51e4593f9e2e..7884bbe28731 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1570,20 +1570,6 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
 static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
 			pgoff_t index, unsigned int order, pgoff_t *ilx);

-static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
-		struct shmem_inode_info *info, pgoff_t index)
-{
-	struct mempolicy *mpol;
-	pgoff_t ilx;
-	struct folio *folio;
-
-	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
-	folio = swap_cluster_readahead(swap, gfp, mpol, ilx);
-	mpol_cond_put(mpol);
-
-	return folio;
-}
-
 /*
  * Make sure huge_gfp is always more limited than limit_gfp.
  * Some of the flags set permissions, while others set limitations.
@@ -1857,9 +1843,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
+	bool folio_allocated = false;
 	struct swap_info_struct *si;
 	struct folio *folio = NULL;
+	struct mempolicy *mpol;
 	swp_entry_t swap;
+	pgoff_t ilx;
 	int error;

 	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
@@ -1878,22 +1867,28 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	}

 	/* Look it up and read it in.. */
-	folio = swap_cache_get_folio(swap, NULL, 0);
+	folio = swap_cache_try_get(swap);
 	if (!folio) {
-		/* Or update major stats only when swapin succeeds?? */
-		if (fault_type) {
-			*fault_type |= VM_FAULT_MAJOR;
-			count_vm_event(PGMAJFAULT);
-			count_memcg_event_mm(fault_mm, PGMAJFAULT);
-		}
 		/* Here we actually start the io */
-		folio = shmem_swapin_cluster(swap, gfp, info, index);
+		mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
+		folio = swap_cluster_readahead(swap, gfp, mpol, ilx, &folio_allocated);
+		mpol_cond_put(mpol);
 		if (!folio) {
 			error = -ENOMEM;
 			goto failed;
 		}
+
+		/* Update major stats only when swapin succeeds */
+		if (folio_allocated && fault_type) {
+			*fault_type |= VM_FAULT_MAJOR;
+			count_vm_event(PGMAJFAULT);
+			count_memcg_event_mm(fault_mm, PGMAJFAULT);
+		}
 	}

+	if (!folio_allocated)
+		swap_cache_update_ra(folio, NULL, 0);
+
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
 	if (!folio_test_swapcache(folio) ||
diff --git a/mm/swap.h b/mm/swap.h
index be2d1642b5d9..bd872b157950 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -39,7 +39,8 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(swp_entry_t entry);
 int swap_cache_add_wait(struct folio *folio, swp_entry_t entry, gfp_t gfp);
-struct folio *swap_cache_get_folio(swp_entry_t entry,
+struct folio *swap_cache_try_get(swp_entry_t entry);
+void swap_cache_update_ra(struct folio *folio,
 		struct vm_area_struct *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
 		pgoff_t index);
@@ -49,16 +50,18 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct folio *swap_cache_alloc_or_get(swp_entry_t entry, gfp_t gfp_flags,
 		struct mempolicy *mpol, pgoff_t ilx, bool *folio_allocated);
 struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
-		struct mempolicy *mpol, pgoff_t ilx);
+		struct mempolicy *mpol, pgoff_t ilx, bool *folio_allocated);
 struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
-		struct vm_fault *vmf);
+		struct vm_fault *vmf, bool *folio_allocated);
 struct folio *swapin_readahead(swp_entry_t entry, gfp_t flag,
-		struct vm_fault *vmf);
+		struct vm_fault *vmf, bool *folio_allocated);

 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
 	return swp_swap_info(folio->swap)->flags;
 }
+
+bool __swap_has_cache(swp_entry_t entry);
 #else /* CONFIG_SWAP */
 struct swap_iocb;
 static inline void swap_read_folio(struct folio *folio, bool do_poll,
@@ -151,5 +154,10 @@ static inline unsigned int folio_swap_flags(struct folio *folio)
 {
 	return 0;
 }
+
+static inline bool __swap_has_cache(swp_entry_t entry)
+{
+	return false;
+}
 #endif /* CONFIG_SWAP */
 #endif /* _MM_SWAP_H */
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b5ea13295e17..cf178dd1131a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -300,54 +300,54 @@ static inline bool swap_use_vma_readahead(void)
 }

 /*
- * Lookup a swap entry in the swap cache. A found folio will be returned
- * unlocked and with its refcount incremented - we rely on the kernel
- * lock getting page table operations atomic even if we drop the folio
- * lock before returning.
- *
- * Caller must lock the swap device or hold a reference to keep it valid.
+ * Try to get the swap cache, bail out quickly if the swapcache bit is not set.
  */
-struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_area_struct *vma, unsigned long addr)
+struct folio *swap_cache_try_get(swp_entry_t entry)
 {
 	struct folio *folio;

-	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
-	if (!IS_ERR(folio)) {
-		bool vma_ra = swap_use_vma_readahead();
-		bool readahead;
-
-		/*
-		 * At the moment, we don't support PG_readahead for anon THP
-		 * so let's bail out rather than confusing the readahead stat.
-		 */
-		if (unlikely(folio_test_large(folio)))
+	if (__swap_has_cache(entry)) {
+		folio = filemap_get_folio(swap_address_space(entry),
+				swp_offset(entry));
+		if (!IS_ERR(folio))
 			return folio;
+	}

-		readahead = folio_test_clear_readahead(folio);
-		if (vma && vma_ra) {
-			unsigned long ra_val;
-			int win, hits;
-
-			ra_val = GET_SWAP_RA_VAL(vma);
-			win = SWAP_RA_WIN(ra_val);
-			hits = SWAP_RA_HITS(ra_val);
-			if (readahead)
-				hits = min_t(int, hits + 1, SWAP_RA_HITS_MAX);
-			atomic_long_set(&vma->swap_readahead_info,
-					SWAP_RA_VAL(addr, win, hits));
-		}
+	return NULL;
+}

-		if (readahead) {
-			count_vm_event(SWAP_RA_HIT);
-			if (!vma || !vma_ra)
-				atomic_inc(&swapin_readahead_hits);
-		}
-	} else {
-		folio = NULL;
+void swap_cache_update_ra(struct folio *folio, struct vm_area_struct *vma,
+		unsigned long addr)
+{
+	bool vma_ra = swap_use_vma_readahead();
+	bool readahead;
+
+	/*
+	 * At the moment, we don't support PG_readahead for anon THP
+	 * so let's bail out rather than confusing the readahead stat.
+	 */
+	if (unlikely(folio_test_large(folio)))
+		return;
+
+	readahead = folio_test_clear_readahead(folio);
+	if (vma && vma_ra) {
+		unsigned long ra_val;
+		int win, hits;
+
+		ra_val = GET_SWAP_RA_VAL(vma);
+		win = SWAP_RA_WIN(ra_val);
+		hits = SWAP_RA_HITS(ra_val);
+		if (readahead)
+			hits = min_t(int, hits + 1, SWAP_RA_HITS_MAX);
+		atomic_long_set(&vma->swap_readahead_info,
+				SWAP_RA_VAL(addr, win, hits));
 	}

-	return folio;
+	if (readahead) {
+		count_vm_event(SWAP_RA_HIT);
+		if (!vma || !vma_ra)
+			atomic_inc(&swapin_readahead_hits);
+	}
 }

 /**
@@ -485,6 +485,11 @@ struct folio *swap_cache_alloc_or_get(swp_entry_t entry, gfp_t gfp_mask,
 	if (!si)
 		goto out_no_device;

+	/* First do a racy check if cache is already loaded. */
+	swapcache = swap_cache_try_get(entry);
+	if (swapcache)
+		goto out_no_alloc;
+
 	/* We are very likely the first user, alloc and try to add to the swapcache. */
 	folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0, mpol, ilx,
 					numa_node_id());
@@ -614,7 +619,8 @@ static unsigned long swapin_nr_pages(unsigned long offset)
  * are fairly likely to have been swapped out from the same node.
  */
 struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
-		struct mempolicy *mpol, pgoff_t ilx)
+		struct mempolicy *mpol, pgoff_t ilx,
+		bool *folio_allocated)
 {
 	struct folio *folio;
 	unsigned long entry_offset = swp_offset(entry);
@@ -644,6 +650,10 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		folio = swap_cache_alloc_or_get(
 				swp_entry(swp_type(entry), offset),
 				gfp_mask, mpol, ilx, &page_allocated);
+		if (offset == entry_offset) {
+			*folio_allocated = page_allocated;
+			folio_allocated = NULL;
+		}
 		if (!folio)
 			continue;
 		if (page_allocated) {
@@ -666,6 +676,8 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		zswap_folio_swapin(folio);
 		swap_read_folio(folio, false, NULL);
 	}
+	if (folio_allocated)
+		*folio_allocated = page_allocated;
 	return folio;
 }

@@ -779,7 +791,8 @@ static void swap_ra_info(struct vm_fault *vmf,
  *
  */
 static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
-		struct mempolicy *mpol, pgoff_t targ_ilx, struct vm_fault *vmf)
+		struct mempolicy *mpol, pgoff_t targ_ilx,
+		struct vm_fault *vmf, bool *folio_allocated)
 {
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
@@ -818,6 +831,10 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 		pte = NULL;
 		folio = swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx,
 				&page_allocated);
+		if (i == ra_info.offset) {
+			*folio_allocated = page_allocated;
+			folio_allocated = NULL;
+		}
 		if (!folio)
 			continue;
 		if (page_allocated) {
@@ -842,6 +859,8 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 		zswap_folio_swapin(folio);
 		swap_read_folio(folio, false, NULL);
 	}
+	if (folio_allocated)
+		*folio_allocated = page_allocated;
 	return folio;
 }

@@ -854,20 +873,21 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
  * Returns the folio for entry after it is read in.
  */
 struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
-		struct vm_fault *vmf)
+		struct vm_fault *vmf, bool *folio_allocated)
 {
 	struct mempolicy *mpol;
 	struct folio *folio;
-	bool page_allocated;
 	pgoff_t ilx;

 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
 	folio = swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx,
-					&page_allocated);
+					folio_allocated);
 	mpol_cond_put(mpol);

-	if (page_allocated)
+	if (*folio_allocated)
 		swap_read_folio(folio, true, NULL);
+	else if (folio)
+		swap_cache_update_ra(folio, vmf->vma, vmf->address);

 	return folio;
 }
@@ -885,18 +905,22 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
  * or vma-based(ie, virtual address based on faulty address) readahead.
  */
 struct folio *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
-		struct vm_fault *vmf)
+		struct vm_fault *vmf, bool *folio_allocated)
 {
 	struct mempolicy *mpol;
-	pgoff_t ilx;
 	struct folio *folio;
+	bool allocated;
+	pgoff_t ilx;

 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
 	folio = swap_use_vma_readahead() ?
-		swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) :
-		swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+		swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf, &allocated) :
+		swap_cluster_readahead(entry, gfp_mask, mpol, ilx, &allocated);
 	mpol_cond_put(mpol);

+	if (!*folio_allocated && folio)
+		swap_cache_update_ra(folio, vmf->vma, vmf->address);
+
 	return folio;
 }

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 8225091d42b6..ddcf2ff91c39 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1455,6 +1455,15 @@ int __swap_count(swp_entry_t entry)
 	return swap_count(si->swap_map[offset]);
 }

+bool __swap_has_cache(swp_entry_t entry)
+{
+	pgoff_t offset = swp_offset(entry);
+	struct swap_info_struct *si = swp_swap_info(entry);
+	unsigned char count = READ_ONCE(si->swap_map[offset]);
+
+	return swap_count(count) && (count & SWAP_HAS_CACHE);
+}
+
 /*
  * How many references to @entry are currently swapped out?
  * This does not give an exact answer when swap count is continued,
@@ -1862,10 +1871,18 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	struct folio *folio;
 	unsigned long offset;
 	unsigned char swp_count;
+	bool folio_allocated;
 	swp_entry_t entry;
 	int ret;
 	pte_t ptent;

+	struct vm_fault vmf = {
+		.vma = vma,
+		.address = addr,
+		.real_address = addr,
+		.pmd = pmd,
+	};
+
 	if (!pte++) {
 		pte = pte_offset_map(pmd, addr);
 		if (!pte)
@@ -1884,19 +1901,8 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		offset = swp_offset(entry);
 		pte_unmap(pte);
 		pte = NULL;
-
-		folio = swap_cache_get_folio(entry, vma, addr);
-		if (!folio) {
-			struct vm_fault vmf = {
-				.vma = vma,
-				.address = addr,
-				.real_address = addr,
-				.pmd = pmd,
-			};
-
-			folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-						&vmf);
-		}
+		folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
+					&vmf, &folio_allocated);
 		if (!folio) {
 			swp_count = READ_ONCE(si->swap_map[offset]);
 			if (swap_count(swp_count) == 0 || swp_count == SWAP_MAP_BAD)
-- 
2.43.0

From nobody Mon Feb 9 07:19:27 2026
From: Kairui Song
To: linux-mm@kvack.org
Cc: "Huang, Ying", Chris Li, Minchan Kim, Barry Song, Ryan Roberts,
 Yu Zhao, SeongJae Park, David Hildenbrand, Yosry Ahmed,
 Johannes Weiner, Matthew Wilcox, Nhat Pham, Chengming Zhou,
 Andrew Morton, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 10/10] mm/swap: optimize synchronous swapin
Date: Wed, 27 Mar 2024 02:50:32 +0800
Message-ID: <20240326185032.72159-11-ryncsn@gmail.com>
In-Reply-To: <20240326185032.72159-1-ryncsn@gmail.com>
References: <20240326185032.72159-1-ryncsn@gmail.com>

Interestingly, the major performance overhead of synchronous swapin is
actually the workingset node update. Synchronous swapin keeps adding
single folios into an xa_node, so the node is no longer a shadow node
and has to be removed from shadow_nodes; the folio is then removed again
very shortly after, which makes the node a shadow node once more, so it
has to be added back to shadow_nodes.

Mark a synchronous swapin folio with a special bit in the swap entry
embedded in folio->swap, as we still have some usable bits there. Skip
the workingset node update on insertion of such a folio, because it will
be removed very quickly, and its removal will trigger the update,
keeping the workingset info eventually consistent.
Test result of sequential swapin/out of 30G zero page on ZRAM:

               Before (us)  After (us)
Swapout:       33853883     33886008
Swapin:        38336519     32465441 (+15.4%)
Swapout (THP):  6814619      6899938
Swapin (THP):  38383367     33193479 (+13.6%)

Signed-off-by: Kairui Song
---
 include/linux/swapops.h |  5 +++-
 mm/filemap.c            | 16 +++++++++---
 mm/memory.c             | 34 ++++++++++++++----------
 mm/swap.h               | 15 +++++++++++
 mm/swap_state.c         | 57 ++++++++++++++++++++++++-----------------
 mm/vmscan.c             |  6 +++++
 mm/workingset.c         |  2 +-
 7 files changed, 92 insertions(+), 43 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 48b700ba1d18..ebc0c3e4668d 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -25,7 +25,10 @@
  * swp_entry_t's are *never* stored anywhere in their arch-dependent format.
  */
 #define SWP_TYPE_SHIFT	(BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT)
-#define SWP_OFFSET_MASK	((1UL << SWP_TYPE_SHIFT) - 1)
+#define SWP_CACHE_FLAG_BITS	1
+#define SWP_CACHE_SYNCHRONOUS	BIT(SWP_TYPE_SHIFT - 1)
+#define SWP_OFFSET_BITS	(SWP_TYPE_SHIFT - SWP_CACHE_FLAG_BITS)
+#define SWP_OFFSET_MASK	(BIT(SWP_OFFSET_BITS) - 1)
 
 /*
  * Definitions only for PFN swap entries (see is_pfn_swap_entry()).  To
diff --git a/mm/filemap.c b/mm/filemap.c
index 5e8e3fd26b8d..ac24cc65d1da 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -923,12 +923,20 @@ int __filemap_add_swapcache(struct address_space *mapping, struct folio *folio,
 			    pgoff_t index, gfp_t gfp, void **shadowp)
 {
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
+	bool synchronous = swap_cache_test_synchronous(folio);
 	long nr;
 	int ret;
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_swapcache(folio), folio);
-	mapping_set_update(&xas, mapping);
+
+	/*
+	 * Skip node update for synchronous folio insertion, it will be
+	 * updated on folio deletion very soon, avoid repeated LRU locking.
+	 */
+	if (!synchronous)
+		xas_set_update(&xas, workingset_update_node);
+	xas_set_lru(&xas, &shadow_nodes);
 
 	nr = folio_nr_pages(folio);
 	folio_ref_add(folio, nr);
@@ -936,8 +944,10 @@ int __filemap_add_swapcache(struct address_space *mapping, struct folio *folio,
 	ret = __filemap_lock_store(&xas, folio, index, gfp, shadowp);
 	if (likely(!ret)) {
 		mapping->nrpages += nr;
-		__node_stat_mod_folio(folio, NR_FILE_PAGES, nr);
-		__lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr);
+		if (!synchronous) {
+			__node_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+			__lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr);
+		}
 		xas_unlock_irq(&xas);
 	} else {
 		folio_put_refs(folio, nr);
diff --git a/mm/memory.c b/mm/memory.c
index 774a912eb46d..bb40202b4f29 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3933,6 +3933,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
 	bool folio_allocated = false;
+	bool synchronous_io = false;
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
@@ -4032,18 +4033,19 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (ret & VM_FAULT_RETRY)
 		goto out_release;
 
-	if (swapcache) {
-		/*
-		 * Make sure folio_free_swap() or swapoff did not release the
-		 * swapcache from under us. The page pin, and pte_same test
-		 * below, are not enough to exclude that. Even if it is still
-		 * swapcache, we need to check that the page's swap has not
-		 * changed.
-		 */
-		if (unlikely(!folio_test_swapcache(folio) ||
-			     page_swap_entry(page).val != entry.val))
-			goto out_page;
+	/*
+	 * Make sure folio_free_swap() or swapoff did not release the
+	 * swapcache from under us. The page pin, and pte_same test
+	 * below, are not enough to exclude that. Even if it is still
+	 * swapcache, we need to check that the page's swap has not
+	 * changed.
+	 */
+	if (unlikely(!folio_test_swapcache(folio) ||
+		     (page_swap_entry(page).val & ~SWP_CACHE_SYNCHRONOUS) != entry.val))
+		goto out_page;
 
+	synchronous_io = swap_cache_test_synchronous(folio);
+	if (!synchronous_io) {
 		/*
 		 * KSM sometimes has to copy on read faults, for example, if
 		 * page->index of !PageKSM() pages would be nonlinear inside the
@@ -4105,9 +4107,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 */
 	if (!folio_test_ksm(folio)) {
 		exclusive = pte_swp_exclusive(vmf->orig_pte);
-		if (folio != swapcache) {
+		if (synchronous_io || folio != swapcache) {
 			/*
-			 * We have a fresh page that is not exposed to the
+			 * We have a fresh page that is not sharable through the
 			 * swapcache -> certainly exclusive.
 			 */
 			exclusive = true;
@@ -4148,7 +4150,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * yet.
 	 */
 	swap_free(entry);
-	if (should_try_to_free_swap(folio, vma, vmf->flags))
+	if (synchronous_io)
+		delete_from_swap_cache(folio);
+	else if (should_try_to_free_swap(folio, vma, vmf->flags))
 		folio_free_swap(folio);
 
 	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
@@ -4223,6 +4227,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 out_nomap:
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
+	if (synchronous_io)
+		delete_from_swap_cache(folio);
 out_page:
 	folio_unlock(folio);
 out_release:
diff --git a/mm/swap.h b/mm/swap.h
index bd872b157950..9d106eebddbd 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -31,6 +31,21 @@ extern struct address_space *swapper_spaces[];
 	(&swapper_spaces[swp_type(entry)][swp_offset(entry) \
 		>> SWAP_ADDRESS_SPACE_SHIFT])
 
+static inline void swap_cache_mark_synchronous(struct folio *folio)
+{
+	folio->swap.val |= SWP_CACHE_SYNCHRONOUS;
+}
+
+static inline bool swap_cache_test_synchronous(struct folio *folio)
+{
+	return folio->swap.val & SWP_CACHE_SYNCHRONOUS;
+}
+
+static inline void swap_cache_clear_synchronous(struct folio *folio)
+{
+	folio->swap.val &= ~SWP_CACHE_SYNCHRONOUS;
+}
+
 void show_swap_cache_info(void);
 bool add_to_swap(struct folio *folio);
 void *get_shadow_from_swap_cache(swp_entry_t entry);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index cf178dd1131a..b0b1b5391ac1 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -86,7 +86,7 @@ void *get_shadow_from_swap_cache(swp_entry_t entry)
  * but sets SwapCache flag and private instead of mapping and index.
  */
 static int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
-			     gfp_t gfp, void **shadowp)
+			     gfp_t gfp, bool synchronous, void **shadowp)
 {
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
@@ -98,11 +98,12 @@ static int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 
 	folio_set_swapcache(folio);
 	folio->swap = entry;
-
+	if (synchronous)
+		swap_cache_mark_synchronous(folio);
 	ret = __filemap_add_swapcache(address_space, folio, idx, gfp, shadowp);
 	if (ret) {
-		folio_clear_swapcache(folio);
 		folio->swap.val = 0;
+		folio_clear_swapcache(folio);
 	}
 
 	return ret;
@@ -129,11 +130,13 @@ void __delete_from_swap_cache(struct folio *folio,
 	xas_set_order(&xas, idx, folio_order(folio));
 	xas_store(&xas, shadow);
 
-	folio->swap.val = 0;
 	folio_clear_swapcache(folio);
 	address_space->nrpages -= nr;
-	__node_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
-	__lruvec_stat_mod_folio(folio, NR_SWAPCACHE, -nr);
+	if (!swap_cache_test_synchronous(folio)) {
+		__node_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
+		__lruvec_stat_mod_folio(folio, NR_SWAPCACHE, -nr);
+	}
+	folio->swap.val = 0;
 }
 
 /**
@@ -393,7 +396,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
  * else or hitting OOM.
  */
 static struct folio *swap_cache_add_or_get(struct folio *folio,
-		swp_entry_t entry, gfp_t gfp_mask)
+		swp_entry_t entry, gfp_t gfp_mask, bool synchronous)
 {
 	int ret = 0;
 	void *shadow = NULL;
@@ -403,7 +406,7 @@ static struct folio *swap_cache_add_or_get(struct folio *folio,
 	if (folio) {
 		__folio_set_locked(folio);
 		__folio_set_swapbacked(folio);
-		ret = add_to_swap_cache(folio, entry, gfp_mask, &shadow);
+		ret = add_to_swap_cache(folio, entry, gfp_mask, synchronous, &shadow);
 		if (ret)
 			__folio_clear_locked(folio);
 	}
@@ -460,7 +463,7 @@ int swap_cache_add_wait(struct folio *folio, swp_entry_t entry, gfp_t gfp)
 	struct folio *wait_folio;
 
 	for (;;) {
-		ret = add_to_swap_cache(folio, entry, gfp, NULL);
+		ret = add_to_swap_cache(folio, entry, gfp, false, NULL);
 		if (ret != -EEXIST)
 			break;
 		wait_folio = filemap_get_folio(swap_address_space(entry),
@@ -493,7 +496,7 @@ struct folio *swap_cache_alloc_or_get(swp_entry_t entry, gfp_t gfp_mask,
 	/* We are very likely the first user, alloc and try add to the swapcache. */
 	folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0, mpol, ilx,
 						 numa_node_id());
-	swapcache = swap_cache_add_or_get(folio, entry, gfp_mask);
+	swapcache = swap_cache_add_or_get(folio, entry, gfp_mask, false);
 	if (swapcache != folio) {
 		folio_put(folio);
 		goto out_no_alloc;
@@ -875,21 +878,27 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
 			    struct vm_fault *vmf, bool *folio_allocated)
 {
-	struct mempolicy *mpol;
-	struct folio *folio;
-	pgoff_t ilx;
-
-	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-	folio = swap_cache_alloc_or_get(entry, gfp_mask, mpol, ilx,
-					folio_allocated);
-	mpol_cond_put(mpol);
-
-	if (*folio_allocated)
+	struct folio *folio = NULL, *swapcache;
+
+	/* First do a racy check if cache is already loaded. */
+	swapcache = swap_cache_try_get(entry);
+	if (unlikely(swapcache))
+		goto out;
+	folio = vma_alloc_folio(gfp_mask, 0, vmf->vma, vmf->address, false);
+	swapcache = swap_cache_add_or_get(folio, entry, gfp_mask, true);
+	if (!swapcache)
+		goto out_nocache;
+	if (swapcache == folio) {
 		swap_read_folio(folio, true, NULL);
-	else if (folio)
-		swap_cache_update_ra(folio, vmf->vma, vmf->address);
-
-	return folio;
+		*folio_allocated = true;
+		return folio;
+	}
+out:
+	swap_cache_update_ra(swapcache, vmf->vma, vmf->address);
+out_nocache:
+	if (folio)
+		folio_put(folio);
+	*folio_allocated = false;
+	return swapcache;
 }
 
 /**
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c3db39393428..e71b049fee01 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1228,6 +1228,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 				if (!add_to_swap(folio))
 					goto activate_locked_split;
 			}
+		} else if (swap_cache_test_synchronous(folio)) {
+			/*
+			 * We see a folio being swapped in but not activated,
+			 * either due to a missing shadow or because it lived
+			 * too short; activate it.
+			 */
+			goto activate_locked;
 		}
 	} else if (folio_test_swapbacked(folio) &&
 		   folio_test_large(folio)) {
diff --git a/mm/workingset.c b/mm/workingset.c
index f2a0ecaf708d..83a0b409be0f 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -753,7 +753,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
 	 */
 	if (WARN_ON_ONCE(!node->nr_values))
 		goto out_invalid;
-	if (WARN_ON_ONCE(node->count != node->nr_values))
+	if (WARN_ON_ONCE(node->count != node->nr_values && mapping->host != NULL))
 		goto out_invalid;
 	xa_delete_node(node, workingset_update_node);
 	__inc_lruvec_kmem_state(node, WORKINGSET_NODERECLAIM);
-- 
2.43.0