From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 01/24] mm/swap: fix a potential undefined behavior issue
Date: Mon, 20 Nov 2023 03:47:17 +0800
Message-ID: <20231119194740.94101-2-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>
From: Kairui Song

When folio is NULL, taking the address of one of its struct members is
undefined behavior: the UB comes from applying the -> operator to a
pointer that does not point to any object. In practice this is unlikely
to cause a real problem, but it is still better to fix it. It also makes
the code less error-prone: when folio is NULL, page is now NULL as well,
rather than holding a meaningless offset value.

Signed-off-by: Kairui Song
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index e27e2e5beb3f..70ffa867b1be 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3861,7 +3861,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
             /* skip swapcache */
             folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
                         vma, vmf->address, false);
-            page = &folio->page;
             if (folio) {
                 __folio_set_locked(folio);
                 __folio_set_swapbacked(folio);
@@ -3879,6 +3878,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
                     workingset_refault(folio, shadow);

                 folio_add_lru(folio);
+                page = &folio->page;

                 /* To provide entry to swap_readpage() */
                 folio->swap = entry;
--
2.42.0
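For readers unfamiliar with this class of issue, here is a minimal,
self-contained sketch of the pattern the patch above fixes. It is plain
userspace C with an invented struct layout, not kernel code:

    #include <stdio.h>

    struct page  { int flags; };
    struct folio { struct page page; };   /* simplified stand-in, for illustration only */

    int main(void)
    {
            struct folio *folio = NULL;    /* pretend the allocation failed */
            struct page *page = NULL;

            /*
             * Buggy ordering: evaluating &folio->page applies -> to a null
             * pointer, which is undefined behavior even though the resulting
             * address is never dereferenced; page would hold whatever the
             * member-offset arithmetic produces.
             *
             *     page = &folio->page;
             */

            /* Fixed ordering: take the member address only after the NULL check. */
            if (folio)
                    page = &folio->page;

            printf("page = %p\n", (void *)page);   /* stays NULL when folio is NULL */
            return 0;
    }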
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 02/24] mm/swapfile.c: add back some comment
Date: Mon, 20 Nov 2023 03:47:18 +0800
Message-ID: <20231119194740.94101-3-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

Some useful comments were dropped in commit b56a2d8af914 ("mm: rid
swapoff of quadratic complexity"); add them back.

Signed-off-by: Kairui Song
---
 mm/swapfile.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 4bc70f459164..756104ebd585 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1879,6 +1879,17 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
             folio = page_folio(page);
         }
         if (!folio) {
+            /*
+             * The entry could have been freed, and will not
+             * be reused since swapoff() already disabled
+             * allocation from here, or alloc_page() failed.
+             *
+             * We don't hold lock here, so the swap entry could be
+             * SWAP_MAP_BAD (when the cluster is discarding).
+             * Instead of fail out, We can just skip the swap
+             * entry because swapoff will wait for discarding
+             * finish anyway.
+             */
             swp_count = READ_ONCE(si->swap_map[offset]);
             if (swp_count == 0 || swp_count == SWAP_MAP_BAD)
                 continue;
--
2.42.0

From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 03/24] mm/swap: move no readahead swapin code to a stand alone helper
Date: Mon, 20 Nov 2023 03:47:19 +0800
Message-ID: <20231119194740.94101-4-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

No functional change; simply move the routine into a standalone helper
so it can be reused later. The error-path handling is copied from the
"out_page" label to keep the code change minimal and easier to review.

Signed-off-by: Kairui Song
---
 mm/memory.c     | 33 +++++----------------------------
 mm/swap.h       |  2 ++
 mm/swap_state.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 55 insertions(+), 28 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 70ffa867b1be..fba4a5229163 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3794,7 +3794,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
     swp_entry_t entry;
     pte_t pte;
     vm_fault_t ret = 0;
-    void *shadow = NULL;

     if (!pte_unmap_same(vmf))
         goto out;
@@ -3858,33 +3857,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
     if (!folio) {
         if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
             __swap_count(entry) == 1) {
-            /* skip swapcache */
-            folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
-                        vma, vmf->address, false);
-            if (folio) {
-                __folio_set_locked(folio);
-                __folio_set_swapbacked(folio);
-
-                if (mem_cgroup_swapin_charge_folio(folio,
-                            vma->vm_mm, GFP_KERNEL,
-                            entry)) {
-                    ret = VM_FAULT_OOM;
-                    goto out_page;
-                }
-                mem_cgroup_swapin_uncharge_swap(entry);
-
-                shadow = get_shadow_from_swap_cache(entry);
-                if (shadow)
-                    workingset_refault(folio, shadow);
-
-                folio_add_lru(folio);
-                page = &folio->page;
-
-                /* To provide entry to swap_readpage() */
-                folio->swap = entry;
-                swap_readpage(page, true, NULL);
-                folio->private = NULL;
-            }
+            /* skip swapcache and readahead */
+            page = swapin_no_readahead(entry, GFP_HIGHUSER_MOVABLE,
+                           vmf);
+            if (page)
+                folio = page_folio(page);
         } else {
             page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
                         vmf);
diff --git a/mm/swap.h b/mm/swap.h
index 73c332ee4d91..ea4be4791394 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -56,6 +56,8 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
                     struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
                   struct vm_fault *vmf);
+struct page *swapin_no_readahead(swp_entry_t entry, gfp_t flag,
+                 struct vm_fault *vmf);

 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 85d9e5806a6a..ac4fa404eaa7 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -853,6 +853,54 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
     return page;
 }

+/**
+ * swapin_no_readahead - swap in pages skipping swap cache and readahead
+ * @entry: swap entry of this memory
+ * @gfp_mask: memory allocation flags
+ * @vmf: fault information
+ *
+ * Returns the struct page for entry and addr after the swap entry is read
+ * in.
+ */
+struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
+                 struct vm_fault *vmf)
+{
+    struct vm_area_struct *vma = vmf->vma;
+    struct page *page = NULL;
+    struct folio *folio;
+    void *shadow = NULL;
+
+    folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
+                vma, vmf->address, false);
+    if (folio) {
+        __folio_set_locked(folio);
+        __folio_set_swapbacked(folio);
+
+        if (mem_cgroup_swapin_charge_folio(folio,
+                    vma->vm_mm, GFP_KERNEL,
+                    entry)) {
+            folio_unlock(folio);
+            folio_put(folio);
+            return NULL;
+        }
+        mem_cgroup_swapin_uncharge_swap(entry);
+
+        shadow = get_shadow_from_swap_cache(entry);
+        if (shadow)
+            workingset_refault(folio, shadow);
+
+        folio_add_lru(folio);
+
+        /* To provide entry to swap_readpage() */
+        folio->swap = entry;
+        page = &folio->page;
+        swap_readpage(page, true, NULL);
+        folio->private = NULL;
+    }
+
+    return page;
+}
+
 /**
  * swapin_readahead - swap in pages in hope we need them soon
  * @entry: swap entry of this memory
--
2.42.0
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 04/24] mm/swap: avoid setting page lock bit and doing extra unlock check
Date: Mon, 20 Nov 2023 03:47:20 +0800
Message-ID: <20231119194740.94101-5-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

When swapping in a page, mem_cgroup_swapin_charge_folio() is called for
the newly allocated folio. Nothing else is referencing the folio at that
point, so there is no need to set the lock bit before charging; this
avoids the unlock on the error path.

Signed-off-by: Kairui Song
---
 mm/swap_state.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index ac4fa404eaa7..45dd8b7c195d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -458,6 +458,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
                     mpol, ilx, numa_node_id());
     if (!folio)
         goto fail_put_swap;
+    if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
+        goto fail_put_folio;

     /*
      * Swap entry may have been freed since our caller observed it.
@@ -483,13 +485,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
     /*
      * The swap entry is ours to swap in. Prepare the new page.
      */
-    __folio_set_locked(folio);
     __folio_set_swapbacked(folio);

-    if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
-        goto fail_unlock;
-
     /* May fail (-ENOMEM) if XArray node allocation failed. */
     if (add_to_swap_cache(folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
         goto fail_unlock;
@@ -510,6 +508,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 fail_unlock:
     put_swap_folio(folio, entry);
     folio_unlock(folio);
+fail_put_folio:
     folio_put(folio);
 fail_put_swap:
     put_swap_device(si);
@@ -873,16 +872,15 @@ struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
     folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
                 vma, vmf->address, false);
     if (folio) {
-        __folio_set_locked(folio);
-        __folio_set_swapbacked(folio);
-
-        if (mem_cgroup_swapin_charge_folio(folio,
-                    vma->vm_mm, GFP_KERNEL,
-                    entry)) {
-            folio_unlock(folio);
+        if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
+                           GFP_KERNEL, entry)) {
             folio_put(folio);
             return NULL;
         }
+
+        __folio_set_locked(folio);
+        __folio_set_swapbacked(folio);
+
         mem_cgroup_swapin_uncharge_swap(entry);

         shadow = get_shadow_from_swap_cache(entry);
--
2.42.0
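The reordering above follows a common pattern: while an object is still
private to its creator, do the step that can fail before locking or
publishing it, so the error path is just a plain put. A minimal userspace
sketch of that idea (invented types and helpers, not kernel code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct obj { bool locked; };            /* toy stand-in for a freshly allocated folio */

    static bool charge(struct obj *o)       /* toy stand-in for the cgroup charge step */
    {
            (void)o;
            return false;                   /* pretend the charge fails */
    }

    static struct obj *create_obj(void)
    {
            struct obj *o = calloc(1, sizeof(*o));

            if (!o)
                    return NULL;
            /*
             * The object is still private here, so the fallible step can run
             * before it is locked; on failure the cleanup is a plain free(),
             * with no unlock needed.
             */
            if (!charge(o)) {
                    free(o);
                    return NULL;
            }
            o->locked = true;               /* only "lock" after the charge succeeded */
            return o;
    }

    int main(void)
    {
            printf("create_obj() -> %p\n", (void *)create_obj());
            return 0;
    }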
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 05/24] mm/swap: move readahead policy checking into swapin_readahead
Date: Mon, 20 Nov 2023 03:47:21 +0800
Message-ID: <20231119194740.94101-6-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

This makes swapin_readahead() the main entry point for swapping in
pages, preparing for optimizations in later commits. It also lets
swapoff make use of the readahead check based on the entry.

Swapping off a 10G ZRAM (lzo-rle) is faster:

Before:
time swapoff /dev/zram0
real    0m12.337s
user    0m0.001s
sys     0m12.329s

After:
time swapoff /dev/zram0
real    0m9.728s
user    0m0.001s
sys     0m9.719s

What's more, because swapoff now also uses the no-readahead swapin
helper, this fixes a bug in the no-readahead case (e.g. ZRAM): when a
process that previously swapped out some memory is moved to a new
cgroup and the original cgroup is dead, swapping off the swap device
accounts the swapped-in pages to the process doing the swapoff instead
of to the new cgroup the process was moved to.

This can be easily reproduced by:
- Set up a ramdisk (e.g. ZRAM) swap.
- Create memory cgroups A, B and C.
- Spawn process P1 in cgroup A and make it swap out some pages.
- Move process P1 to memory cgroup B.
- Destroy cgroup A.
- Do a swapoff in cgroup C.
- The swapped-in pages are accounted to cgroup C.

This patch makes the swapped-in pages accounted to cgroup B instead.
The same bug exists for the readahead path too; it will be fixed in
later commits.
Signed-off-by: Kairui Song
---
 mm/memory.c     | 22 +++++++---------------
 mm/swap.h       |  6 ++----
 mm/swap_state.c | 33 ++++++++++++++++++++++++++-------
 mm/swapfile.c   |  2 +-
 4 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fba4a5229163..f4237a2e3b93 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3792,6 +3792,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
     rmap_t rmap_flags = RMAP_NONE;
     bool exclusive = false;
     swp_entry_t entry;
+    bool swapcached;
     pte_t pte;
     vm_fault_t ret = 0;

@@ -3855,22 +3856,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
     swapcache = folio;

     if (!folio) {
-        if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
-            __swap_count(entry) == 1) {
-            /* skip swapcache and readahead */
-            page = swapin_no_readahead(entry, GFP_HIGHUSER_MOVABLE,
-                           vmf);
-            if (page)
-                folio = page_folio(page);
+        page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
+                    vmf, &swapcached);
+        if (page) {
+            folio = page_folio(page);
+            if (swapcached)
+                swapcache = folio;
         } else {
-            page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-                        vmf);
-            if (page)
-                folio = page_folio(page);
-            swapcache = folio;
-        }
-
-        if (!folio) {
             /*
              * Back out if somebody else faulted in this pte
              * while we released the pte lock.
diff --git a/mm/swap.h b/mm/swap.h
index ea4be4791394..f82d43d7b52a 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -55,9 +55,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
                     struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
-                  struct vm_fault *vmf);
-struct page *swapin_no_readahead(swp_entry_t entry, gfp_t flag,
-                 struct vm_fault *vmf);
+                  struct vm_fault *vmf, bool *swapcached);

 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -89,7 +87,7 @@ static inline struct page *swap_cluster_readahead(swp_entry_t entry,
 }

 static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
-            struct vm_fault *vmf)
+            struct vm_fault *vmf, bool *swapcached)
 {
     return NULL;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 45dd8b7c195d..fd0047ae324e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -316,6 +316,11 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
     release_pages(pages, nr);
 }

+static inline bool swap_use_no_readahead(struct swap_info_struct *si, swp_entry_t entry)
+{
+    return data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1;
+}
+
 static inline bool swap_use_vma_readahead(void)
 {
     return READ_ONCE(enable_vma_readahead) && !atomic_read(&nr_rotate_swap);
@@ -861,8 +866,8 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
  * Returns the struct page for entry and addr after the swap entry is read
  * in.
  */
-struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
-                 struct vm_fault *vmf)
+static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
+                    struct vm_fault *vmf)
 {
     struct vm_area_struct *vma = vmf->vma;
     struct page *page = NULL;
@@ -904,6 +909,8 @@ struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
  * @entry: swap entry of this memory
  * @gfp_mask: memory allocation flags
  * @vmf: fault information
+ * @swapcached: pointer to a bool used as indicator if the
+ *              page is swapped in through swapcache.
  *
  * Returns the struct page for entry and addr, after queueing swapin.
 *
@@ -912,17 +919,29 @@ struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
  * or vma-based(ie, virtual address based on faulty address) readahead.
  */
 struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
-                  struct vm_fault *vmf)
+                  struct vm_fault *vmf, bool *swapcached)
 {
     struct mempolicy *mpol;
-    pgoff_t ilx;
     struct page *page;
+    pgoff_t ilx;
+    bool cached;

     mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-    page = swap_use_vma_readahead() ?
-           swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) :
-           swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+    if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
+        page = swapin_no_readahead(entry, gfp_mask, vmf);
+        cached = false;
+    } else if (swap_use_vma_readahead()) {
+        page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
+        cached = true;
+    } else {
+        page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+        cached = true;
+    }
     mpol_cond_put(mpol);
+
+    if (swapcached)
+        *swapcached = cached;
+
     return page;
 }

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 756104ebd585..0142bfc71b81 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1874,7 +1874,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
         };

         page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-                    &vmf);
+                    &vmf, NULL);
         if (page)
             folio = page_folio(page);
     }
--
2.42.0
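As a usage note: the new swapcached argument is an optional out-parameter,
so callers that do not care whether the page went through the swap cache
(like the swapoff path above) simply pass NULL. A toy userspace sketch of
that calling convention (invented names, not kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    /* toy stand-in for the reworked swapin_readahead() contract */
    static int fake_swapin(int entry, bool *cached)
    {
            bool went_through_cache = (entry % 2 == 0);   /* arbitrary toy rule */

            if (cached)                    /* only report when the caller asked */
                    *cached = went_through_cache;
            return entry * 10;             /* stand-in for the returned page */
    }

    int main(void)
    {
            bool cached;

            /* Fault-path style: caller wants the swap-cache indication. */
            int page = fake_swapin(4, &cached);
            printf("page %d, cached=%d\n", page, cached);

            /* Swapoff style: caller passes NULL and ignores the indication. */
            page = fake_swapin(5, NULL);
            printf("page %d (cache state not requested)\n", page);
            return 0;
    }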
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 06/24] swap: rework swapin_no_readahead arguments
Date: Mon, 20 Nov 2023 03:47:22 +0800
Message-ID: <20231119194740.94101-7-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

Make it use alloc_pages_mpol() instead of vma_alloc_folio(), and accept
an mm_struct directly as an argument instead of taking a vmf. This makes
its arguments similar to swap_{cluster,vma}_readahead() so the code is
more consistent, and prepares for following commits that will skip the
vmf for certain swapin paths.

Signed-off-by: Kairui Song
---
 mm/swap_state.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index fd0047ae324e..ff6756f2e8e4 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -867,17 +867,17 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
  * in.
  */
 static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
-                    struct vm_fault *vmf)
+                    struct mempolicy *mpol, pgoff_t ilx,
+                    struct mm_struct *mm)
 {
-    struct vm_area_struct *vma = vmf->vma;
-    struct page *page = NULL;
     struct folio *folio;
+    struct page *page;
     void *shadow = NULL;

-    folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
-                vma, vmf->address, false);
+    page = alloc_pages_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
+    folio = (struct folio *)page;
     if (folio) {
-        if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
+        if (mem_cgroup_swapin_charge_folio(folio, mm,
                            GFP_KERNEL, entry)) {
             folio_put(folio);
             return NULL;
@@ -896,7 +896,6 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,

         /* To provide entry to swap_readpage() */
         folio->swap = entry;
-        page = &folio->page;
         swap_readpage(page, true, NULL);
         folio->private = NULL;
     }
@@ -928,7 +928,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,

     mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
     if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
-        page = swapin_no_readahead(entry, gfp_mask, vmf);
+        page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
         cached = false;
     } else if (swap_use_vma_readahead()) {
         page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
--
2.42.0
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 07/24] mm/swap: move swap_count to header to be shared
Date: Mon, 20 Nov 2023 03:47:23 +0800
Message-ID: <20231119194740.94101-8-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

No functional change; prepare for later commits.

Signed-off-by: Kairui Song
---
 mm/swap.h     | 5 +++++
 mm/swapfile.c | 5 -----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index f82d43d7b52a..a9a654af791e 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -61,6 +61,11 @@ static inline unsigned int folio_swap_flags(struct folio *folio)
 {
     return page_swap_info(&folio->page)->flags;
 }
+
+static inline unsigned char swap_count(unsigned char ent)
+{
+    return ent & ~SWAP_HAS_CACHE;  /* may include COUNT_CONTINUED flag */
+}
 #else /* CONFIG_SWAP */
 struct swap_iocb;
 static inline void swap_readpage(struct page *page, bool do_poll,
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 0142bfc71b81..a8ae472ed2b6 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -114,11 +114,6 @@ static struct swap_info_struct *swap_type_to_swap_info(int type)
     return READ_ONCE(swap_info[type]); /* rcu_dereference() */
 }

-static inline unsigned char swap_count(unsigned char ent)
-{
-    return ent & ~SWAP_HAS_CACHE;  /* may include COUNT_CONTINUED flag */
-}
-
 /* Reclaim the swap entry anyway if possible */
 #define TTRS_ANYWAY 0x1
 /*
--
2.42.0
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 08/24] mm/swap: check readahead policy per entry
Date: Mon, 20 Nov 2023 03:47:24 +0800
Message-ID: <20231119194740.94101-9-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

Currently, VMA readahead is globally disabled when any rotational disk
is used as a swap backend. So when multiple swap devices are enabled,
if a slower hard disk is set up as a low-priority fallback and a
high-performance SSD is used as the high-priority swap device, VMA
readahead is still disabled globally and the SSD swap device's
performance drops by a lot.

Check the readahead policy per entry to avoid this problem.
Signed-off-by: Kairui Song
---
 mm/swap_state.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index ff6756f2e8e4..fb78f7f18ed7 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -321,9 +321,9 @@ static inline bool swap_use_no_readahead(struct swap_info_struct *si, swp_entry_
     return data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1;
 }

-static inline bool swap_use_vma_readahead(void)
+static inline bool swap_use_vma_readahead(struct swap_info_struct *si)
 {
-    return READ_ONCE(enable_vma_readahead) && !atomic_read(&nr_rotate_swap);
+    return data_race(si->flags & SWP_SOLIDSTATE) && READ_ONCE(enable_vma_readahead);
 }

 /*
@@ -341,7 +341,7 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,

     folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
     if (!IS_ERR(folio)) {
-        bool vma_ra = swap_use_vma_readahead();
+        bool vma_ra = swap_use_vma_readahead(swp_swap_info(entry));
         bool readahead;

         /*
@@ -920,16 +920,18 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
 struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
                   struct vm_fault *vmf, bool *swapcached)
 {
+    struct swap_info_struct *si;
     struct mempolicy *mpol;
     struct page *page;
     pgoff_t ilx;
     bool cached;

+    si = swp_swap_info(entry);
     mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-    if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
+    if (swap_use_no_readahead(si, entry)) {
         page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
         cached = false;
-    } else if (swap_use_vma_readahead()) {
+    } else if (swap_use_vma_readahead(si)) {
         page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
         cached = true;
     } else {
--
2.42.0
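A toy model of the policy change described above, illustrating why a
global flag penalizes the fast device while a per-device (per-entry)
check does not (invented types and functions, not kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    struct swap_dev { const char *name; bool solid_state; };

    /* Old behaviour: one rotational device disables VMA readahead for all. */
    static bool vma_readahead_global(const struct swap_dev *devs, int n)
    {
            for (int i = 0; i < n; i++)
                    if (!devs[i].solid_state)
                            return false;
            return true;
    }

    /* New behaviour: decide based on the device backing the faulting entry. */
    static bool vma_readahead_per_entry(const struct swap_dev *dev)
    {
            return dev->solid_state;
    }

    int main(void)
    {
            struct swap_dev devs[] = {
                    { "ssd (high prio)",          true  },
                    { "hdd (low prio fallback)",  false },
            };

            printf("global policy, entry on SSD:    %d\n",
                   vma_readahead_global(devs, 2));        /* 0: disabled for SSD too */
            printf("per-entry policy, entry on SSD: %d\n",
                   vma_readahead_per_entry(&devs[0]));    /* 1: SSD keeps VMA readahead */
            printf("per-entry policy, entry on HDD: %d\n",
                   vma_readahead_per_entry(&devs[1]));    /* 0 */
            return 0;
    }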
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 09/24] mm/swap: inline __swap_count
Date: Mon, 20 Nov 2023 03:47:25 +0800
Message-ID: <20231119194740.94101-10-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

There is only one caller left in the swap subsystem now, where the
function can be inlined smoothly, avoiding the memory access and
function call overhead.
Signed-off-by: Kairui Song
---
 include/linux/swap.h | 6 ------
 mm/swap_state.c      | 6 +++---
 mm/swapfile.c        | 8 --------
 3 files changed, 3 insertions(+), 17 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 2401990d954d..64a37819a9b3 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -485,7 +485,6 @@ int swap_type_of(dev_t device, sector_t offset);
 int find_first_swap(dev_t *device);
 extern unsigned int count_swap_pages(int, int);
 extern sector_t swapdev_block(int, pgoff_t);
-extern int __swap_count(swp_entry_t entry);
 extern int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry);
 extern int swp_swapcount(swp_entry_t entry);
 extern struct swap_info_struct *page_swap_info(struct page *);
@@ -559,11 +558,6 @@ static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
 {
 }

-static inline int __swap_count(swp_entry_t entry)
-{
-    return 0;
-}
-
 static inline int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
 {
     return 0;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index fb78f7f18ed7..d87c20f9f7ec 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -316,9 +316,9 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
     release_pages(pages, nr);
 }

-static inline bool swap_use_no_readahead(struct swap_info_struct *si, swp_entry_t entry)
+static inline bool swap_use_no_readahead(struct swap_info_struct *si, pgoff_t offset)
 {
-    return data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1;
+    return data_race(si->flags & SWP_SYNCHRONOUS_IO) && swap_count(si->swap_map[offset]) == 1;
 }

 static inline bool swap_use_vma_readahead(struct swap_info_struct *si)
@@ -928,7 +928,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,

     si = swp_swap_info(entry);
     mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-    if (swap_use_no_readahead(si, entry)) {
+    if (swap_use_no_readahead(si, swp_offset(entry))) {
         page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
         cached = false;
     } else if (swap_use_vma_readahead(si)) {
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a8ae472ed2b6..e15a6c464a38 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1431,14 +1431,6 @@ void swapcache_free_entries(swp_entry_t *entries, int n)
     spin_unlock(&p->lock);
 }

-int __swap_count(swp_entry_t entry)
-{
-    struct swap_info_struct *si = swp_swap_info(entry);
-    pgoff_t offset = swp_offset(entry);
-
-    return swap_count(si->swap_map[offset]);
-}
-
 /*
  * How many references to @entry are currently swapped out?
  * This does not give an exact answer when swap count is continued,
--
2.42.0

From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 10/24] mm/swap: remove nr_rotate_swap and related code
Date: Mon, 20 Nov 2023 03:47:26 +0800
Message-ID: <20231119194740.94101-11-ryncsn@gmail.com>
<20231119194740.94101-1-ryncsn@gmail.com> References: <20231119194740.94101-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Kairui Song No longer needed after we switched to per entry swap readhead policy. Signed-off-by: Kairui Song --- include/linux/swap.h | 1 - mm/swapfile.c | 11 ----------- 2 files changed, 12 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 64a37819a9b3..cc83fb884757 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -454,7 +454,6 @@ extern void free_pages_and_swap_cache(struct encoded_pa= ge **, int); /* linux/mm/swapfile.c */ extern atomic_long_t nr_swap_pages; extern long total_swap_pages; -extern atomic_t nr_rotate_swap; extern bool has_usable_swap(void); =20 /* Swap 50% full? Release swapcache more aggressively.. */ diff --git a/mm/swapfile.c b/mm/swapfile.c index e15a6c464a38..01c3f53b6521 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -104,8 +104,6 @@ static DECLARE_WAIT_QUEUE_HEAD(proc_poll_wait); /* Activity counter to indicate that a swapon or swapoff has occurred */ static atomic_t proc_poll_event =3D ATOMIC_INIT(0); =20 -atomic_t nr_rotate_swap =3D ATOMIC_INIT(0); - static struct swap_info_struct *swap_type_to_swap_info(int type) { if (type >=3D MAX_SWAPFILES) @@ -2486,9 +2484,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, special= file) if (p->flags & SWP_CONTINUED) free_swap_count_continuations(p); =20 - if (!p->bdev || !bdev_nonrot(p->bdev)) - atomic_dec(&nr_rotate_swap); - mutex_lock(&swapon_mutex); spin_lock(&swap_lock); spin_lock(&p->lock); @@ -2990,7 +2985,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialf= ile, int, swap_flags) struct swap_cluster_info *cluster_info =3D NULL; struct page *page =3D NULL; struct inode *inode =3D NULL; - bool inced_nr_rotate_swap =3D false; =20 if (swap_flags & ~SWAP_FLAGS_VALID) return -EINVAL; @@ -3112,9 +3106,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialf= ile, int, swap_flags) cluster =3D per_cpu_ptr(p->percpu_cluster, cpu); cluster_set_null(&cluster->index); } - } else { - atomic_inc(&nr_rotate_swap); - inced_nr_rotate_swap =3D true; } =20 error =3D swap_cgroup_swapon(p->type, maxpages); @@ -3218,8 +3209,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialf= ile, int, swap_flags) spin_unlock(&swap_lock); vfree(swap_map); kvfree(cluster_info); - if (inced_nr_rotate_swap) - atomic_dec(&nr_rotate_swap); if (swap_file) filp_close(swap_file, NULL); out: --=20 2.42.0 From nobody Thu Dec 18 18:00:49 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1C94C072A2 for ; Sun, 19 Nov 2023 19:49:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229584AbjKSTtT (ORCPT ); Sun, 19 Nov 2023 14:49:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56170 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231728AbjKSTs6 (ORCPT ); Sun, 19 Nov 2023 14:48:58 -0500 Received: from mail-io1-xd32.google.com (mail-io1-xd32.google.com [IPv6:2607:f8b0:4864:20::d32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2EBCBD7F for ; Sun, 19 Nov 2023 11:48:38 -0800 (PST) Received: by mail-io1-xd32.google.com with SMTP id 
ca18e2360f4ac-7a93b7fedc8so175626939f.1 for ; Sun, 19 Nov 2023 11:48:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700423317; x=1701028117; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=Vy77DasHH4iZw3fxXV06AQqlBWBp94UgYk7CDc75jBU=; b=jmI9Qr/YCZXj/KwbpX3E7Cms6SpwqzungcU2z0+6qyeldu6v/wBxEalqyKFDN5PYIk FFmB3z+erBwNko1TXY+0rHl277kJYbHPxn0GlsnkhBkxj93F5i60QZJNTy47Fb7kU+zR 0UyLC9wlR8pHX8o7kfEpGC7KKrwSLneu4PMriwcWZSSOd30Ib5MXo7kaCbyKDEaOsP3u x8jIf2arU/JqbxIYsAzCEW6c1paImgaHXM/itra9VtgHq4rt2Zcs9KLFqbiDdqafT/1H o+oi6hlqhGxPH7tCXBkRpu+HN5q1se2JOa/WCd9ZKs6XWa4xH54ujElKbKZ6nqG8TIDw DUKw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700423317; x=1701028117; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=Vy77DasHH4iZw3fxXV06AQqlBWBp94UgYk7CDc75jBU=; b=IqBH1MemFrg7sB8fpU0GgP1DFuUjwu69KfuXM0NEHUNGVCj2qWX/PtJBACYfH3bEB8 VBD2Oeuhf+19kMWLONrUKDajP1AW0VCzzVgFgpBJO8uBaIchKIdL6oxttoOty2nMqdVm 6Pop5HAQABvbq2jm9j3e38u1VvSFX/wi7ofM762EI1o3CgE/KJSSYV4jckpORSMo4955 oGcuTbsGPfzKutEpBStS2UdBum6pb9E0rcsVZc1gGRCk4wrcBPrs8xOsOYwBA58JZ+Qc L2mN9UprzOplDSEMgvx8T9bYTlvrHu+EI5mpNXCuHd+aZ9+LYLlbkokNDOH9inRht8rv /91w== X-Gm-Message-State: AOJu0YzOc544/ZnCCQrsjI2ErhGhKT7+0pi+9tkLs6wbFfE8o/ern/5o n/ilic4q9GOZaGQhTG0zfto= X-Google-Smtp-Source: AGHT+IECybsdgtqaEHEB6jhiCW9wgMllnD9l2cqLKV8BEgk8bfpyCeOF7+1eRGrvZhfSvcvWoreVgg== X-Received: by 2002:a92:cbc2:0:b0:35a:f493:5667 with SMTP id s2-20020a92cbc2000000b0035af4935667mr5462158ilq.20.1700423317470; Sun, 19 Nov 2023 11:48:37 -0800 (PST) Received: from KASONG-MB2.tencent.com ([115.171.40.79]) by smtp.gmail.com with ESMTPSA id a6-20020aa78646000000b006cb7feae74fsm1237140pfo.164.2023.11.19.11.48.34 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Sun, 19 Nov 2023 11:48:36 -0800 (PST) From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , "Huang, Ying" , David Hildenbrand , Hugh Dickins , Johannes Weiner , Matthew Wilcox , Michal Hocko , linux-kernel@vger.kernel.org, Kairui Song Subject: [PATCH 11/24] mm/swap: also handle swapcache lookup in swapin_readahead Date: Mon, 20 Nov 2023 03:47:27 +0800 Message-ID: <20231119194740.94101-12-ryncsn@gmail.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com> References: <20231119194740.94101-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Kairui Song No feature change, just prepare for later commits. 
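In outline, the point of the new three-state result is that the caller can tell a swap cache hit, a cache-mediated read and a cache-bypassing read apart, which the old bool could not express. Below is a minimal caller-side sketch, as illustrative userspace C rather than the kernel code; only the enum values and the two derived conditions are taken from the diff that follows, everything else (main, printf) is made up for the example.

#include <stdbool.h>
#include <stdio.h>

/* Values mirror the enum this patch adds to mm/swap.h. */
enum swap_cache_result {
	SWAP_CACHE_HIT,		/* folio was already in the swap cache */
	SWAP_CACHE_MISS,	/* read in through the swap cache (readahead) */
	SWAP_CACHE_BYPASS,	/* read synchronously, swap cache skipped */
};

static void handle_swapin_result(enum swap_cache_result res)
{
	/* Anything but a hit means the backing device was touched: major fault. */
	bool major_fault = (res != SWAP_CACHE_HIT);
	/* Anything but a bypass means the folio lives in the swap cache. */
	bool in_swapcache = (res != SWAP_CACHE_BYPASS);

	printf("major=%d swapcache=%d\n", major_fault, in_swapcache);
}

int main(void)
{
	handle_swapin_result(SWAP_CACHE_HIT);
	handle_swapin_result(SWAP_CACHE_MISS);
	handle_swapin_result(SWAP_CACHE_BYPASS);
	return 0;
}

Folding the cache lookup into swapin_readahead() keeps this decision in one place instead of leaving each caller to redo the lookup and guess how the page was obtained.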
Signed-off-by: Kairui Song --- mm/memory.c | 61 +++++++++++++++++++++++-------------------------- mm/swap.h | 10 ++++++-- mm/swap_state.c | 26 +++++++++++++-------- mm/swapfile.c | 30 +++++++++++------------- 4 files changed, 66 insertions(+), 61 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index f4237a2e3b93..22af9f3e8c75 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3786,13 +3786,13 @@ static vm_fault_t handle_pte_marker(struct vm_fault= *vmf) vm_fault_t do_swap_page(struct vm_fault *vmf) { struct vm_area_struct *vma =3D vmf->vma; - struct folio *swapcache, *folio =3D NULL; + struct folio *swapcache =3D NULL, *folio =3D NULL; + enum swap_cache_result cache_result; struct page *page; struct swap_info_struct *si =3D NULL; rmap_t rmap_flags =3D RMAP_NONE; bool exclusive =3D false; swp_entry_t entry; - bool swapcached; pte_t pte; vm_fault_t ret =3D 0; =20 @@ -3850,42 +3850,37 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) if (unlikely(!si)) goto out; =20 - folio =3D swap_cache_get_folio(entry, vma, vmf->address); - if (folio) - page =3D folio_file_page(folio, swp_offset(entry)); - swapcache =3D folio; - - if (!folio) { - page =3D swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, - vmf, &swapcached); - if (page) { - folio =3D page_folio(page); - if (swapcached) - swapcache =3D folio; - } else { + page =3D swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, + vmf, &cache_result); + if (page) { + folio =3D page_folio(page); + if (cache_result !=3D SWAP_CACHE_HIT) { + /* Had to read the page from swap area: Major fault */ + ret =3D VM_FAULT_MAJOR; + count_vm_event(PGMAJFAULT); + count_memcg_event_mm(vma->vm_mm, PGMAJFAULT); + } + if (cache_result !=3D SWAP_CACHE_BYPASS) + swapcache =3D folio; + if (PageHWPoison(page)) { /* - * Back out if somebody else faulted in this pte - * while we released the pte lock. + * hwpoisoned dirty swapcache pages are kept for killing + * owner processes (which may be unknown at hwpoison time) */ - vmf->pte =3D pte_offset_map_lock(vma->vm_mm, vmf->pmd, - vmf->address, &vmf->ptl); - if (likely(vmf->pte && - pte_same(ptep_get(vmf->pte), vmf->orig_pte))) - ret =3D VM_FAULT_OOM; - goto unlock; + ret =3D VM_FAULT_HWPOISON; + goto out_release; } - - /* Had to read the page from swap area: Major fault */ - ret =3D VM_FAULT_MAJOR; - count_vm_event(PGMAJFAULT); - count_memcg_event_mm(vma->vm_mm, PGMAJFAULT); - } else if (PageHWPoison(page)) { + } else { /* - * hwpoisoned dirty swapcache pages are kept for killing - * owner processes (which may be unknown at hwpoison time) + * Back out if somebody else faulted in this pte + * while we released the pte lock. 
*/ - ret =3D VM_FAULT_HWPOISON; - goto out_release; + vmf->pte =3D pte_offset_map_lock(vma->vm_mm, vmf->pmd, + vmf->address, &vmf->ptl); + if (likely(vmf->pte && + pte_same(ptep_get(vmf->pte), vmf->orig_pte))) + ret =3D VM_FAULT_OOM; + goto unlock; } =20 ret |=3D folio_lock_or_retry(folio, vmf); diff --git a/mm/swap.h b/mm/swap.h index a9a654af791e..ac9136eee690 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -30,6 +30,12 @@ extern struct address_space *swapper_spaces[]; (&swapper_spaces[swp_type(entry)][swp_offset(entry) \ >> SWAP_ADDRESS_SPACE_SHIFT]) =20 +enum swap_cache_result { + SWAP_CACHE_HIT, + SWAP_CACHE_MISS, + SWAP_CACHE_BYPASS, +}; + void show_swap_cache_info(void); bool add_to_swap(struct folio *folio); void *get_shadow_from_swap_cache(swp_entry_t entry); @@ -55,7 +61,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, g= fp_t gfp_mask, struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag, struct mempolicy *mpol, pgoff_t ilx); struct page *swapin_readahead(swp_entry_t entry, gfp_t flag, - struct vm_fault *vmf, bool *swapcached); + struct vm_fault *vmf, enum swap_cache_result *result); =20 static inline unsigned int folio_swap_flags(struct folio *folio) { @@ -92,7 +98,7 @@ static inline struct page *swap_cluster_readahead(swp_ent= ry_t entry, } =20 static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mas= k, - struct vm_fault *vmf, bool *swapcached) + struct vm_fault *vmf, enum swap_cache_result *result) { return NULL; } diff --git a/mm/swap_state.c b/mm/swap_state.c index d87c20f9f7ec..e96d63bf8a22 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -908,8 +908,7 @@ static struct page *swapin_no_readahead(swp_entry_t ent= ry, gfp_t gfp_mask, * @entry: swap entry of this memory * @gfp_mask: memory allocation flags * @vmf: fault information - * @swapcached: pointer to a bool used as indicator if the - * page is swapped in through swapcache. + * @result: a return value to indicate swap cache usage. * * Returns the struct page for entry and addr, after queueing swapin. * @@ -918,30 +917,39 @@ static struct page *swapin_no_readahead(swp_entry_t e= ntry, gfp_t gfp_mask, * or vma-based(ie, virtual address based on faulty address) readahead. 
*/ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask, - struct vm_fault *vmf, bool *swapcached) + struct vm_fault *vmf, enum swap_cache_result *result) { + enum swap_cache_result cache_result; struct swap_info_struct *si; struct mempolicy *mpol; + struct folio *folio; struct page *page; pgoff_t ilx; - bool cached; + + folio =3D swap_cache_get_folio(entry, vmf->vma, vmf->address); + if (folio) { + page =3D folio_file_page(folio, swp_offset(entry)); + cache_result =3D SWAP_CACHE_HIT; + goto done; + } =20 si =3D swp_swap_info(entry); mpol =3D get_vma_policy(vmf->vma, vmf->address, 0, &ilx); if (swap_use_no_readahead(si, swp_offset(entry))) { page =3D swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm= ); - cached =3D false; + cache_result =3D SWAP_CACHE_BYPASS; } else if (swap_use_vma_readahead(si)) { page =3D swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf); - cached =3D true; + cache_result =3D SWAP_CACHE_MISS; } else { page =3D swap_cluster_readahead(entry, gfp_mask, mpol, ilx); - cached =3D true; + cache_result =3D SWAP_CACHE_MISS; } mpol_cond_put(mpol); =20 - if (swapcached) - *swapcached =3D cached; +done: + if (result) + *result =3D cache_result; =20 return page; } diff --git a/mm/swapfile.c b/mm/swapfile.c index 01c3f53b6521..b6d57fff5e21 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1822,13 +1822,21 @@ static int unuse_pte_range(struct vm_area_struct *v= ma, pmd_t *pmd, =20 si =3D swap_info[type]; do { - struct folio *folio; + struct page *page; unsigned long offset; unsigned char swp_count; + struct folio *folio =3D NULL; swp_entry_t entry; int ret; pte_t ptent; =20 + struct vm_fault vmf =3D { + .vma =3D vma, + .address =3D addr, + .real_address =3D addr, + .pmd =3D pmd, + }; + if (!pte++) { pte =3D pte_offset_map(pmd, addr); if (!pte) @@ -1847,22 +1855,10 @@ static int unuse_pte_range(struct vm_area_struct *v= ma, pmd_t *pmd, offset =3D swp_offset(entry); pte_unmap(pte); pte =3D NULL; - - folio =3D swap_cache_get_folio(entry, vma, addr); - if (!folio) { - struct page *page; - struct vm_fault vmf =3D { - .vma =3D vma, - .address =3D addr, - .real_address =3D addr, - .pmd =3D pmd, - }; - - page =3D swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, - &vmf, NULL); - if (page) - folio =3D page_folio(page); - } + page =3D swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, + &vmf, NULL); + if (page) + folio =3D page_folio(page); if (!folio) { /* * The entry could have been freed, and will not --=20 2.42.0 From nobody Thu Dec 18 18:00:49 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BE2FC54FB9 for ; Sun, 19 Nov 2023 19:49:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231778AbjKSTtV (ORCPT ); Sun, 19 Nov 2023 14:49:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34266 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231644AbjKSTtB (ORCPT ); Sun, 19 Nov 2023 14:49:01 -0500 Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com [IPv6:2607:f8b0:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 32D16D6E for ; Sun, 19 Nov 2023 11:48:41 -0800 (PST) Received: by mail-pf1-x42b.google.com with SMTP id d2e1a72fcca58-6c4cf0aea06so3616039b3a.0 for ; Sun, 19 Nov 2023 11:48:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; 
s=20230601; t=1700423320; x=1701028120; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=ppA1yq5tSXPW7Cplga4R0sTrIzgEEoc6Te1dihtlR90=; b=WSg1wBRpJmuTftOP9HXoBZo5KW4YFAwao7tV1HOTnZUWYybr4Vw5Vtm8iGjBbLHDt2 rUKfm49N5LoVnKrQIy5c0pHG3t6wT0gAj7cuP6A6ADVwvDxGXFrj31z5JFGjVIg5U5yo lB6veVIkWScSgwLVHu02G+rK23A3ex5VL95WJ/x9lhj0CJPwp5p6/R6bmTNYvAPsdOki lMK/ho/QlpYtFnndd6ATcsy2Ex/Hprfx77X9Y0zbWKFrWYKJKkSsElGJdJq+yfn+47zO TmGadhtt3i17FSs4vnDZilPH4Dc8nQkdCGU/sJZt3ukcEcnC7rkUckv/PQpbFYYhVcxz OTmQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700423320; x=1701028120; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=ppA1yq5tSXPW7Cplga4R0sTrIzgEEoc6Te1dihtlR90=; b=USGLHsPSwhnMNI+myGSlSaTSIYB6Vr2iYkUfedA0UciFpWaoX8AbbbDLeyuuVlu6rp lmdRy6NfKpmDsZHGMc70MBb+UEJLmzRnS+iOjBkGQQavmnY9BxXPON06YC8Q7fdOx037 N2ltz6APhtLpqBk9O8tdEFxjui08584qbIY5RoJY6EmVA7zY0T2OGBa9dBO7wT1G1G6c W2B/s1w35CTERmUhdEEqpq5WqNVv4svI6MtedxzSSglXbM5Nz/UJU+9leOBlFc0PFhaC 0GHGIKDQ6++RhAYMBvU58v+T8bvmAibgixhJu8Mx9v51hy7DQM6Zfoa9ADoqNVyNl1dk gfQw== X-Gm-Message-State: AOJu0Yzj+pwdJRlgqOTS36CH6eTqT4UilA050mB6wrb2aAaXciXwXCDZ uijzgzsFF1dVn1kZcJ8XQQ0= X-Google-Smtp-Source: AGHT+IEBKHSE8kWnJJK8rM1dJTQqf5O5zE26MX7FuQ337U8kKzHMmekWO97Hh8Tc9mKAekA3sDGJiw== X-Received: by 2002:a05:6a00:1ca9:b0:6cb:a434:b58f with SMTP id y41-20020a056a001ca900b006cba434b58fmr619405pfw.33.1700423320649; Sun, 19 Nov 2023 11:48:40 -0800 (PST) Received: from KASONG-MB2.tencent.com ([115.171.40.79]) by smtp.gmail.com with ESMTPSA id a6-20020aa78646000000b006cb7feae74fsm1237140pfo.164.2023.11.19.11.48.37 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Sun, 19 Nov 2023 11:48:40 -0800 (PST) From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , "Huang, Ying" , David Hildenbrand , Hugh Dickins , Johannes Weiner , Matthew Wilcox , Michal Hocko , linux-kernel@vger.kernel.org, Kairui Song Subject: [PATCH 12/24] mm/swap: simplify arguments for swap_cache_get_folio Date: Mon, 20 Nov 2023 03:47:28 +0800 Message-ID: <20231119194740.94101-13-ryncsn@gmail.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com> References: <20231119194740.94101-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Kairui Song There are only two caller now, simplify the arguments. Signed-off-by: Kairui Song --- mm/shmem.c | 2 +- mm/swap.h | 2 +- mm/swap_state.c | 15 +++++++-------- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index 0d1ce70bce38..72239061c655 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1875,7 +1875,7 @@ static int shmem_swapin_folio(struct inode *inode, pg= off_t index, } =20 /* Look it up and read it in.. */ - folio =3D swap_cache_get_folio(swap, NULL, 0); + folio =3D swap_cache_get_folio(swap, NULL); if (!folio) { /* Or update major stats only when swapin succeeds?? 
*/ if (fault_type) { diff --git a/mm/swap.h b/mm/swap.h index ac9136eee690..e43e965f123f 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -47,7 +47,7 @@ void delete_from_swap_cache(struct folio *folio); void clear_shadow_from_swap_cache(int type, unsigned long begin, unsigned long end); struct folio *swap_cache_get_folio(swp_entry_t entry, - struct vm_area_struct *vma, unsigned long addr); + struct vm_fault *vmf); struct folio *filemap_get_incore_folio(struct address_space *mapping, pgoff_t index); =20 diff --git a/mm/swap_state.c b/mm/swap_state.c index e96d63bf8a22..91461e26a8cc 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -334,8 +334,7 @@ static inline bool swap_use_vma_readahead(struct swap_i= nfo_struct *si) * * Caller must lock the swap device or hold a reference to keep it valid. */ -struct folio *swap_cache_get_folio(swp_entry_t entry, - struct vm_area_struct *vma, unsigned long addr) +struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_fault *vmf) { struct folio *folio; =20 @@ -352,22 +351,22 @@ struct folio *swap_cache_get_folio(swp_entry_t entry, return folio; =20 readahead =3D folio_test_clear_readahead(folio); - if (vma && vma_ra) { + if (vmf && vma_ra) { unsigned long ra_val; int win, hits; =20 - ra_val =3D GET_SWAP_RA_VAL(vma); + ra_val =3D GET_SWAP_RA_VAL(vmf->vma); win =3D SWAP_RA_WIN(ra_val); hits =3D SWAP_RA_HITS(ra_val); if (readahead) hits =3D min_t(int, hits + 1, SWAP_RA_HITS_MAX); - atomic_long_set(&vma->swap_readahead_info, - SWAP_RA_VAL(addr, win, hits)); + atomic_long_set(&vmf->vma->swap_readahead_info, + SWAP_RA_VAL(vmf->address, win, hits)); } =20 if (readahead) { count_vm_event(SWAP_RA_HIT); - if (!vma || !vma_ra) + if (!vmf || !vma_ra) atomic_inc(&swapin_readahead_hits); } } else { @@ -926,7 +925,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t = gfp_mask, struct page *page; pgoff_t ilx; =20 - folio =3D swap_cache_get_folio(entry, vmf->vma, vmf->address); + folio =3D swap_cache_get_folio(entry, vmf); if (folio) { page =3D folio_file_page(folio, swp_offset(entry)); cache_result =3D SWAP_CACHE_HIT; --=20 2.42.0 From nobody Thu Dec 18 18:00:49 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C1A6C54FB9 for ; Sun, 19 Nov 2023 19:49:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231671AbjKSTt2 (ORCPT ); Sun, 19 Nov 2023 14:49:28 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58960 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231661AbjKSTtJ (ORCPT ); Sun, 19 Nov 2023 14:49:09 -0500 Received: from mail-oi1-x231.google.com (mail-oi1-x231.google.com [IPv6:2607:f8b0:4864:20::231]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B842CD49 for ; Sun, 19 Nov 2023 11:48:44 -0800 (PST) Received: by mail-oi1-x231.google.com with SMTP id 5614622812f47-3b2ec5ee2e4so2610802b6e.3 for ; Sun, 19 Nov 2023 11:48:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700423324; x=1701028124; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=5hBTip8l+rVWgXZvgeAFfyX4NMH7GqXDBsHUYVWi3uA=; b=UtyXFX0e8/Ek/UERAhjAlWMzQLzTdg1YsOffvyqWYEG5x7xqYEA9coS/BQHM1Cx7ZG 
tI/6PmGbi7sUpz5Tq/0QuBh3z0CILn4eD0vrPpcNx2tgfkp1qPipIaWVSPUzQThbLdwN jlhbGQlKII2fg4nKvDsIkcqqfXIOQhJJFHgqSqIaJYIMQVNe6xzCdwWD79+/3KnFxkIN TcDMpUvQHTpDKP4f4GyW630CuWVXw9BO1jMVQajqB0yb+YsXe7WCcRiA0qK1de1MJyg+ jkyaB/0+PCZ0ulxvictw8dWMjgSQP2EgMZVMsJwgvOz/9nsdzaal7u+BNGcF6iepH6LW GH3Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700423324; x=1701028124; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=5hBTip8l+rVWgXZvgeAFfyX4NMH7GqXDBsHUYVWi3uA=; b=IocGKSInJPDQxeJAX74SOzPcR/aQYuKSLAwpo10JZhbIWYBsws+Ga83ReVt0EGQtiu +LMju/kXgzw0xd/EVtP+/ExMt5HugZddezpE1JCTS4W5HbGjDC53m+Zna75FrUBhCM3+ Z7mBJU203kScmNGpY8x2ZDEQKiK/LN8Z+0TS6oAmmhNrRN/Ld96yCXEoPQmfC8hcxCuT 5b4C2bguz/LRuWkYXSUoU+RQZbPKTtiqHUCy38qWGjpvT25FSdot6GMZpSjeLzXYAmz2 q2KE9Q/iZZED7lspvd3TFSwdZR6tdD0fSxPNT2ITQcAmUVSk0/ryCF3mS2JXoJXULVcY JbDA== X-Gm-Message-State: AOJu0Yzm8pgyUo1gRv9vKc/yrbniJtnpP7BTnHktKfsqhIY4c2ZhVQeE bTU+SJidFmF4h7wdXQui5aw= X-Google-Smtp-Source: AGHT+IEailrfhPaQZQEKYeEqmxWit7bC+nmAWDqCtHjGUNXtOgvTTA/qDqxaYEq3DlyE+hI13556Tg== X-Received: by 2002:a05:6808:3a10:b0:3ae:bae2:fa76 with SMTP id gr16-20020a0568083a1000b003aebae2fa76mr10473632oib.36.1700423323924; Sun, 19 Nov 2023 11:48:43 -0800 (PST) Received: from KASONG-MB2.tencent.com ([115.171.40.79]) by smtp.gmail.com with ESMTPSA id a6-20020aa78646000000b006cb7feae74fsm1237140pfo.164.2023.11.19.11.48.40 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Sun, 19 Nov 2023 11:48:43 -0800 (PST) From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , "Huang, Ying" , David Hildenbrand , Hugh Dickins , Johannes Weiner , Matthew Wilcox , Michal Hocko , linux-kernel@vger.kernel.org, Kairui Song Subject: [PATCH 13/24] swap: simplify swap_cache_get_folio Date: Mon, 20 Nov 2023 03:47:29 +0800 Message-ID: <20231119194740.94101-14-ryncsn@gmail.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com> References: <20231119194740.94101-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Kairui Song Rearrange the if statement, reduce the code indent, no feature change. Signed-off-by: Kairui Song --- mm/swap_state.c | 58 ++++++++++++++++++++++++------------------------- 1 file changed, 28 insertions(+), 30 deletions(-) diff --git a/mm/swap_state.c b/mm/swap_state.c index 91461e26a8cc..3b5a34f47192 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -336,41 +336,39 @@ static inline bool swap_use_vma_readahead(struct swap= _info_struct *si) */ struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_fault *vmf) { + bool vma_ra, readahead; struct folio *folio; =20 folio =3D filemap_get_folio(swap_address_space(entry), swp_offset(entry)); - if (!IS_ERR(folio)) { - bool vma_ra =3D swap_use_vma_readahead(swp_swap_info(entry)); - bool readahead; + if (IS_ERR(folio)) + return NULL; =20 - /* - * At the moment, we don't support PG_readahead for anon THP - * so let's bail out rather than confusing the readahead stat. 
- */ - if (unlikely(folio_test_large(folio))) - return folio; - - readahead =3D folio_test_clear_readahead(folio); - if (vmf && vma_ra) { - unsigned long ra_val; - int win, hits; - - ra_val =3D GET_SWAP_RA_VAL(vmf->vma); - win =3D SWAP_RA_WIN(ra_val); - hits =3D SWAP_RA_HITS(ra_val); - if (readahead) - hits =3D min_t(int, hits + 1, SWAP_RA_HITS_MAX); - atomic_long_set(&vmf->vma->swap_readahead_info, - SWAP_RA_VAL(vmf->address, win, hits)); - } + /* + * At the moment, we don't support PG_readahead for anon THP + * so let's bail out rather than confusing the readahead stat. + */ + if (unlikely(folio_test_large(folio))) + return folio; =20 - if (readahead) { - count_vm_event(SWAP_RA_HIT); - if (!vmf || !vma_ra) - atomic_inc(&swapin_readahead_hits); - } - } else { - folio =3D NULL; + vma_ra =3D swap_use_vma_readahead(swp_swap_info(entry)); + readahead =3D folio_test_clear_readahead(folio); + if (vmf && vma_ra) { + unsigned long ra_val; + int win, hits; + + ra_val =3D GET_SWAP_RA_VAL(vmf->vma); + win =3D SWAP_RA_WIN(ra_val); + hits =3D SWAP_RA_HITS(ra_val); + if (readahead) + hits =3D min_t(int, hits + 1, SWAP_RA_HITS_MAX); + atomic_long_set(&vmf->vma->swap_readahead_info, + SWAP_RA_VAL(vmf->address, win, hits)); + } + + if (readahead) { + count_vm_event(SWAP_RA_HIT); + if (!vmf || !vma_ra) + atomic_inc(&swapin_readahead_hits); } =20 return folio; --=20 2.42.0 From nobody Thu Dec 18 18:00:49 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C49FEC54FB9 for ; Sun, 19 Nov 2023 19:49:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231686AbjKSTtf (ORCPT ); Sun, 19 Nov 2023 14:49:35 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59094 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231819AbjKSTtL (ORCPT ); Sun, 19 Nov 2023 14:49:11 -0500 Received: from mail-pf1-x42e.google.com (mail-pf1-x42e.google.com [IPv6:2607:f8b0:4864:20::42e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 02CCD10F9 for ; Sun, 19 Nov 2023 11:48:47 -0800 (PST) Received: by mail-pf1-x42e.google.com with SMTP id d2e1a72fcca58-6c10f098a27so2928347b3a.2 for ; Sun, 19 Nov 2023 11:48:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700423327; x=1701028127; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=VCznjXM2MgFt7+7dcBajlWUiOZC/KHYcLFwrnZNwV5s=; b=OL7qQYlbB8hZlxpYO7qV2qBGwlsMJgdoEyrmUmaQqxtNVB9rDAhwZdFWAhlve4OSi9 /t7H1ppiyiZZMaIS8lSi8TZwTSJmtNfeFvwPwN4xtFw+Lb6Pw6MAP8VZGobsgNW2HqOe qXkiXd0BFvibp25yOSFpiVT46xVrwUvPv6Qpl5iVMOeQDhJ8khrvCnYI6DI5qWalOehl X8H/Ys4Dhip8B5X9NfpvXcw+v/NJXh5Nknrpxfhgrmqfen8w0TBjrbc9TKyOl2MqTNGF lIxALhuVLsXL0xlnv7wO+q0cXsd2Dh3xLmtCewkTyGRY5E8MFX2vpKanUPsJ7gTEWbIs UYhQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700423327; x=1701028127; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=VCznjXM2MgFt7+7dcBajlWUiOZC/KHYcLFwrnZNwV5s=; b=sG3CPKfP3eyDZGu/L8xdIP8hq7wFPwZ2gmbT4pkStfDstUN3WuOdx0iS/65yz/P8v3 TDJYLvrvFVD8Kjup1RMwVH+BN1UcacIsQeZFmZc59mvusQsSoa4dTGr4o2mTPohZYRD5 
FCFIxUCGghR0oDECMS2eldHmN6o71Jzv1kZB7yPRY1cmpjyY8gFgADakXLY/pl5g7/gB 9Xjzg3RjCC+z1j74theCD1SNeF0VDGqqND44ft9QG02orMzrm7UpQAHJp4ozsX7/pjqS 8wuFOcBd/2HVRNskt6N4DypObx7AfR00x5/AWkW9QMNY6ZcOAkuyh6kI1d4veQ3styRb cPGQ== X-Gm-Message-State: AOJu0YxJ7CSMlf9tUryB+FhQrhJouHhghqsYE0DFRWfxAgeLa3gefD2q 3nzG1eScsT6x5ymjL3SnZVw= X-Google-Smtp-Source: AGHT+IGzuAUevQ1XHN3DEflD4nhrPqC3JAr2xo9JJHGah1/1WAAz1GSa8PG8e1ODtV8IMaqzQtkuzA== X-Received: by 2002:a05:6a20:6a1a:b0:189:df1b:6616 with SMTP id p26-20020a056a206a1a00b00189df1b6616mr3107739pzk.15.1700423327040; Sun, 19 Nov 2023 11:48:47 -0800 (PST) Received: from KASONG-MB2.tencent.com ([115.171.40.79]) by smtp.gmail.com with ESMTPSA id a6-20020aa78646000000b006cb7feae74fsm1237140pfo.164.2023.11.19.11.48.44 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Sun, 19 Nov 2023 11:48:46 -0800 (PST) From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , "Huang, Ying" , David Hildenbrand , Hugh Dickins , Johannes Weiner , Matthew Wilcox , Michal Hocko , linux-kernel@vger.kernel.org, Kairui Song Subject: [PATCH 14/24] mm/swap: do shadow lookup as well when doing swap cache lookup Date: Mon, 20 Nov 2023 03:47:30 +0800 Message-ID: <20231119194740.94101-15-ryncsn@gmail.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com> References: <20231119194740.94101-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Kairui Song Make swap_cache_get_folio capable of returning the shadow value when the xarray contains a shadow instead of a valid folio. Just extend the arguments to be used later. Signed-off-by: Kairui Song --- mm/shmem.c | 2 +- mm/swap.h | 2 +- mm/swap_state.c | 11 +++++++---- 3 files changed, 9 insertions(+), 6 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index 72239061c655..f9ce4067c742 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1875,7 +1875,7 @@ static int shmem_swapin_folio(struct inode *inode, pg= off_t index, } =20 /* Look it up and read it in.. */ - folio =3D swap_cache_get_folio(swap, NULL); + folio =3D swap_cache_get_folio(swap, NULL, NULL); if (!folio) { /* Or update major stats only when swapin succeeds?? */ if (fault_type) { diff --git a/mm/swap.h b/mm/swap.h index e43e965f123f..da9deb5ba37d 100644 --- a/mm/swap.h +++ b/mm/swap.h @@ -47,7 +47,7 @@ void delete_from_swap_cache(struct folio *folio); void clear_shadow_from_swap_cache(int type, unsigned long begin, unsigned long end); struct folio *swap_cache_get_folio(swp_entry_t entry, - struct vm_fault *vmf); + struct vm_fault *vmf, void **shadowp); struct folio *filemap_get_incore_folio(struct address_space *mapping, pgoff_t index); =20 diff --git a/mm/swap_state.c b/mm/swap_state.c index 3b5a34f47192..e057c79fb06f 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -334,14 +334,17 @@ static inline bool swap_use_vma_readahead(struct swap= _info_struct *si) * * Caller must lock the swap device or hold a reference to keep it valid. 
*/ -struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_fault *vmf) +struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_fault *vmf= , void **shadowp) { bool vma_ra, readahead; struct folio *folio; =20 - folio =3D filemap_get_folio(swap_address_space(entry), swp_offset(entry)); - if (IS_ERR(folio)) + folio =3D filemap_get_entry(swap_address_space(entry), swp_offset(entry)); + if (xa_is_value(folio)) { + if (shadowp) + *shadowp =3D folio; return NULL; + } =20 /* * At the moment, we don't support PG_readahead for anon THP @@ -923,7 +926,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t = gfp_mask, struct page *page; pgoff_t ilx; =20 - folio =3D swap_cache_get_folio(entry, vmf); + folio =3D swap_cache_get_folio(entry, vmf, NULL); if (folio) { page =3D folio_file_page(folio, swp_offset(entry)); cache_result =3D SWAP_CACHE_HIT; --=20 2.42.0 From nobody Thu Dec 18 18:00:49 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A3A30C54FB9 for ; Sun, 19 Nov 2023 19:49:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231843AbjKSTtk (ORCPT ); Sun, 19 Nov 2023 14:49:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60996 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231716AbjKSTtO (ORCPT ); Sun, 19 Nov 2023 14:49:14 -0500 Received: from mail-oi1-x22a.google.com (mail-oi1-x22a.google.com [IPv6:2607:f8b0:4864:20::22a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 19686198C for ; Sun, 19 Nov 2023 11:48:51 -0800 (PST) Received: by mail-oi1-x22a.google.com with SMTP id 5614622812f47-3b6ce6fac81so2425834b6e.1 for ; Sun, 19 Nov 2023 11:48:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700423330; x=1701028130; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=aDkLnQICnVgpq4/orPONBtwFkKKcmzZtjb3XACAs/tw=; b=RXAJrU5+u3NIqZYjCeSgFrmFpHZ/y80OxLUHyDQIEANThnEXlB8Gkpn81diIPTlg0+ LNxnLDXmDvmkERe656iIzGP7k3hNqIhB6SSi/J9+OKY6c+KWH8oRWIzOBBMJO6wcwlGi DRcpDg4ti1nvZUwie9yI3HvBJjdwO6VAIZXFGdNan0Gvmhucnm83P8QjnnhNY+YoEcyM Lm9MWaXUevxHsmyQ2IFJUYneVbg48NBUWcJC7amwYWZ7FJPkan8bntgg3v56+ClzdKG0 +zYvV0F+r2Ow54NGWaxxFozYP+FGEXgysJ6pe93gP/MKf64MMSRXmz+L5Vh/W3FpuXe/ Fk2A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700423330; x=1701028130; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=aDkLnQICnVgpq4/orPONBtwFkKKcmzZtjb3XACAs/tw=; b=Za39uM4eYlCQfBlWgYJXm5pe9euZPGAWvfjMgc/jRVMV+VX7+m2zGCb0YAQxsZOUmg s9EXHErISWWJrZ6fSK7D3RvSnK5pXVEMp5UoQ0ziF+GjMLWb0n9KhnXX73wrf4P89qCv zpQ/lJNch9zGDqRX7xtPwR4cXdFdb+alZ71APkYD6+8tStXiT1KDRzR6toUjWvUr4U1N +xh5f7KrkfAbV8e4GvbhJexiriJDwLRlxFABc83htJZ1P176sW6tRirkn5rWhVe7CiWP brQXhKfpgtCmQOUFe2wATPrX6eg/19ytwhJ9b5yNWwksaXxqUURn/SgIuXeCLz854PY9 iBVA== X-Gm-Message-State: AOJu0YxtSdU9Kc5gCbMaSsGWeFkMDgGriAUEhTUlb6OOKfhB8eqyMOej i6GdzL0P6gdRYQHSTTpmXd4= X-Google-Smtp-Source: AGHT+IHnf4WdU+15DFAUt+1f+EUXMBC9sDjxOomJ4RYRSV1AA/+YKlySoWIJz9+C0DlwqbVhK6dXxQ== X-Received: by 2002:a05:6808:1452:b0:3af:983a:8129 with SMTP id 
x18-20020a056808145200b003af983a8129mr7743023oiv.53.1700423330257; Sun, 19 Nov 2023 11:48:50 -0800 (PST) Received: from KASONG-MB2.tencent.com ([115.171.40.79]) by smtp.gmail.com with ESMTPSA id a6-20020aa78646000000b006cb7feae74fsm1237140pfo.164.2023.11.19.11.48.47 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Sun, 19 Nov 2023 11:48:49 -0800 (PST) From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , "Huang, Ying" , David Hildenbrand , Hugh Dickins , Johannes Weiner , Matthew Wilcox , Michal Hocko , linux-kernel@vger.kernel.org, Kairui Song Subject: [PATCH 15/24] mm/swap: avoid an duplicated swap cache lookup for SYNCHRONOUS_IO device Date: Mon, 20 Nov 2023 03:47:31 +0800 Message-ID: <20231119194740.94101-16-ryncsn@gmail.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com> References: <20231119194740.94101-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Kairui Song When a xa_value is returned by the cache lookup, keep it to be used later for workingset refault check instead of doing the looking up again in swapin_no_readahead. This does have a side effect of making swapoff also triggers workingset check, but should be fine since swapoff does effect the workload in many ways already. Signed-off-by: Kairui Song --- mm/swap_state.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/mm/swap_state.c b/mm/swap_state.c index e057c79fb06f..51de2a0412df 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -872,7 +872,6 @@ static struct page *swapin_no_readahead(swp_entry_t ent= ry, gfp_t gfp_mask, { struct folio *folio; struct page *page; - void *shadow =3D NULL; =20 page =3D alloc_pages_mpol(gfp_mask, 0, mpol, ilx, numa_node_id()); folio =3D (struct folio *)page; @@ -888,10 +887,6 @@ static struct page *swapin_no_readahead(swp_entry_t en= try, gfp_t gfp_mask, =20 mem_cgroup_swapin_uncharge_swap(entry); =20 - shadow =3D get_shadow_from_swap_cache(entry); - if (shadow) - workingset_refault(folio, shadow); - folio_add_lru(folio); =20 /* To provide entry to swap_readpage() */ @@ -922,11 +917,12 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_= t gfp_mask, enum swap_cache_result cache_result; struct swap_info_struct *si; struct mempolicy *mpol; + void *shadow =3D NULL; struct folio *folio; struct page *page; pgoff_t ilx; =20 - folio =3D swap_cache_get_folio(entry, vmf, NULL); + folio =3D swap_cache_get_folio(entry, vmf, &shadow); if (folio) { page =3D folio_file_page(folio, swp_offset(entry)); cache_result =3D SWAP_CACHE_HIT; @@ -938,6 +934,8 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t = gfp_mask, if (swap_use_no_readahead(si, swp_offset(entry))) { page =3D swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm= ); cache_result =3D SWAP_CACHE_BYPASS; + if (shadow) + workingset_refault(page_folio(page), shadow); } else if (swap_use_vma_readahead(si)) { page =3D swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf); cache_result =3D SWAP_CACHE_MISS; --=20 2.42.0 From nobody Thu Dec 18 18:00:49 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2EA00C54FB9 for ; Sun, 19 Nov 2023 19:49:49 +0000 (UTC) Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231773AbjKSTtu (ORCPT ); Sun, 19 Nov 2023 14:49:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34474 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231759AbjKSTtT (ORCPT ); Sun, 19 Nov 2023 14:49:19 -0500 Received: from mail-il1-x12b.google.com (mail-il1-x12b.google.com [IPv6:2607:f8b0:4864:20::12b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4722AD56 for ; Sun, 19 Nov 2023 11:48:54 -0800 (PST) Received: by mail-il1-x12b.google.com with SMTP id e9e14a558f8ab-3574297c79eso14110605ab.1 for ; Sun, 19 Nov 2023 11:48:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700423333; x=1701028133; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=9XjpTi/p1jP45iq6KrQ/NXIY9Wo9hQ4YnGSNNoLyAOA=; b=OV7vKBtM8QcyzCdAPX2k3P2b3HevaAp9wu9iHRuPa4/0gMJrOxaOgeg65Tr1JJe1az 6BwJEQF+1Af6fYV3TsevmGa7AH1ExN4MdYI31HH5+ab0NH+IkFs1c7pLpU9XZ9pX3oH3 rpkZ8mxpfjHMDLoBM6FMz2uBdiOjmJyWRkYw1aoCarjmAIVDTvh6z4+0ju77seNKv5YG vV5faKNmy6+it8DLlGQLOVFDPo366HXhDSM9+z1FqOcmluCS116AAO8YpiJ72+Co/pVX Yib44qDni3Y9lU6gKm0XMhBsjCQex8HvlnBJvejLCKAD3g5NdTgJVPGG6f8lSRm0fuXX oxjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700423333; x=1701028133; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=9XjpTi/p1jP45iq6KrQ/NXIY9Wo9hQ4YnGSNNoLyAOA=; b=w4wKDOuyZVN8Aj4Yv6ZQNVsiuosFV+IQgxpljsh6psSPUU6TzRBYhKDU07+Z9u6+0t URQzsnwB0Mg4hGwbBICrZXAjgAwJDl7S5owrSqrkdKvWgCfEE5htUQn/0BLwKCmc7s/+ A/EmIYJQhkTiCNeX6U/lrckFcwo3vsPS3a23szzhe0gGoAH7rpxhbzpIwNdKeoNSIxQA l0pvzAzHanw2YxfN3DKWkP/v62mAuDu5AhklBD+Md1mJHBX5efnu5QPKi1+HFf5a7TiZ hJo2H4jFJJxIVcuRtvrmgsH6V6xtPEX9q3VsfYUWlfMGK9KkF2wt0+i4NOK7ldklE6fw IHIg== X-Gm-Message-State: AOJu0YyAZ23s7N/L4ATLeQE0e4DHZ1iN8Lt5mshtLWZIsCt7wRtSNefy +QCdFF2SO1bHJ17c5oPD/ng= X-Google-Smtp-Source: AGHT+IFar6VHQuFklPrfy7vOCI8aqHj+K8WskagViLl4BbbPZwTP7uc43B3VnuoR6l/B/N6mqJ6dzw== X-Received: by 2002:a92:7a06:0:b0:359:5389:c0dc with SMTP id v6-20020a927a06000000b003595389c0dcmr6674915ilc.7.1700423333536; Sun, 19 Nov 2023 11:48:53 -0800 (PST) Received: from KASONG-MB2.tencent.com ([115.171.40.79]) by smtp.gmail.com with ESMTPSA id a6-20020aa78646000000b006cb7feae74fsm1237140pfo.164.2023.11.19.11.48.50 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Sun, 19 Nov 2023 11:48:52 -0800 (PST) From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , "Huang, Ying" , David Hildenbrand , Hugh Dickins , Johannes Weiner , Matthew Wilcox , Michal Hocko , linux-kernel@vger.kernel.org, Kairui Song Subject: [PATCH 16/24] mm/swap: reduce scope of get_swap_device in swapin path Date: Mon, 20 Nov 2023 03:47:32 +0800 Message-ID: <20231119194740.94101-17-ryncsn@gmail.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com> References: <20231119194740.94101-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Kairui Song Move get_swap_device into swapin_readahead, simplify the code and prepare for follow up commits. 
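Sketched below is the shape of that change as a toy userspace model, not the kernel code: the swapin path pins the swap device once up front, fails with -EBUSY if a concurrent swapoff already released it, and drops the reference before returning. The get_swap_device()/put_swap_device() helpers here are simplified stand-ins for the real ones in the diff further down.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct swap_device {
	bool alive;	/* cleared by a concurrent "swapoff" */
	int users;	/* references pinning the device */
};

/* Toy stand-in: try to pin the device; fails if swapoff tore it down. */
static struct swap_device *get_swap_device(struct swap_device *si)
{
	if (!si->alive)
		return NULL;
	si->users++;
	return si;
}

/* Toy stand-in: drop the pin taken above. */
static void put_swap_device(struct swap_device *si)
{
	si->users--;
}

/* Shape of the reworked swapin path: pin once, do all lookups, unpin. */
static int swapin_sketch(struct swap_device *dev)
{
	struct swap_device *si = get_swap_device(dev);

	if (!si)
		return -EBUSY;	/* swapoff won the race; caller re-checks the pte */

	/* ... swap cache lookup and readahead would happen here ... */

	put_swap_device(si);
	return 0;
}

int main(void)
{
	struct swap_device dev = { .alive = true, .users = 0 };

	printf("first attempt: %d\n", swapin_sketch(&dev));
	dev.alive = false;		/* simulate a racing swapoff */
	printf("after swapoff: %d\n", swapin_sketch(&dev));
	return 0;
}
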
For the later part in do_swap_page, using swp_swap_info directly is fine since in that context, the swap device is pinned by swapcache reference. Signed-off-by: Kairui Song --- mm/memory.c | 16 ++++------------ mm/swap_state.c | 8 ++++++-- mm/swapfile.c | 4 +++- 3 files changed, 13 insertions(+), 15 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index 22af9f3e8c75..e399b37ef395 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3789,7 +3789,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) struct folio *swapcache =3D NULL, *folio =3D NULL; enum swap_cache_result cache_result; struct page *page; - struct swap_info_struct *si =3D NULL; rmap_t rmap_flags =3D RMAP_NONE; bool exclusive =3D false; swp_entry_t entry; @@ -3845,14 +3844,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) goto out; } =20 - /* Prevent swapoff from happening to us. */ - si =3D get_swap_device(entry); - if (unlikely(!si)) - goto out; - page =3D swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf, &cache_result); - if (page) { + if (PTR_ERR(page) =3D=3D -EBUSY) { + goto out; + } else if (page) { folio =3D page_folio(page); if (cache_result !=3D SWAP_CACHE_HIT) { /* Had to read the page from swap area: Major fault */ @@ -3964,7 +3960,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) */ exclusive =3D true; } else if (exclusive && folio_test_writeback(folio) && - data_race(si->flags & SWP_STABLE_WRITES)) { + (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) { /* * This is tricky: not all swap backends support * concurrent page modifications while under writeback. @@ -4068,8 +4064,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) if (vmf->pte) pte_unmap_unlock(vmf->pte, vmf->ptl); out: - if (si) - put_swap_device(si); return ret; out_nomap: if (vmf->pte) @@ -4082,8 +4076,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) folio_unlock(swapcache); folio_put(swapcache); } - if (si) - put_swap_device(si); return ret; } =20 diff --git a/mm/swap_state.c b/mm/swap_state.c index 51de2a0412df..ff8a166603d0 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -922,6 +922,11 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t= gfp_mask, struct page *page; pgoff_t ilx; =20 + /* Prevent swapoff from happening to us */ + si =3D get_swap_device(entry); + if (unlikely(!si)) + return ERR_PTR(-EBUSY); + folio =3D swap_cache_get_folio(entry, vmf, &shadow); if (folio) { page =3D folio_file_page(folio, swp_offset(entry)); @@ -929,7 +934,6 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t = gfp_mask, goto done; } =20 - si =3D swp_swap_info(entry); mpol =3D get_vma_policy(vmf->vma, vmf->address, 0, &ilx); if (swap_use_no_readahead(si, swp_offset(entry))) { page =3D swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm= ); @@ -944,8 +948,8 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t = gfp_mask, cache_result =3D SWAP_CACHE_MISS; } mpol_cond_put(mpol); - done: + put_swap_device(si); if (result) *result =3D cache_result; =20 diff --git a/mm/swapfile.c b/mm/swapfile.c index b6d57fff5e21..925ad92486a4 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1857,7 +1857,9 @@ static int unuse_pte_range(struct vm_area_struct *vma= , pmd_t *pmd, pte =3D NULL; page =3D swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, &vmf, NULL); - if (page) + if (IS_ERR(page)) + return PTR_ERR(page); + else if (page) folio =3D page_folio(page); if (!folio) { /* --=20 2.42.0 From nobody Thu Dec 18 18:00:49 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA396C54FB9 for ; Sun, 19 Nov 2023 19:49:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231860AbjKSTty (ORCPT ); Sun, 19 Nov 2023 14:49:54 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34496 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231794AbjKSTt2 (ORCPT ); Sun, 19 Nov 2023 14:49:28 -0500 Received: from mail-pf1-x42c.google.com (mail-pf1-x42c.google.com [IPv6:2607:f8b0:4864:20::42c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8600D19A2 for ; Sun, 19 Nov 2023 11:48:57 -0800 (PST) Received: by mail-pf1-x42c.google.com with SMTP id d2e1a72fcca58-6c115026985so3870297b3a.1 for ; Sun, 19 Nov 2023 11:48:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700423337; x=1701028137; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=FCPW0dI33gWre1bmPp4wG3A/kpp2Dtpn+fZQd/yP8/o=; b=kb+kvJTuqgPIZdsg4KJJjy4Vh03F0rOOeZzSaU+peAxitPCQ0MWo9ABfEVs7rZwyfS z31d/fxAwEsdIqbjZigitUmP9kWKcjX+K7kvGXrk+fb5YgU8iRigF+GxYXaKWB9SfiOQ o1TE0qSQmYaTMuuPz62JyWBVG5Lt7KoaU0BiaG0w3fXdqFmbMohiTWl3FQD7v91bOdev F3ujCKuGqWI5jqzdWoXJY6fHyQJlfyOqEP4nTHU21LO6fAwJZxQuzqMnsXdpd3HNCroY W7dC4sMoYYvi+BDePxgluzFC8G5PJ8JxxGPsDNhg14Snar4a8rukXcTwGxz9wwFLh9aM USlg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700423337; x=1701028137; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=FCPW0dI33gWre1bmPp4wG3A/kpp2Dtpn+fZQd/yP8/o=; b=ghxlr0cmFUU6IM9BG3A5q1d1ZbEsQ8uQuAWia1HdXcXoZc5CO4csurkVoQc0KuMNJV +I7ok0PGWjwDJCxQilEj2atfxeOrowANrzkvGS7rEnkcfKOnA8aN4zDTcdlB4kG3xTlj o1Qd44hoRUtsnjwAIDxnecYTAWTt8+fYajUXG7u48TM/LOEs7QvghKwPD4JF0f4yBt/L VS4V5hvvkZWBsj0S/6lw0szRhnqJZBDtds5paY2CcJDyA5TL5p26NMq6Bb7AO6zxnXEG oE2UBF0MfSyTtiHK4K+4MhGE4sX1/m2wfEVyjU2EUOnk7Oz1QdHeLYxYBX1gAYWDORLN SPbg== X-Gm-Message-State: AOJu0YyCwTVLjsMQOax9N3n0KiqcN4EagWRPhLyxNv+I3WTA4bWHK8Pf SL76K+Zj9t6UHxzVfsO9T7s= X-Google-Smtp-Source: AGHT+IF9QoAWouGwWEXuyCQ9NU0AgZDxQWALIyGpxr9BM9RgNBGPUMqyqjGgp7YBTc1CUFiZCcylkA== X-Received: by 2002:a05:6a20:244e:b0:187:f23d:f9f2 with SMTP id t14-20020a056a20244e00b00187f23df9f2mr6715469pzc.58.1700423336700; Sun, 19 Nov 2023 11:48:56 -0800 (PST) Received: from KASONG-MB2.tencent.com ([115.171.40.79]) by smtp.gmail.com with ESMTPSA id a6-20020aa78646000000b006cb7feae74fsm1237140pfo.164.2023.11.19.11.48.53 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Sun, 19 Nov 2023 11:48:56 -0800 (PST) From: Kairui Song To: linux-mm@kvack.org Cc: Andrew Morton , "Huang, Ying" , David Hildenbrand , Hugh Dickins , Johannes Weiner , Matthew Wilcox , Michal Hocko , linux-kernel@vger.kernel.org, Kairui Song Subject: [PATCH 17/24] mm/swap: fix false error when swapoff race with swapin Date: Mon, 20 Nov 2023 03:47:33 +0800 Message-ID: <20231119194740.94101-18-ryncsn@gmail.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com> References: <20231119194740.94101-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; 
charset="utf-8" From: Kairui Song When swapoff race with swapin, get_swap_device may fail and cause swapin_readahead to return -EBUSY. In such case check if the page is already swapped in by swapoff path. Signed-off-by: Kairui Song --- mm/memory.c | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index e399b37ef395..620fa87557fd 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3846,9 +3846,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) =20 page =3D swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf, &cache_result); - if (PTR_ERR(page) =3D=3D -EBUSY) { - goto out; - } else if (page) { + if (IS_ERR_OR_NULL(page)) { + /* + * Back out if somebody else faulted in this pte + * while we released the pte lock. + */ + vmf->pte =3D pte_offset_map_lock(vma->vm_mm, vmf->pmd, + vmf->address, &vmf->ptl); + if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte))) { + if (!page) + ret =3D VM_FAULT_OOM; + else + ret =3D VM_FAULT_RETRY; + } + goto unlock; + } else { folio =3D page_folio(page); if (cache_result !=3D SWAP_CACHE_HIT) { /* Had to read the page from swap area: Major fault */ @@ -3866,17 +3878,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) ret =3D VM_FAULT_HWPOISON; goto out_release; } - } else { - /* - * Back out if somebody else faulted in this pte - * while we released the pte lock. - */ - vmf->pte =3D pte_offset_map_lock(vma->vm_mm, vmf->pmd, - vmf->address, &vmf->ptl); - if (likely(vmf->pte && - pte_same(ptep_get(vmf->pte), vmf->orig_pte))) - ret =3D VM_FAULT_OOM; - goto unlock; } =20 ret |=3D folio_lock_or_retry(folio, vmf); --=20 2.42.0 From nobody Thu Dec 18 18:00:49 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2018C54FB9 for ; Sun, 19 Nov 2023 19:50:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231700AbjKSTuD (ORCPT ); Sun, 19 Nov 2023 14:50:03 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34562 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231840AbjKSTtg (ORCPT ); Sun, 19 Nov 2023 14:49:36 -0500 Received: from mail-pf1-x42a.google.com (mail-pf1-x42a.google.com [IPv6:2607:f8b0:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 91A90D78 for ; Sun, 19 Nov 2023 11:49:00 -0800 (PST) Received: by mail-pf1-x42a.google.com with SMTP id d2e1a72fcca58-6c396ef9a3dso3070117b3a.1 for ; Sun, 19 Nov 2023 11:49:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1700423340; x=1701028140; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:reply-to:references :in-reply-to:message-id:date:subject:cc:to:from:from:to:cc:subject :date:message-id:reply-to; bh=ogKmBoCop4FjzMsXxzq+m99gJioFg3IUepp9Cnvtmi4=; b=NT0k2DETo/JYhGoZQ4D+JumGWDBfOY/56XsePUpsVAtRJmYdQk5TTXqqAJ1s9666nv sinsGxqPSBJrDP2+9F+KHJa1JXBrgXoqEe9VBaSDupZgufHG9KOIqIMISmIp3uQLO0RC AZVh+0Y0gkXcn/oaDGpZU4DRnbptL88CNGcb+Jmku6wCPRJgX4O9a4HRLuo52rR6vQwg Ez+BTVheKCadcnoUPjfqacuJ9BDnhknie24MJp6TPHjAmplsNhHazcPd/B9Z0UsSG5yT z2Tu2pBqFZjZSLjeDEr50AxaIfSVjoUHaC42f/Y9OALZJ1EiktzwAlAyH7ktbeS6N7Ju dQ5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700423340; x=1701028140; h=content-transfer-encoding:mime-version:reply-to:references 
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 18/24] mm/swap: introduce a helper for non-fault swapin
Date: Mon, 20 Nov 2023 03:47:34 +0800
Message-ID: <20231119194740.94101-19-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

There are two places where swapin is not directly caused by a page fault:
shmem swapin is invoked through the shmem mapping, and swapoff causes
swapin by walking the page table. Both used to construct a pseudo vm_fault
struct for the swapin function.

Shmem dropped the pseudo vm_fault recently in commit ddc1a5cbc05d
("mempolicy: alloc_pages_mpol() for NUMA policy without vma"), but the
swapoff path is still using one.

Introduce a helper for both callers. This saves stack usage on the swapoff
path and applies a unified swap cache and readahead policy check. It also
prepares for follow-up commits.

Signed-off-by: Kairui Song
---
 mm/shmem.c      | 51 +++++++++++++++----------------------------------
 mm/swap.h       | 11 +++++++++++
 mm/swap_state.c | 38 ++++++++++++++++++++++++++++++++++++
 mm/swapfile.c   | 23 +++++++++++------------
 4 files changed, 76 insertions(+), 47 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index f9ce4067c742..81d129aa66d1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1565,22 +1565,6 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
 static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
 			pgoff_t index, unsigned int order, pgoff_t *ilx);
 
-static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
-		struct shmem_inode_info *info, pgoff_t index)
-{
-	struct mempolicy *mpol;
-	pgoff_t ilx;
-	struct page *page;
-
-	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
-	page = swap_cluster_readahead(swap, gfp, mpol, ilx);
-	mpol_cond_put(mpol);
-
-	if (!page)
-		return NULL;
-	return page_folio(page);
-}
-
 /*
  * Make sure huge_gfp is always more limited than limit_gfp.
  * Some of the flags set permissions, while others set limitations.
@@ -1854,9 +1838,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct swap_info_struct *si;
+	enum swap_cache_result result;
 	struct folio *folio = NULL;
+	struct mempolicy *mpol;
+	struct page *page;
 	swp_entry_t swap;
+	pgoff_t ilx;
 	int error;
 
 	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
@@ -1866,34 +1853,30 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (is_poisoned_swp_entry(swap))
 		return -EIO;
 
-	si = get_swap_device(swap);
-	if (!si) {
+	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
+	page = swapin_page_non_fault(swap, gfp, mpol, ilx, fault_mm, &result);
+	mpol_cond_put(mpol);
+
+	if (PTR_ERR(page) == -EBUSY) {
 		if (!shmem_confirm_swap(mapping, index, swap))
 			return -EEXIST;
 		else
 			return -EINVAL;
-	}
-
-	/* Look it up and read it in.. */
-	folio = swap_cache_get_folio(swap, NULL, NULL);
-	if (!folio) {
-		/* Or update major stats only when swapin succeeds?? */
-		if (fault_type) {
+	} else if (!page) {
+		error = -ENOMEM;
+		goto failed;
+	} else {
+		folio = page_folio(page);
+		if (fault_type && result != SWAP_CACHE_HIT) {
 			*fault_type |= VM_FAULT_MAJOR;
 			count_vm_event(PGMAJFAULT);
 			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
-		/* Here we actually start the io */
-		folio = shmem_swapin_cluster(swap, gfp, info, index);
-		if (!folio) {
-			error = -ENOMEM;
-			goto failed;
-		}
 	}
 
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
-	if (!folio_test_swapcache(folio) ||
+	if ((result != SWAP_CACHE_BYPASS && !folio_test_swapcache(folio)) ||
 	    folio->swap.val != swap.val ||
 	    !shmem_confirm_swap(mapping, index, swap)) {
 		error = -EEXIST;
@@ -1930,7 +1913,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	delete_from_swap_cache(folio);
 	folio_mark_dirty(folio);
 	swap_free(swap);
-	put_swap_device(si);
 
 	*foliop = folio;
 	return 0;
@@ -1944,7 +1926,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		folio_unlock(folio);
 		folio_put(folio);
 	}
-	put_swap_device(si);
 
 	return error;
 }
diff --git a/mm/swap.h b/mm/swap.h
index da9deb5ba37d..b073c29c9790 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -62,6 +62,10 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 			struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
 			struct vm_fault *vmf, enum swap_cache_result *result);
+struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+			struct mempolicy *mpol, pgoff_t ilx,
+			struct mm_struct *mm,
+			enum swap_cache_result *result);
 
 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -103,6 +107,13 @@ static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 	return NULL;
 }
 
+static inline struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+			struct mempolicy *mpol, pgoff_t ilx, struct mm_struct *mm,
+			enum swap_cache_result *result)
+{
+	return NULL;
+}
+
 static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 {
 	return 0;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ff8a166603d0..eef66757c615 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -956,6 +956,44 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	return page;
 }
 
+struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+			struct mempolicy *mpol, pgoff_t ilx,
+			struct mm_struct *mm, enum swap_cache_result *result)
+{
+	enum swap_cache_result cache_result;
+	struct swap_info_struct *si;
+	void *shadow = NULL;
+	struct folio *folio;
+	struct page *page;
+
+	/* Prevent swapoff from happening to us */
+	si = get_swap_device(entry);
+	if (unlikely(!si))
+		return ERR_PTR(-EBUSY);
+
+	folio = swap_cache_get_folio(entry, NULL, &shadow);
+	if (folio) {
+		page = folio_file_page(folio, swp_offset(entry));
+		cache_result = SWAP_CACHE_HIT;
+		goto done;
+	}
+
+	if (swap_use_no_readahead(si, swp_offset(entry))) {
+		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, mm);
+		if (shadow)
+			workingset_refault(page_folio(page), shadow);
+		cache_result = SWAP_CACHE_BYPASS;
+	} else {
+		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+		cache_result = SWAP_CACHE_MISS;
+	}
+done:
+	put_swap_device(si);
+	if (result)
+		*result = cache_result;
+	return page;
+}
+
 #ifdef CONFIG_SYSFS
 static ssize_t vma_ra_enabled_show(struct kobject *kobj,
 				     struct kobj_attribute *attr, char *buf)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 925ad92486a4..f8c5096fe0f0 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1822,20 +1822,15 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 	si = swap_info[type];
 	do {
+		int ret;
+		pte_t ptent;
+		pgoff_t ilx;
+		swp_entry_t entry;
 		struct page *page;
 		unsigned long offset;
+		struct mempolicy *mpol;
 		unsigned char swp_count;
 		struct folio *folio = NULL;
-		swp_entry_t entry;
-		int ret;
-		pte_t ptent;
-
-		struct vm_fault vmf = {
-			.vma = vma,
-			.address = addr,
-			.real_address = addr,
-			.pmd = pmd,
-		};
 
 		if (!pte++) {
 			pte = pte_offset_map(pmd, addr);
@@ -1855,8 +1850,12 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		offset = swp_offset(entry);
 		pte_unmap(pte);
 		pte = NULL;
-		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-					&vmf, NULL);
+
+		mpol = get_vma_policy(vma, addr, 0, &ilx);
+		page = swapin_page_non_fault(entry, GFP_HIGHUSER_MOVABLE,
+					mpol, ilx, vma->vm_mm, NULL);
+		mpol_cond_put(mpol);
+
		if (IS_ERR(page))
			return PTR_ERR(page);
		else if (page)
-- 
2.42.0
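To illustrate the calling convention the new helper expects, here is a minimal
sketch of a hypothetical non-fault caller (not part of the patch; the example
function and its error handling are assumptions modelled on the swapoff and
shmem hunks above):

	/* Hypothetical example, mirroring how unuse_pte_range() uses the helper. */
	static int example_swapin_entry(struct vm_area_struct *vma,
					unsigned long addr, swp_entry_t entry)
	{
		enum swap_cache_result result;
		struct mempolicy *mpol;
		struct page *page;
		pgoff_t ilx;

		/* Resolve the NUMA policy for this address instead of faking a vm_fault. */
		mpol = get_vma_policy(vma, addr, 0, &ilx);
		page = swapin_page_non_fault(entry, GFP_HIGHUSER_MOVABLE,
					     mpol, ilx, vma->vm_mm, &result);
		mpol_cond_put(mpol);

		if (IS_ERR(page))	/* ERR_PTR(-EBUSY): the swap device went away. */
			return PTR_ERR(page);
		if (!page)		/* Allocation failure. */
			return -ENOMEM;

		/* result is SWAP_CACHE_HIT, SWAP_CACHE_BYPASS or SWAP_CACHE_MISS. */
		put_page(page);
		return 0;
	}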
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 19/24] shmem, swap: refactor error check on OOM or race
Date: Mon, 20 Nov 2023 03:47:35 +0800
Message-ID: <20231119194740.94101-20-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

On error, the caller should always check whether the swap entry has already
been swapped in by someone else, otherwise a benign race can be reported as
an error. Fix this potential false error.

Signed-off-by: Kairui Song
---
 mm/shmem.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 81d129aa66d1..6154b5b68b8f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1857,13 +1857,11 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	page = swapin_page_non_fault(swap, gfp, mpol, ilx, fault_mm, &result);
 	mpol_cond_put(mpol);
 
-	if (PTR_ERR(page) == -EBUSY) {
-		if (!shmem_confirm_swap(mapping, index, swap))
-			return -EEXIST;
+	if (IS_ERR_OR_NULL(page)) {
+		if (!page)
+			error = -ENOMEM;
 		else
-			return -EINVAL;
-	} else if (!page) {
-		error = -ENOMEM;
+			error = -EINVAL;
 		goto failed;
 	} else {
 		folio = page_folio(page);
-- 
2.42.0
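The refactor leans on the three-way return convention of
swapin_page_non_fault(); roughly, as a sketch for clarity (not part of the
patch, and the comment summarizes behaviour described in the commit message):

	page = swapin_page_non_fault(swap, gfp, mpol, ilx, fault_mm, &result);
	/*
	 * Three outcomes are possible:
	 *   IS_ERR(page)  - ERR_PTR(-EBUSY), the swap device went away;
	 *   page == NULL  - allocation failed;
	 *   otherwise     - a valid page, possibly straight from the swap cache.
	 * Both failure cases now fall through to the "failed" label, which
	 * re-checks whether the entry was swapped in concurrently, so a race
	 * is no longer reported as a hard error.
	 */
	if (IS_ERR_OR_NULL(page)) {
		error = !page ? -ENOMEM : -EINVAL;
		goto failed;
	}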
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 20/24] swap: simplify and make swap_cache_get_folio static
Date: Mon, 20 Nov 2023 03:47:36 +0800
Message-ID: <20231119194740.94101-21-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

There are only two callers left, both in the same file, so make
swap_cache_get_folio() static and drop the now-redundant NULL check on the
shadowp argument.

Signed-off-by: Kairui Song
---
 mm/swap.h       | 2 --
 mm/swap_state.c | 5 ++---
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index b073c29c9790..4402970547e7 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -46,8 +46,6 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
-struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_fault *vmf, void **shadowp);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
 		pgoff_t index);
 
diff --git a/mm/swap_state.c b/mm/swap_state.c
index eef66757c615..6f39aa8394f1 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -334,15 +334,14 @@ static inline bool swap_use_vma_readahead(struct swap_info_struct *si)
  *
  * Caller must lock the swap device or hold a reference to keep it valid.
  */
-struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_fault *vmf, void **shadowp)
+static struct folio *swap_cache_get_folio(swp_entry_t entry, struct vm_fault *vmf, void **shadowp)
 {
 	bool vma_ra, readahead;
 	struct folio *folio;
 
 	folio = filemap_get_entry(swap_address_space(entry), swp_offset(entry));
 	if (xa_is_value(folio)) {
-		if (shadowp)
-			*shadowp = folio;
+		*shadowp = folio;
 		return NULL;
 	}
 
-- 
2.42.0
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 21/24] swap: make swapin_readahead result checking argument mandatory
Date: Mon, 20 Nov 2023 03:47:37 +0800
Message-ID: <20231119194740.94101-22-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

There is only one caller left, in the page fault path, so make the result
return argument mandatory.

Signed-off-by: Kairui Song
---
 mm/swap_state.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 6f39aa8394f1..0433a2586c6d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -913,7 +913,6 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
 struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 			struct vm_fault *vmf, enum swap_cache_result *result)
 {
-	enum swap_cache_result cache_result;
 	struct swap_info_struct *si;
 	struct mempolicy *mpol;
 	void *shadow = NULL;
@@ -928,29 +927,27 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 
 	folio = swap_cache_get_folio(entry, vmf, &shadow);
 	if (folio) {
+		*result = SWAP_CACHE_HIT;
 		page = folio_file_page(folio, swp_offset(entry));
-		cache_result = SWAP_CACHE_HIT;
 		goto done;
 	}
 
 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
 	if (swap_use_no_readahead(si, swp_offset(entry))) {
+		*result = SWAP_CACHE_BYPASS;
 		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
-		cache_result = SWAP_CACHE_BYPASS;
 		if (shadow)
 			workingset_refault(page_folio(page), shadow);
-	} else if (swap_use_vma_readahead(si)) {
-		page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
-		cache_result = SWAP_CACHE_MISS;
 	} else {
-		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
-		cache_result = SWAP_CACHE_MISS;
+		*result = SWAP_CACHE_MISS;
+		if (swap_use_vma_readahead(si))
+			page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
+		else
+			page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
 	}
 	mpol_cond_put(mpol);
 done:
 	put_swap_device(si);
-	if (result)
-		*result = cache_result;
 
 	return page;
 }
-- 
2.42.0
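The practical effect for the remaining caller is that the result pointer may
no longer be NULL; a small sketch of the expected usage (not part of the
patch):

	enum swap_cache_result result;
	struct page *page;

	/*
	 * The function now stores SWAP_CACHE_HIT, SWAP_CACHE_BYPASS or
	 * SWAP_CACHE_MISS through the pointer unconditionally, so the caller
	 * must always pass a valid location rather than NULL.
	 */
	page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf, &result);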
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 22/24] swap: make swap_cluster_readahead static
Date: Mon, 20 Nov 2023 03:47:38 +0800
Message-ID: <20231119194740.94101-23-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

There are no callers outside this file now, so make it static.

Signed-off-by: Kairui Song
---
 mm/swap.h       | 8 --------
 mm/swap_state.c | 4 ++--
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 4402970547e7..795a25df87da 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -56,8 +56,6 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct mempolicy *mpol, pgoff_t ilx,
 			bool *new_page_allocated);
-struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
-			struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
 			struct vm_fault *vmf, enum swap_cache_result *result);
 struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
@@ -93,12 +91,6 @@ static inline void show_swap_cache_info(void)
 {
 }
 
-static inline struct page *swap_cluster_readahead(swp_entry_t entry,
-			gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx)
-{
-	return NULL;
-}
-
 static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 			struct vm_fault *vmf, enum swap_cache_result *result)
 {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0433a2586c6d..b377e55cb850 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -627,8 +627,8 @@ static unsigned long swapin_nr_pages(unsigned long offset)
 * are used for every page of the readahead: neighbouring pages on swap
 * are fairly likely to have been swapped out from the same node.
 */
-struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
-				struct mempolicy *mpol, pgoff_t ilx)
+static struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
+				struct mempolicy *mpol, pgoff_t ilx)
 {
 	struct page *page;
 	unsigned long entry_offset = swp_offset(entry);
-- 
2.42.0
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 23/24] swap: fix multiple swap leak after cgroup migration
Date: Mon, 20 Nov 2023 03:47:39 +0800
Message-ID: <20231119194740.94101-24-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

When a process that previously swapped out some memory is moved to another
cgroup, and the cgroup it was previously in is dead, the pages it swaps back
in are leaked into the root cgroup. Previous commits fixed this for the
no-readahead path; this commit fixes the same issue for the readahead path.

This can be easily reproduced by:
- Setting up an SSD or HDD swap device.
- Creating memory cgroups A, B and C.
- Spawning a process P1 in cgroup A and making it swap out some pages.
- Moving process P1 to memory cgroup B.
- Destroying cgroup A.
- Doing a swapoff in cgroup C.

The swapped-in pages are accounted into cgroup C. This patch fixes it by
making the swapped-in pages accounted to cgroup B.

Signed-off-by: Kairui Song
---
 mm/swap.h       |  2 +-
 mm/swap_state.c | 19 ++++++++++---------
 mm/zswap.c      |  2 +-
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 795a25df87da..4374bf11ca41 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -55,7 +55,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct swap_iocb **plug);
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct mempolicy *mpol, pgoff_t ilx,
-			bool *new_page_allocated);
+			struct mm_struct *mm, bool *new_page_allocated);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
 			struct vm_fault *vmf, enum swap_cache_result *result);
 struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b377e55cb850..362a6f674b36 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -416,7 +416,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct mempolicy *mpol, pgoff_t ilx,
-			bool *new_page_allocated)
+			struct mm_struct *mm, bool *new_page_allocated)
 {
 	struct swap_info_struct *si;
 	struct folio *folio;
@@ -462,7 +462,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 					mpol, ilx, numa_node_id());
 	if (!folio)
 		goto fail_put_swap;
-	if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
+	if (mem_cgroup_swapin_charge_folio(folio, mm, gfp_mask, entry))
 		goto fail_put_folio;
 
 	/*
@@ -540,7 +540,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 
 	mpol = get_vma_policy(vma, addr, 0, &ilx);
 	page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
-				&page_allocated);
+				vma->vm_mm, &page_allocated);
 	mpol_cond_put(mpol);
 
 	if (page_allocated)
@@ -628,7 +628,8 @@ static unsigned long swapin_nr_pages(unsigned long offset)
 * are fairly likely to have been swapped out from the same node.
 */
 static struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
-				struct mempolicy *mpol, pgoff_t ilx)
+				struct mempolicy *mpol, pgoff_t ilx,
+				struct mm_struct *mm)
 {
 	struct page *page;
 	unsigned long entry_offset = swp_offset(entry);
@@ -657,7 +658,7 @@ static struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		/* Ok, do the async read-ahead now */
 		page = __read_swap_cache_async(
 				swp_entry(swp_type(entry), offset),
-				gfp_mask, mpol, ilx, &page_allocated);
+				gfp_mask, mpol, ilx, mm, &page_allocated);
 		if (!page)
 			continue;
 		if (page_allocated) {
@@ -675,7 +676,7 @@ static struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 skip:
 	/* The page was likely read above, so no need for plugging here */
 	page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
-				&page_allocated);
+				mm, &page_allocated);
 	if (unlikely(page_allocated))
 		swap_readpage(page, false, NULL);
 	return page;
@@ -830,7 +831,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 		pte_unmap(pte);
 		pte = NULL;
 		page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
-				&page_allocated);
+				vmf->vma->vm_mm, &page_allocated);
 		if (!page)
 			continue;
 		if (page_allocated) {
@@ -850,7 +851,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 skip:
 	/* The page was likely read above, so no need for plugging here */
 	page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
-				&page_allocated);
+				vmf->vma->vm_mm, &page_allocated);
 	if (unlikely(page_allocated))
 		swap_readpage(page, false, NULL);
 	return page;
@@ -980,7 +981,7 @@ struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
 		workingset_refault(page_folio(page), shadow);
 		cache_result = SWAP_CACHE_BYPASS;
 	} else {
-		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx, mm);
 		cache_result = SWAP_CACHE_MISS;
 	}
 done:
diff --git a/mm/zswap.c b/mm/zswap.c
index 030cc137138f..e2712ff169b1 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1081,7 +1081,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	/* try to allocate swap cache page */
 	mpol = get_task_policy(current);
 	page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
-			NO_INTERLEAVE_INDEX, &page_was_allocated);
+			NO_INTERLEAVE_INDEX, NULL, &page_was_allocated);
 	if (!page) {
 		ret = -ENOMEM;
 		goto fail;
-- 
2.42.0
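The fix works by threading the mm that owns the swapped-out memory down to the
swap cache allocation, so the memcg charge can fall back to that mm's cgroup
rather than the root cgroup when the originally charged cgroup is gone. A
sketch of the resulting call pattern (not part of the patch; the comment
summarizes the behaviour described in the commit message):

	/*
	 * __read_swap_cache_async() now receives the mm whose memory is being
	 * swapped in and charges the freshly allocated folio against it when
	 * the cgroup recorded for the swap entry is already dead.  Callers
	 * without a meaningful mm, such as the zswap writeback path, pass NULL.
	 */
	page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
				       vma->vm_mm, &page_allocated);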
From nobody Thu Dec 18 18:00:49 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 24/24] mm/swap: change swapin_readahead to swapin_page_fault
Date: Mon, 20 Nov 2023 03:47:40 +0800
Message-ID: <20231119194740.94101-25-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>

From: Kairui Song

Now that swapin_readahead() is only called from the direct page fault path,
rename it to swapin_page_fault() and use GFP_HIGHUSER_MOVABLE directly for
the internal swapin calls, since the only caller always uses the same flag
for userspace page faults.

Signed-off-by: Kairui Song
---
 mm/memory.c     |  4 ++--
 mm/swap.h       |  6 +++---
 mm/swap_state.c | 15 +++++++++------
 3 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 620fa87557fd..4907a5b1b75b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3844,8 +3844,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out;
 	}
 
-	page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-				vmf, &cache_result);
+	page = swapin_page_fault(entry, GFP_HIGHUSER_MOVABLE,
+				vmf, &cache_result);
 	if (IS_ERR_OR_NULL(page)) {
 		/*
 		 * Back out if somebody else faulted in this pte
diff --git a/mm/swap.h b/mm/swap.h
index 4374bf11ca41..2f8f8befff89 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -56,8 +56,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct mempolicy *mpol, pgoff_t ilx,
 			struct mm_struct *mm, bool *new_page_allocated);
-struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
-			struct vm_fault *vmf, enum swap_cache_result *result);
+struct page *swapin_page_fault(swp_entry_t entry, gfp_t flag,
+			struct vm_fault *vmf, enum swap_cache_result *result);
 struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
 			struct mempolicy *mpol, pgoff_t ilx,
 			struct mm_struct *mm,
@@ -91,7 +91,7 @@ static inline void show_swap_cache_info(void)
 {
 }
 
-static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
+static inline struct page *swapin_page_fault(swp_entry_t swp, gfp_t gfp_mask,
 			struct vm_fault *vmf, enum swap_cache_result *result)
 {
 	return NULL;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 362a6f674b36..2f51d2e64e59 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -899,7 +899,7 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
 }
 
 /**
- * swapin_readahead - swap in pages in hope we need them soon
+ * swapin_page_fault - swap in a page from page fault context
 * @entry: swap entry of this memory
 * @gfp_mask: memory allocation flags
 * @vmf: fault information
@@ -911,8 +911,8 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
 * it will read ahead blocks by cluster-based(ie, physical disk based)
 * or vma-based(ie, virtual address based on faulty address) readahead.
 */
-struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
-			struct vm_fault *vmf, enum swap_cache_result *result)
+struct page *swapin_page_fault(swp_entry_t entry, gfp_t gfp_mask,
+			struct vm_fault *vmf, enum swap_cache_result *result)
 {
 	struct swap_info_struct *si;
 	struct mempolicy *mpol;
@@ -936,15 +936,18 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
 	if (swap_use_no_readahead(si, swp_offset(entry))) {
 		*result = SWAP_CACHE_BYPASS;
-		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
+		page = swapin_no_readahead(entry, GFP_HIGHUSER_MOVABLE,
+					mpol, ilx, vmf->vma->vm_mm);
 		if (shadow)
 			workingset_refault(page_folio(page), shadow);
 	} else {
 		*result = SWAP_CACHE_MISS;
 		if (swap_use_vma_readahead(si))
-			page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
+			page = swap_vma_readahead(entry, GFP_HIGHUSER_MOVABLE,
+					mpol, ilx, vmf);
 		else
-			page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+			page = swap_cluster_readahead(entry, GFP_HIGHUSER_MOVABLE,
+					mpol, ilx, vmf->vma->vm_mm);
 	}
 	mpol_cond_put(mpol);
 done:
-- 
2.42.0