From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
	Johannes Weiner, Matthew Wilcox, Michal Hocko,
	linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 18/24] mm/swap: introduce a helper for non-fault swapin
Date: Mon, 20 Nov 2023 03:47:34 +0800
Message-ID: <20231119194740.94101-19-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Kairui Song

There are two places where swapin is not directly caused by a page
fault: shmem swapin is invoked through the shmem mapping, and swapoff
causes swapin by walking the page table. Both used to construct a
pseudo vm_fault struct for the swapin functions.

Shmem recently dropped the pseudo vm_fault in commit ddc1a5cbc05d
("mempolicy: alloc_pages_mpol() for NUMA policy without vma"), but the
swapoff path is still using one.

Introduce a helper for both callers. This saves stack usage on the
swapoff path and applies a unified swap cache and readahead policy
check. It also prepares for follow-up commits.

Signed-off-by: Kairui Song
---
 mm/shmem.c      | 51 ++++++++++++++++---------------------------
 mm/swap.h       | 11 +++++++++++
 mm/swap_state.c | 38 ++++++++++++++++++++++++++++++++++++
 mm/swapfile.c   | 23 +++++++++++-----------
 4 files changed, 76 insertions(+), 47 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index f9ce4067c742..81d129aa66d1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1565,22 +1565,6 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
 static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
 			pgoff_t index, unsigned int order, pgoff_t *ilx);
 
-static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
-		struct shmem_inode_info *info, pgoff_t index)
-{
-	struct mempolicy *mpol;
-	pgoff_t ilx;
-	struct page *page;
-
-	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
-	page = swap_cluster_readahead(swap, gfp, mpol, ilx);
-	mpol_cond_put(mpol);
-
-	if (!page)
-		return NULL;
-	return page_folio(page);
-}
-
 /*
  * Make sure huge_gfp is always more limited than limit_gfp.
  * Some of the flags set permissions, while others set limitations.
@@ -1854,9 +1838,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct swap_info_struct *si;
+	enum swap_cache_result result;
 	struct folio *folio = NULL;
+	struct mempolicy *mpol;
+	struct page *page;
 	swp_entry_t swap;
+	pgoff_t ilx;
 	int error;
 
 	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
@@ -1866,34 +1853,30 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (is_poisoned_swp_entry(swap))
 		return -EIO;
 
-	si = get_swap_device(swap);
-	if (!si) {
+	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
+	page = swapin_page_non_fault(swap, gfp, mpol, ilx, fault_mm, &result);
+	mpol_cond_put(mpol);
+
+	if (PTR_ERR(page) == -EBUSY) {
 		if (!shmem_confirm_swap(mapping, index, swap))
 			return -EEXIST;
 		else
 			return -EINVAL;
-	}
-
-	/* Look it up and read it in.. */
-	folio = swap_cache_get_folio(swap, NULL, NULL);
-	if (!folio) {
-		/* Or update major stats only when swapin succeeds?? */
-		if (fault_type) {
+	} else if (!page) {
+		error = -ENOMEM;
+		goto failed;
+	} else {
+		folio = page_folio(page);
+		if (fault_type && result != SWAP_CACHE_HIT) {
 			*fault_type |= VM_FAULT_MAJOR;
 			count_vm_event(PGMAJFAULT);
 			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
-		/* Here we actually start the io */
-		folio = shmem_swapin_cluster(swap, gfp, info, index);
-		if (!folio) {
-			error = -ENOMEM;
-			goto failed;
-		}
 	}
 
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
-	if (!folio_test_swapcache(folio) ||
+	if ((result != SWAP_CACHE_BYPASS && !folio_test_swapcache(folio)) ||
 	    folio->swap.val != swap.val ||
 	    !shmem_confirm_swap(mapping, index, swap)) {
 		error = -EEXIST;
@@ -1930,7 +1913,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	delete_from_swap_cache(folio);
 	folio_mark_dirty(folio);
 	swap_free(swap);
-	put_swap_device(si);
 
 	*foliop = folio;
 	return 0;
@@ -1944,7 +1926,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		folio_unlock(folio);
 		folio_put(folio);
 	}
-	put_swap_device(si);
 
 	return error;
 }
diff --git a/mm/swap.h b/mm/swap.h
index da9deb5ba37d..b073c29c9790 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -62,6 +62,10 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 				    struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
 			      struct vm_fault *vmf, enum swap_cache_result *result);
+struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+				   struct mempolicy *mpol, pgoff_t ilx,
+				   struct mm_struct *mm,
+				   enum swap_cache_result *result);
 
 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -103,6 +107,13 @@ static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 	return NULL;
 }
 
+static inline struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+			struct mempolicy *mpol, pgoff_t ilx, struct mm_struct *mm,
+			enum swap_cache_result *result)
+{
+	return NULL;
+}
+
 static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 {
 	return 0;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ff8a166603d0..eef66757c615 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -956,6 +956,44 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	return page;
 }
 
+struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+				   struct mempolicy *mpol, pgoff_t ilx,
+				   struct mm_struct *mm, enum swap_cache_result *result)
+{
+	enum swap_cache_result cache_result;
+	struct swap_info_struct *si;
+	void *shadow = NULL;
+	struct folio *folio;
+	struct page *page;
+
+	/* Prevent swapoff from happening to us */
+	si = get_swap_device(entry);
+	if (unlikely(!si))
+		return ERR_PTR(-EBUSY);
+
+	folio = swap_cache_get_folio(entry, NULL, &shadow);
+	if (folio) {
+		page = folio_file_page(folio, swp_offset(entry));
+		cache_result = SWAP_CACHE_HIT;
+		goto done;
+	}
+
+	if (swap_use_no_readahead(si, swp_offset(entry))) {
+		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, mm);
+		if (shadow)
+			workingset_refault(page_folio(page), shadow);
+		cache_result = SWAP_CACHE_BYPASS;
+	} else {
+		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+		cache_result = SWAP_CACHE_MISS;
+	}
+done:
+	put_swap_device(si);
+	if (result)
+		*result = cache_result;
+	return page;
+}
+
 #ifdef CONFIG_SYSFS
 static ssize_t vma_ra_enabled_show(struct kobject *kobj,
 				     struct kobj_attribute *attr, char *buf)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 925ad92486a4..f8c5096fe0f0 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1822,20 +1822,15 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 	si = swap_info[type];
 	do {
+		int ret;
+		pte_t ptent;
+		pgoff_t ilx;
+		swp_entry_t entry;
 		struct page *page;
 		unsigned long offset;
+		struct mempolicy *mpol;
 		unsigned char swp_count;
 		struct folio *folio = NULL;
-		swp_entry_t entry;
-		int ret;
-		pte_t ptent;
-
-		struct vm_fault vmf = {
-			.vma = vma,
-			.address = addr,
-			.real_address = addr,
-			.pmd = pmd,
-		};
 
 		if (!pte++) {
 			pte = pte_offset_map(pmd, addr);
@@ -1855,8 +1850,12 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		offset = swp_offset(entry);
 		pte_unmap(pte);
 		pte = NULL;
-		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-					&vmf, NULL);
+
+		mpol = get_vma_policy(vma, addr, 0, &ilx);
+		page = swapin_page_non_fault(entry, GFP_HIGHUSER_MOVABLE,
+					     mpol, ilx, vma->vm_mm, NULL);
+		mpol_cond_put(mpol);
+
		if (IS_ERR(page))
			return PTR_ERR(page);
		else if (page)
-- 
2.42.0
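A rough usage sketch for reviewers: a hypothetical third caller of the
new helper would look much like the swapoff conversion above. The
example_swapin() function below is not part of this series; it is only
an illustration that assumes the swapin_page_non_fault() signature and
the swap_cache_result values introduced here, plus the existing
get_vma_policy()/mpol_cond_put() helpers.

static struct page *example_swapin(swp_entry_t entry, gfp_t gfp,
				   struct vm_area_struct *vma,
				   unsigned long addr)
{
	enum swap_cache_result result;
	struct mempolicy *mpol;
	struct page *page;
	pgoff_t ilx;

	/* Resolve the NUMA policy for this address, as swapoff now does. */
	mpol = get_vma_policy(vma, addr, 0, &ilx);
	page = swapin_page_non_fault(entry, gfp, mpol, ilx, vma->vm_mm, &result);
	mpol_cond_put(mpol);

	if (IS_ERR(page))		/* -EBUSY: swapoff raced with the lookup */
		return NULL;
	if (!page)			/* allocation failed */
		return NULL;
	if (result != SWAP_CACHE_HIT)	/* real swapin I/O was issued */
		count_vm_event(PGMAJFAULT);

	return page;
}

The three outcomes mirror the shmem conversion: ERR_PTR(-EBUSY) means
get_swap_device() failed because the swap device is going away, NULL
means the allocation failed, and any result other than SWAP_CACHE_HIT
means the page was actually read in, so it may be accounted as a major
fault.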