From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    chrisl@kernel.org, ying.huang@intel.com, 21cnbao@gmail.com,
    ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com,
    ioworker0@gmail.com, da.gomez@samsung.com, p.raghav@samsung.com,
    baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/9] mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM flag setting
Date: Tue, 18 Jun 2024 14:54:14 +0800
Message-Id: <4909c4bf2f28665c6a35cd7e77990c5defc3d7fb.1718690645.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.39.3

To support shmem large folio swap operations, add a new parameter to
swap_shmem_alloc() that allows batch SWAP_MAP_SHMEM flag setting for
shmem swap entries.

While we are at it, use folio_nr_pages() to get the number of pages of
the folio, as a preparation.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/swap.h |  4 +-
 mm/shmem.c           |  6 ++-
 mm/swapfile.c        | 98 +++++++++++++++++++++++---------------------
 3 files changed, 57 insertions(+), 51 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index d33ce740b695..bffb2281840d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -474,7 +474,7 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry);
 extern swp_entry_t get_swap_page_of_type(int);
 extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
 extern int add_swap_count_continuation(swp_entry_t, gfp_t);
-extern void swap_shmem_alloc(swp_entry_t);
+extern void swap_shmem_alloc(swp_entry_t, int);
 extern int swap_duplicate(swp_entry_t);
 extern int swapcache_prepare(swp_entry_t);
 extern void swap_free_nr(swp_entry_t entry, int nr_pages);
@@ -541,7 +541,7 @@ static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
 	return 0;
 }
 
-static inline void swap_shmem_alloc(swp_entry_t swp)
+static inline void swap_shmem_alloc(swp_entry_t swp, int nr)
 {
 }
 
diff --git a/mm/shmem.c b/mm/shmem.c
index d09c6bf1f28a..b90965486631 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1432,6 +1432,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
 	swp_entry_t swap;
 	pgoff_t index;
+	int nr_pages;
 
 	/*
 	 * Our capabilities prevent regular writeback or sync from ever calling
@@ -1464,6 +1465,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	}
 
 	index = folio->index;
+	nr_pages = folio_nr_pages(folio);
 
 	/*
 	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
@@ -1516,8 +1518,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	if (add_to_swap_cache(folio, swap,
 			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
 			NULL) == 0) {
-		shmem_recalc_inode(inode, 0, 1);
-		swap_shmem_alloc(swap);
+		shmem_recalc_inode(inode, 0, nr_pages);
+		swap_shmem_alloc(swap, nr_pages);
 		shmem_delete_from_page_cache(folio, swp_to_radix_entry(swap));
 
 		mutex_unlock(&shmem_swaplist_mutex);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 9c6d8e557c0f..1dde413264e2 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3362,62 +3362,58 @@ void si_swapinfo(struct sysinfo *val)
  * - swap-cache reference is requested but the entry is not used. -> ENOENT
  * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
  */
-static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+static int __swap_duplicate(struct swap_info_struct *p, unsigned long offset,
+			    int nr, unsigned char usage)
 {
-	struct swap_info_struct *p;
 	struct swap_cluster_info *ci;
-	unsigned long offset;
 	unsigned char count;
 	unsigned char has_cache;
-	int err;
+	int err, i;
 
-	p = swp_swap_info(entry);
-
-	offset = swp_offset(entry);
 	ci = lock_cluster_or_swap_info(p, offset);
 
-	count = p->swap_map[offset];
-
-	/*
-	 * swapin_readahead() doesn't check if a swap entry is valid, so the
-	 * swap entry could be SWAP_MAP_BAD. Check here with lock held.
-	 */
-	if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
-		err = -ENOENT;
-		goto unlock_out;
-	}
-
-	has_cache = count & SWAP_HAS_CACHE;
-	count &= ~SWAP_HAS_CACHE;
-	err = 0;
-
-	if (usage == SWAP_HAS_CACHE) {
+	for (i = 0; i < nr; i++) {
+		count = p->swap_map[offset + i];
 
-		/* set SWAP_HAS_CACHE if there is no cache and entry is used */
-		if (!has_cache && count)
-			has_cache = SWAP_HAS_CACHE;
-		else if (has_cache)		/* someone else added cache */
-			err = -EEXIST;
-		else				/* no users remaining */
+		/*
+		 * swapin_readahead() doesn't check if a swap entry is valid, so the
+		 * swap entry could be SWAP_MAP_BAD. Check here with lock held.
+		 */
+		if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
 			err = -ENOENT;
+			break;
+		}
 
-	} else if (count || has_cache) {
+		has_cache = count & SWAP_HAS_CACHE;
+		count &= ~SWAP_HAS_CACHE;
+		err = 0;
+
+		if (usage == SWAP_HAS_CACHE) {
+			/* set SWAP_HAS_CACHE if there is no cache and entry is used */
+			if (!has_cache && count)
+				has_cache = SWAP_HAS_CACHE;
+			else if (has_cache)	/* someone else added cache */
+				err = -EEXIST;
+			else			/* no users remaining */
+				err = -ENOENT;
+		} else if (count || has_cache) {
+			if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
+				count += usage;
+			else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX)
+				err = -EINVAL;
+			else if (swap_count_continued(p, offset + i, count))
+				count = COUNT_CONTINUED;
+			else
+				err = -ENOMEM;
+		} else
+			err = -ENOENT;		/* unused swap entry */
 
-		if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
-			count += usage;
-		else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX)
-			err = -EINVAL;
-		else if (swap_count_continued(p, offset, count))
-			count = COUNT_CONTINUED;
-		else
-			err = -ENOMEM;
-	} else
-		err = -ENOENT;			/* unused swap entry */
+		if (err)
+			break;
 
-	if (!err)
-		WRITE_ONCE(p->swap_map[offset], count | has_cache);
+		WRITE_ONCE(p->swap_map[offset + i], count | has_cache);
+	}
 
-unlock_out:
 	unlock_cluster_or_swap_info(p, ci);
 	return err;
 }
@@ -3426,9 +3422,12 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
  * Help swapoff by noting that swap entry belongs to shmem/tmpfs
  * (in which case its reference count is never incremented).
  */
-void swap_shmem_alloc(swp_entry_t entry)
+void swap_shmem_alloc(swp_entry_t entry, int nr)
 {
-	__swap_duplicate(entry, SWAP_MAP_SHMEM);
+	struct swap_info_struct *p = swp_swap_info(entry);
+	unsigned long offset = swp_offset(entry);
+
+	__swap_duplicate(p, offset, nr, SWAP_MAP_SHMEM);
 }
 
 /*
@@ -3440,9 +3439,11 @@ void swap_shmem_alloc(swp_entry_t entry)
 */
 int swap_duplicate(swp_entry_t entry)
 {
+	struct swap_info_struct *p = swp_swap_info(entry);
+	unsigned long offset = swp_offset(entry);
 	int err = 0;
 
-	while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
+	while (!err && __swap_duplicate(p, offset, 1, 1) == -ENOMEM)
 		err = add_swap_count_continuation(entry, GFP_ATOMIC);
 	return err;
 }
@@ -3457,7 +3458,10 @@ int swap_duplicate(swp_entry_t entry)
 */
 int swapcache_prepare(swp_entry_t entry)
 {
-	return __swap_duplicate(entry, SWAP_HAS_CACHE);
+	struct swap_info_struct *p = swp_swap_info(entry);
+	unsigned long offset = swp_offset(entry);
+
+	return __swap_duplicate(p, offset, 1, SWAP_HAS_CACHE);
 }
 
 void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
-- 
2.39.3