From nobody Thu Dec 18 14:35:19 2025
From: Chengming Zhou
Date: Wed, 13 Dec 2023 04:17:58 +0000
Subject: [PATCH 1/5] mm/zswap: reuse dstmem when decompress
Message-Id: <20231213-zswap-dstmem-v1-1-896763369d04@bytedance.com>
References: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
In-Reply-To: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
To: Andrew Morton, Nhat Pham, Chris Li, Johannes Weiner, Seth Jennings,
    Dan Streetman, Vitaly Wool, Yosry Ahmed
Cc: Nhat Pham, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chengming Zhou

In the !zpool_can_sleep_mapped() case, such as zsmalloc, we need to first
copy the entry->handle memory into a temporary buffer, which is allocated
with kmalloc on every load. Obviously we can reuse the per-compressor
dstmem to avoid allocating every time, since it is per-CPU for each
compressor and protected by a mutex.
Signed-off-by: Chengming Zhou
Reviewed-by: Nhat Pham
Acked-by: Chris Li
---
 mm/zswap.c | 29 +++++++++--------------------
 1 file changed, 9 insertions(+), 20 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 7ee54a3d8281..edb8b45ed5a1 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1772,9 +1772,9 @@ bool zswap_load(struct folio *folio)
 	struct zswap_entry *entry;
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
-	u8 *src, *dst, *tmp;
+	unsigned int dlen = PAGE_SIZE;
+	u8 *src, *dst;
 	struct zpool *zpool;
-	unsigned int dlen;
 	bool ret;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
@@ -1796,27 +1796,18 @@ bool zswap_load(struct folio *folio)
 		goto stats;
 	}
 
-	zpool = zswap_find_zpool(entry);
-	if (!zpool_can_sleep_mapped(zpool)) {
-		tmp = kmalloc(entry->length, GFP_KERNEL);
-		if (!tmp) {
-			ret = false;
-			goto freeentry;
-		}
-	}
-
 	/* decompress */
-	dlen = PAGE_SIZE;
-	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
+	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+	mutex_lock(acomp_ctx->mutex);
 
+	zpool = zswap_find_zpool(entry);
+	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
 	if (!zpool_can_sleep_mapped(zpool)) {
-		memcpy(tmp, src, entry->length);
-		src = tmp;
+		memcpy(acomp_ctx->dstmem, src, entry->length);
+		src = acomp_ctx->dstmem;
 		zpool_unmap_handle(zpool, entry->handle);
 	}
 
-	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
-	mutex_lock(acomp_ctx->mutex);
 	sg_init_one(&input, src, entry->length);
 	sg_init_table(&output, 1);
 	sg_set_page(&output, page, PAGE_SIZE, 0);
@@ -1827,15 +1818,13 @@
 
 	if (zpool_can_sleep_mapped(zpool))
 		zpool_unmap_handle(zpool, entry->handle);
-	else
-		kfree(tmp);
 
 	ret = true;
 stats:
 	count_vm_event(ZSWPIN);
 	if (entry->objcg)
 		count_objcg_event(entry->objcg, ZSWPIN);
-freeentry:
+
 	spin_lock(&tree->lock);
 	if (ret && zswap_exclusive_loads_enabled) {
 		zswap_invalidate_entry(tree, entry);
-- 
b4 0.10.1
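To make the buffer-reuse idea in the patch above concrete for readers outside mm/, here is a minimal user-space sketch, not kernel code: a per-context scratch buffer allocated once and serialized by a mutex replaces a malloc/free pair on every load. The struct ctx, ctx_init and load_entry names, and the trivial stand-in for decompression, are all illustrative assumptions, not part of the patch.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Analogy for struct crypto_acomp_ctx: one scratch buffer per context,
 * allocated once at setup time and serialized by a mutex. */
struct ctx {
	pthread_mutex_t lock;
	unsigned char *dstmem;	/* preallocated scratch buffer */
};

static int ctx_init(struct ctx *c)
{
	pthread_mutex_init(&c->lock, NULL);
	c->dstmem = malloc(PAGE_SIZE);
	return c->dstmem ? 0 : -1;
}

/* Analogy for the !zpool_can_sleep_mapped() path: instead of a fresh
 * malloc per load, copy the backing data into the shared scratch buffer
 * while holding the mutex, then "decompress" from there. */
static void load_entry(struct ctx *c, const unsigned char *src, size_t len,
		       unsigned char *page_out)
{
	pthread_mutex_lock(&c->lock);
	memcpy(c->dstmem, src, len);		/* stand-in for the zpool copy */
	memcpy(page_out, c->dstmem, len);	/* stand-in for decompression */
	pthread_mutex_unlock(&c->lock);
}

int main(void)
{
	struct ctx c;
	unsigned char compressed[64] = "pretend-compressed-data";
	unsigned char page[PAGE_SIZE];

	if (ctx_init(&c))
		return 1;
	load_entry(&c, compressed, sizeof(compressed), page);
	printf("%s\n", (char *)page);
	free(c.dstmem);
	return 0;
}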
From nobody Thu Dec 18 14:35:19 2025
From: Chengming Zhou
Date: Wed, 13 Dec 2023 04:17:59 +0000
Subject: [PATCH 2/5] mm/zswap: change dstmem size to one page
Message-Id: <20231213-zswap-dstmem-v1-2-896763369d04@bytedance.com>
References: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
In-Reply-To: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
To: Andrew Morton, Nhat Pham, Chris Li, Johannes Weiner, Seth Jennings,
    Dan Streetman, Vitaly Wool, Yosry Ahmed
Cc: Nhat Pham, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chengming Zhou

Change the dstmem size from 2 * PAGE_SIZE to one page, since we need at
most one page when compressing, and the "dlen" passed to
acomp_request_set_params() is also PAGE_SIZE. If the output size is larger
than PAGE_SIZE we don't want to store it in zswap anyway. So change the
buffer to one page and delete the stale comment.

Signed-off-by: Chengming Zhou
---
 mm/zswap.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index edb8b45ed5a1..fa186945010d 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -707,7 +707,7 @@ static int zswap_dstmem_prepare(unsigned int cpu)
 	struct mutex *mutex;
 	u8 *dst;
 
-	dst = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu));
+	dst = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
 	if (!dst)
 		return -ENOMEM;
 
@@ -1662,8 +1662,7 @@ bool zswap_store(struct folio *folio)
 	sg_init_table(&input, 1);
 	sg_set_page(&input, page, PAGE_SIZE, 0);
 
-	/* zswap_dstmem is of size (PAGE_SIZE * 2). Reflect same in sg_list */
-	sg_init_one(&output, dst, PAGE_SIZE * 2);
+	sg_init_one(&output, dst, PAGE_SIZE);
 	acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen);
 	/*
 	 * it maybe looks a little bit silly that we send an asynchronous request,
-- 
b4 0.10.1
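To illustrate why one page of destination memory is enough once oversized output is rejected, here is a hedged user-space sketch in plain C. The compress_bounded helper and its "incompressible data" behaviour are made-up stand-ins for the real acomp step; the only point is that the destination buffer is exactly one page and anything that would not fit is treated as "do not store" rather than written past the page.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/*
 * Toy stand-in for the compress step: "compress" src into dst, but
 * refuse to produce more than dst_len bytes.  Returns the output length
 * on success, or -1 if the result would not fit in one page.
 */
static int compress_bounded(const unsigned char *src, size_t src_len,
			    unsigned char *dst, size_t dst_len)
{
	/* pretend the data is incompressible: output == input */
	if (src_len > dst_len)
		return -1;	/* caller will not store this entry */
	memcpy(dst, src, src_len);
	return (int)src_len;
}

int main(void)
{
	static unsigned char page_in[PAGE_SIZE];
	static unsigned char dstmem[PAGE_SIZE];	/* one page, not two */
	int dlen = compress_bounded(page_in, sizeof(page_in),
				    dstmem, sizeof(dstmem));

	if (dlen < 0)
		printf("reject: output larger than one page, do not store\n");
	else
		printf("store %d compressed bytes\n", dlen);
	return 0;
}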
From nobody Thu Dec 18 14:35:19 2025
From: Chengming Zhou
Date: Wed, 13 Dec 2023 04:18:00 +0000
Subject: [PATCH 3/5] mm/zswap: refactor out __zswap_load()
Message-Id: <20231213-zswap-dstmem-v1-3-896763369d04@bytedance.com>
References: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
In-Reply-To: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
To: Andrew Morton, Nhat Pham, Chris Li, Johannes Weiner, Seth Jennings,
    Dan Streetman, Vitaly Wool, Yosry Ahmed
Cc: Nhat Pham, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chengming Zhou

zswap_load() and zswap_writeback_entry() contain the same code that
decompresses the data from a zswap_entry into a page, so refactor the
common part out into __zswap_load(entry, page).

Signed-off-by: Chengming Zhou
Reviewed-by: Nhat Pham
Reviewed-by: Yosry Ahmed
---
 mm/zswap.c | 107 ++++++++++++++++++++++--------------------------------------
 1 file changed, 38 insertions(+), 69 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index fa186945010d..2f095c919a5c 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1392,6 +1392,41 @@ static int zswap_enabled_param_set(const char *val,
 	return ret;
 }
 
+static void __zswap_load(struct zswap_entry *entry, struct page *page)
+{
+	struct scatterlist input, output;
+	unsigned int dlen = PAGE_SIZE;
+	struct crypto_acomp_ctx *acomp_ctx;
+	struct zpool *zpool;
+	u8 *src;
+	int ret;
+
+	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+	mutex_lock(acomp_ctx->mutex);
+
+	zpool = zswap_find_zpool(entry);
+	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
+	if (!zpool_can_sleep_mapped(zpool)) {
+		memcpy(acomp_ctx->dstmem, src, entry->length);
+		src = acomp_ctx->dstmem;
+		zpool_unmap_handle(zpool, entry->handle);
+	}
+
+	sg_init_one(&input, src, entry->length);
+	sg_init_table(&output, 1);
+	sg_set_page(&output, page, PAGE_SIZE, 0);
+	acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, dlen);
+	ret = crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait);
+	dlen = acomp_ctx->req->dlen;
+	mutex_unlock(acomp_ctx->mutex);
+
+	if (zpool_can_sleep_mapped(zpool))
+		zpool_unmap_handle(zpool, entry->handle);
+
+	BUG_ON(ret);
+	BUG_ON(dlen != PAGE_SIZE);
+}
+
 /*********************************
 * writeback code
 **********************************/
@@ -1413,23 +1448,12 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	swp_entry_t swpentry = entry->swpentry;
 	struct page *page;
 	struct mempolicy *mpol;
-	struct scatterlist input, output;
-	struct crypto_acomp_ctx *acomp_ctx;
-	struct zpool *pool = zswap_find_zpool(entry);
 	bool page_was_allocated;
-	u8 *src, *tmp = NULL;
-	unsigned int dlen;
 	int ret;
 	struct writeback_control wbc = {
 		.sync_mode = WB_SYNC_NONE,
 	};
 
-	if (!zpool_can_sleep_mapped(pool)) {
-		tmp = kmalloc(PAGE_SIZE, GFP_KERNEL);
-		if (!tmp)
-			return -ENOMEM;
-	}
-
 	/* try to allocate swap cache page */
 	mpol = get_task_policy(current);
 	page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
@@ -1462,33 +1486,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	}
 	spin_unlock(&tree->lock);
 
-	/* decompress */
-	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
-	dlen = PAGE_SIZE;
-
-	src = zpool_map_handle(pool, entry->handle, ZPOOL_MM_RO);
-	if (!zpool_can_sleep_mapped(pool)) {
-		memcpy(tmp, src, entry->length);
-		src = tmp;
-		zpool_unmap_handle(pool, entry->handle);
-	}
-
-	mutex_lock(acomp_ctx->mutex);
-	sg_init_one(&input, src, entry->length);
-	sg_init_table(&output, 1);
-	sg_set_page(&output, page, PAGE_SIZE, 0);
-	acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, dlen);
-	ret = crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait);
-	dlen = acomp_ctx->req->dlen;
-	mutex_unlock(acomp_ctx->mutex);
-
-	if (!zpool_can_sleep_mapped(pool))
-		kfree(tmp);
-	else
-		zpool_unmap_handle(pool, entry->handle);
-
-	BUG_ON(ret);
-	BUG_ON(dlen != PAGE_SIZE);
+	__zswap_load(entry, page);
 
 	/* page is up to date */
 	SetPageUptodate(page);
@@ -1508,9 +1506,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	return ret;
 
 fail:
-	if (!zpool_can_sleep_mapped(pool))
-		kfree(tmp);
-
 	/*
 	 * If we get here because the page is already in swapcache, a
 	 * load may be happening concurrently. It is safe and okay to
@@ -1769,11 +1764,7 @@ bool zswap_load(struct folio *folio)
 	struct page *page = &folio->page;
 	struct zswap_tree *tree = zswap_trees[type];
 	struct zswap_entry *entry;
-	struct scatterlist input, output;
-	struct crypto_acomp_ctx *acomp_ctx;
-	unsigned int dlen = PAGE_SIZE;
-	u8 *src, *dst;
-	struct zpool *zpool;
+	u8 *dst;
 	bool ret;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
@@ -1795,29 +1786,7 @@ bool zswap_load(struct folio *folio)
 		goto stats;
 	}
 
-	/* decompress */
-	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
-	mutex_lock(acomp_ctx->mutex);
-
-	zpool = zswap_find_zpool(entry);
-	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
-	if (!zpool_can_sleep_mapped(zpool)) {
-		memcpy(acomp_ctx->dstmem, src, entry->length);
-		src = acomp_ctx->dstmem;
-		zpool_unmap_handle(zpool, entry->handle);
-	}
-
-	sg_init_one(&input, src, entry->length);
-	sg_init_table(&output, 1);
-	sg_set_page(&output, page, PAGE_SIZE, 0);
-	acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, dlen);
-	if (crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait))
-		WARN_ON(1);
-	mutex_unlock(acomp_ctx->mutex);
-
-	if (zpool_can_sleep_mapped(zpool))
-		zpool_unmap_handle(zpool, entry->handle);
-
+	__zswap_load(entry, page);
 	ret = true;
 stats:
 	count_vm_event(ZSWPIN);
-- 
b4 0.10.1
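As a rough user-space illustration of the refactor above, not kernel code, the sketch below shows the same shape: two callers that used to duplicate the decompress logic now delegate to a single helper. The decompress_to_page, load_path and writeback_path names and the memcpy stand-in for real decompression are assumptions made for the example only.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

struct entry {
	const unsigned char *data;
	size_t len;
};

/* Shared helper, analogous to __zswap_load(): one place that knows how
 * to turn a stored entry back into a full page. */
static void decompress_to_page(const struct entry *e, unsigned char *page)
{
	memset(page, 0, PAGE_SIZE);
	memcpy(page, e->data, e->len);	/* stand-in for the real decompress */
}

/* Analogous to zswap_load(): fill the page being faulted in. */
static void load_path(const struct entry *e, unsigned char *page)
{
	decompress_to_page(e, page);
}

/* Analogous to zswap_writeback_entry(): fill a page before writing it
 * back to the swap device. */
static void writeback_path(const struct entry *e, unsigned char *page)
{
	decompress_to_page(e, page);
}

int main(void)
{
	struct entry e = { (const unsigned char *)"hello", 6 };
	unsigned char a[PAGE_SIZE], b[PAGE_SIZE];

	load_path(&e, a);
	writeback_path(&e, b);
	printf("%s %s\n", (char *)a, (char *)b);
	return 0;
}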
From nobody Thu Dec 18 14:35:19 2025
From: Chengming Zhou
Date: Wed, 13 Dec 2023 04:18:01 +0000
Subject: [PATCH 4/5] mm/zswap: cleanup zswap_load()
Message-Id: <20231213-zswap-dstmem-v1-4-896763369d04@bytedance.com>
References: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
In-Reply-To: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
To: Andrew Morton, Nhat Pham, Chris Li, Johannes Weiner, Seth Jennings,
    Dan Streetman, Vitaly Wool, Yosry Ahmed
Cc: Nhat Pham, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chengming Zhou

After the common decompress part goes to __zswap_load(), we can clean up
zswap_load() a little.

Signed-off-by: Chengming Zhou
Reviewed-by: Yosry Ahmed
---
 mm/zswap.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 2f095c919a5c..0476e1c553c2 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1765,7 +1765,6 @@ bool zswap_load(struct folio *folio)
 	struct zswap_tree *tree = zswap_trees[type];
 	struct zswap_entry *entry;
 	u8 *dst;
-	bool ret;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 
@@ -1782,19 +1781,16 @@ bool zswap_load(struct folio *folio)
 		dst = kmap_local_page(page);
 		zswap_fill_page(dst, entry->value);
 		kunmap_local(dst);
-		ret = true;
-		goto stats;
+	} else {
+		__zswap_load(entry, page);
 	}
 
-	__zswap_load(entry, page);
-	ret = true;
-stats:
 	count_vm_event(ZSWPIN);
 	if (entry->objcg)
 		count_objcg_event(entry->objcg, ZSWPIN);
 
 	spin_lock(&tree->lock);
-	if (ret && zswap_exclusive_loads_enabled) {
+	if (zswap_exclusive_loads_enabled) {
 		zswap_invalidate_entry(tree, entry);
 		folio_mark_dirty(folio);
 	} else if (entry->length) {
@@ -1804,7 +1800,7 @@ bool zswap_load(struct folio *folio)
 	zswap_entry_put(tree, entry);
 	spin_unlock(&tree->lock);
 
-	return ret;
+	return true;
 }
 
 void zswap_invalidate(int type, pgoff_t offset)
-- 
b4 0.10.1
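The cleanup above is a control-flow simplification: an if/else replaces the "ret = true; goto stats;" pattern, and the function returns true unconditionally. The user-space sketch below mirrors that shape only; the struct entry layout, the load helper and the stat counter are illustrative assumptions, not the kernel function.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

struct entry {
	size_t len;		/* 0 means "same-filled", like entry->length == 0 */
	unsigned char value;	/* fill byte for the same-filled case */
	const unsigned char *data;
};

static void decompress_to_page(const struct entry *e, unsigned char *page)
{
	memset(page, 0, PAGE_SIZE);
	memcpy(page, e->data, e->len);	/* stand-in for __zswap_load() */
}

/*
 * Shape of the cleaned-up load path: one if/else instead of a
 * "ret = true; goto stats;" dance, common accounting afterwards, and an
 * unconditional "return true" at the end.
 */
static bool load(const struct entry *e, unsigned char *page, long *zswpin)
{
	if (e->len == 0)
		memset(page, e->value, PAGE_SIZE);	/* same-filled page */
	else
		decompress_to_page(e, page);

	(*zswpin)++;	/* stand-in for count_vm_event(ZSWPIN) */
	return true;
}

int main(void)
{
	struct entry same = { 0, 0xAA, NULL };
	struct entry real = { 6, 0, (const unsigned char *)"hello" };
	unsigned char page[PAGE_SIZE];
	long zswpin = 0;

	load(&same, page, &zswpin);
	load(&real, page, &zswpin);
	printf("loads accounted: %ld, page starts with \"%s\"\n",
	       zswpin, (char *)page);
	return 0;
}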
From nobody Thu Dec 18 14:35:19 2025
From: Chengming Zhou
Date: Wed, 13 Dec 2023 04:18:02 +0000
Subject: [PATCH 5/5] mm/zswap: cleanup zswap_reclaim_entry()
Message-Id: <20231213-zswap-dstmem-v1-5-896763369d04@bytedance.com>
References: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
In-Reply-To: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
To: Andrew Morton, Nhat Pham, Chris Li, Johannes Weiner, Seth Jennings,
    Dan Streetman, Vitaly Wool, Yosry Ahmed
Cc: Nhat Pham, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chengming Zhou

Also, after the common decompress part goes to __zswap_load(), we can
clean up zswap_reclaim_entry() a little.

Signed-off-by: Chengming Zhou
Reviewed-by: Chengming Zhou
Reviewed-by: Nhat Pham
Reviewed-by: Yosry Ahmed
---
 mm/zswap.c | 23 +++++------------------
 1 file changed, 5 insertions(+), 18 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 0476e1c553c2..9c709368a0e6 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1449,7 +1449,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	struct page *page;
 	struct mempolicy *mpol;
 	bool page_was_allocated;
-	int ret;
 	struct writeback_control wbc = {
 		.sync_mode = WB_SYNC_NONE,
 	};
@@ -1458,16 +1457,13 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	mpol = get_task_policy(current);
 	page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
 				NO_INTERLEAVE_INDEX, &page_was_allocated, true);
-	if (!page) {
-		ret = -ENOMEM;
-		goto fail;
-	}
+	if (!page)
+		return -ENOMEM;
 
 	/* Found an existing page, we raced with load/swapin */
 	if (!page_was_allocated) {
 		put_page(page);
-		ret = -EEXIST;
-		goto fail;
+		return -EEXIST;
 	}
 
 	/*
@@ -1481,8 +1477,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	if (zswap_rb_search(&tree->rbroot, swp_offset(entry->swpentry)) != entry) {
 		spin_unlock(&tree->lock);
 		delete_from_swap_cache(page_folio(page));
-		ret = -ENOMEM;
-		goto fail;
+		return -ENOMEM;
 	}
 	spin_unlock(&tree->lock);
 
@@ -1503,15 +1498,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	__swap_writepage(page, &wbc);
 	put_page(page);
 
-	return ret;
-
-fail:
-	/*
-	 * If we get here because the page is already in swapcache, a
-	 * load may be happening concurrently. It is safe and okay to
-	 * not free the entry. It is also okay to return !0.
-	 */
-	return ret;
+	return 0;
 }
 
 static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
-- 
b4 0.10.1
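The pattern in this last patch is the classic "drop the fail label once it has no cleanup left": each error site returns its own code directly and the success path ends with return 0. A minimal, self-contained C sketch of that shape follows; the writeback function and its three boolean conditions are hypothetical, chosen only to mirror the three early-return sites in the diff above.

#include <errno.h>
#include <stdio.h>

/*
 * Shape of the cleaned-up write-back path: once the fail: label has no
 * cleanup work to do, each error site simply returns its own code.
 */
static int writeback(int have_page, int raced, int still_in_tree)
{
	if (!have_page)
		return -ENOMEM;	/* could not allocate the swapcache page */

	if (raced)
		return -EEXIST;	/* someone else already brought the page in */

	if (!still_in_tree)
		return -ENOMEM;	/* entry was invalidated under us */

	/* ... do the actual write-back work here ... */
	return 0;
}

int main(void)
{
	printf("%d %d %d %d\n",
	       writeback(0, 0, 1), writeback(1, 1, 1),
	       writeback(1, 0, 0), writeback(1, 0, 1));
	return 0;
}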