From: Chengming Zhou <zhouchengming@bytedance.com>
Date: Wed, 13 Dec 2023 04:18:01 +0000
Subject: [PATCH 4/5] mm/zswap: cleanup zswap_load()
Message-Id: <20231213-zswap-dstmem-v1-4-896763369d04@bytedance.com>
References: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
In-Reply-To: <20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com>
To: Andrew Morton, Nhat Pham, Chris Li, Johannes Weiner, Seth Jennings, Dan Streetman, Vitaly Wool, Yosry Ahmed
Cc: Nhat Pham, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Chengming Zhou

Now that the common decompression code has moved into __zswap_load(), zswap_load() can be cleaned up a little: the local ret variable and the stats: label are no longer needed, since the function always returns true once an entry has been found.

Signed-off-by: Chengming Zhou
Reviewed-by: Yosry Ahmed
---
 mm/zswap.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 2f095c919a5c..0476e1c553c2 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1765,7 +1765,6 @@ bool zswap_load(struct folio *folio)
 	struct zswap_tree *tree = zswap_trees[type];
 	struct zswap_entry *entry;
 	u8 *dst;
-	bool ret;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 
@@ -1782,19 +1781,16 @@ bool zswap_load(struct folio *folio)
 		dst = kmap_local_page(page);
 		zswap_fill_page(dst, entry->value);
 		kunmap_local(dst);
-		ret = true;
-		goto stats;
+	} else {
+		__zswap_load(entry, page);
 	}
 
-	__zswap_load(entry, page);
-	ret = true;
-stats:
 	count_vm_event(ZSWPIN);
 	if (entry->objcg)
 		count_objcg_event(entry->objcg, ZSWPIN);
 
 	spin_lock(&tree->lock);
-	if (ret && zswap_exclusive_loads_enabled) {
+	if (zswap_exclusive_loads_enabled) {
 		zswap_invalidate_entry(tree, entry);
 		folio_mark_dirty(folio);
 	} else if (entry->length) {
@@ -1804,7 +1800,7 @@ bool zswap_load(struct folio *folio)
 	zswap_entry_put(tree, entry);
 	spin_unlock(&tree->lock);
 
-	return ret;
+	return true;
 }
 
 void zswap_invalidate(int type, pgoff_t offset)
-- 
b4 0.10.1
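
For readers following the series, the sketch below shows roughly how zswap_load() reads with this patch applied. It is stitched together from the hunks above rather than copied from the tree: the same-filled check (if (!entry->length)), the declarations of type and page, and the rb-tree lookup of entry sit outside the hunks, so they appear here as assumptions and elisions, not as the exact upstream code.

	bool zswap_load(struct folio *folio)
	{
		/* type, page and the entry lookup are set up earlier; elided here */
		struct zswap_tree *tree = zswap_trees[type];
		struct zswap_entry *entry;
		u8 *dst;

		VM_WARN_ON_ONCE(!folio_test_locked(folio));

		/* ... rb-tree lookup of entry elided (unchanged by this patch) ... */

		if (!entry->length) {		/* assumed: length == 0 marks a same-filled page */
			dst = kmap_local_page(page);
			zswap_fill_page(dst, entry->value);
			kunmap_local(dst);
		} else {
			__zswap_load(entry, page);	/* common decompression helper */
		}

		count_vm_event(ZSWPIN);
		if (entry->objcg)
			count_objcg_event(entry->objcg, ZSWPIN);

		spin_lock(&tree->lock);
		if (zswap_exclusive_loads_enabled) {
			zswap_invalidate_entry(tree, entry);
			folio_mark_dirty(folio);
		} else if (entry->length) {
			/* ... LRU rotation elided (unchanged by this patch) ... */
		}
		zswap_entry_put(tree, entry);
		spin_unlock(&tree->lock);

		return true;
	}

Both the same-filled and the compressed path now fall through to the same accounting and tree-maintenance code, which is what makes the ret variable and the stats: label redundant.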