From nobody Sat Feb 7 22:55:14 2026
From: alexs@kernel.org
To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org,
    senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
    Yosry Ahmed, nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v6 01/21] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
Date: Tue, 13 Aug 2024 16:45:47 +0800
Message-ID: <20240813084611.4122571-2-alexs@kernel.org>
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
References: <20240813084611.4122571-1-alexs@kernel.org>

From: Alex Shi

The first patch introduces the new memory descriptor zpdesc and renames
zspage.first_page to zspage.first_zpdesc; there is no functional change.

We removed PG_owner_priv_1 since it was moved to zspage after commit
a41ec880aa7b ("zsmalloc: move huge compressed obj from page to zspage").

We keep the memcg_data member, since, as Yosry pointed out:

"When the pages are freed, put_page() -> folio_put() -> __folio_put() will
call mem_cgroup_uncharge(). The latter will call folio_memcg() (which reads
folio->memcg_data) to figure out if uncharging needs to be done.

There are also other similar code paths that will check folio->memcg_data.
It is currently expected to be present for all folios. So until we have
custom code paths per-folio type for allocation/freeing/etc, we need to
keep folio->memcg_data present and properly initialized."

Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi
---
 mm/zpdesc.h   | 72 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/zsmalloc.c | 25 +++++++++---------
 2 files changed, 84 insertions(+), 13 deletions(-)
 create mode 100644 mm/zpdesc.h

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
new file mode 100644
index 000000000000..721ef8861131
--- /dev/null
+++ b/mm/zpdesc.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* zpdesc.h: zswap.zpool memory descriptor
+ *
+ * Written by Alex Shi
+ *	Hyeonggon Yoo <42.hyeyoo@gmail.com>
+ */
+#ifndef __MM_ZPDESC_H__
+#define __MM_ZPDESC_H__
+
+/*
+ * struct zpdesc -	Memory descriptor for zpool memory, currently used by zsmalloc
+ * @flags:		Page flags, PG_private: identifies the first component page
+ * @lru:		Indirectly used by page migration
+ * @mops:		Used by page migration
+ * @next:		Next zpdesc in a zspage in zsmalloc zpool
+ * @handle:		For huge zspage in zsmalloc zpool
+ * @zspage:		Points to the zspage this zpdesc is a part of
+ * @first_obj_offset:	First object offset in zsmalloc zpool
+ * @_refcount:		Indirectly used by page migration
+ * @memcg_data:		Memory Control Group data.
+ *
+ * This struct overlays struct page for now. Do not modify without a good
+ * understanding of the issues.
+ */
+struct zpdesc {
+	unsigned long flags;
+	struct list_head lru;
+	struct movable_operations *mops;
+	union {
+		/* Next zpdescs in a zspage in zsmalloc zpool */
+		struct zpdesc *next;
+		/* For huge zspage in zsmalloc zpool */
+		unsigned long handle;
+	};
+	struct zspage *zspage;
+	unsigned int first_obj_offset;
+	atomic_t _refcount;
+#ifdef CONFIG_MEMCG
+	unsigned long memcg_data;
+#endif
+};
+#define ZPDESC_MATCH(pg, zp) \
+	static_assert(offsetof(struct page, pg) == offsetof(struct zpdesc, zp))
+
+ZPDESC_MATCH(flags, flags);
+ZPDESC_MATCH(lru, lru);
+ZPDESC_MATCH(mapping, mops);
+ZPDESC_MATCH(index, next);
+ZPDESC_MATCH(index, handle);
+ZPDESC_MATCH(private, zspage);
+ZPDESC_MATCH(page_type, first_obj_offset);
+ZPDESC_MATCH(_refcount, _refcount);
+#ifdef CONFIG_MEMCG
+ZPDESC_MATCH(memcg_data, memcg_data);
+#endif
+#undef ZPDESC_MATCH
+static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
+
+#define zpdesc_page(zp)		(_Generic((zp),			\
+	const struct zpdesc *:	(const struct page *)(zp),	\
+	struct zpdesc *:	(struct page *)(zp)))
+
+/* Using folio conversion to skip compound_head checking */
+#define zpdesc_folio(zp)	(_Generic((zp),			\
+	const struct zpdesc *:	(const struct folio *)(zp),	\
+	struct zpdesc *:	(struct folio *)(zp)))
+
+#define page_zpdesc(p)		(_Generic((p),			\
+	const struct page *:	(const struct zpdesc *)(p),	\
+	struct page *:		(struct zpdesc *)(p)))
+
+#endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5d6581ab7c07..30f0a7abbda3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -13,20 +13,18 @@
 
 /*
  * Following is how we use various fields and flags of underlying
- * struct page(s) to form a zspage.
+ * struct zpdesc(page) to form a zspage.
  *
- * Usage of struct page fields:
- *	page->private: points to zspage
- *	page->index: links together all component pages of a zspage
+ * Usage of struct zpdesc fields:
+ *	zpdesc->zspage: points to zspage
+ *	zpdesc->next: links together all component pages of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
- * page->page_type: PG_zsmalloc, lower 16 bit locate the first object - * offset in a subpage of a zspage + * zpdesc->first_obj_offset: PG_zsmalloc, lower 16 bit locate the first + * object offset in a subpage of a zspage * - * Usage of struct page flags: + * Usage of struct zpdesc(page) flags: * PG_private: identifies the first component page - * PG_owner_priv_1: identifies the huge component page - * */ =20 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt @@ -64,6 +62,7 @@ #include #include #include +#include "zpdesc.h" =20 #define ZSPAGE_MAGIC 0x58 =20 @@ -253,7 +252,7 @@ struct zspage { }; unsigned int inuse; unsigned int freeobj; - struct page *first_page; + struct zpdesc *first_zpdesc; struct list_head list; /* fullness list */ struct zs_pool *pool; rwlock_t lock; @@ -448,7 +447,7 @@ static inline void mod_zspage_inuse(struct zspage *zspa= ge, int val) =20 static inline struct page *get_first_page(struct zspage *zspage) { - struct page *first_page =3D zspage->first_page; + struct page *first_page =3D zpdesc_page(zspage->first_zpdesc); =20 VM_BUG_ON_PAGE(!is_first_page(first_page), first_page); return first_page; @@ -948,7 +947,7 @@ static void create_page_chain(struct size_class *class,= struct zspage *zspage, set_page_private(page, (unsigned long)zspage); page->index =3D 0; if (i =3D=3D 0) { - zspage->first_page =3D page; + zspage->first_zpdesc =3D page_zpdesc(page); SetPagePrivate(page); if (unlikely(class->objs_per_zspage =3D=3D 1 && class->pages_per_zspage =3D=3D 1)) @@ -1324,7 +1323,7 @@ static unsigned long obj_malloc(struct zs_pool *pool, link->handle =3D handle | OBJ_ALLOCATED_TAG; else /* record handle to page->index */ - zspage->first_page->index =3D handle | OBJ_ALLOCATED_TAG; + zspage->first_zpdesc->handle =3D handle | OBJ_ALLOCATED_TAG; =20 kunmap_atomic(vaddr); mod_zspage_inuse(zspage, 1); --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A285316C854 for ; Tue, 13 Aug 2024 08:41:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538474; cv=none; b=utpZRPb1bBs6HrLI+mVmabCSg0Me45U1Q8yy9dORz2NDdmjxOcxC7DD4/OvF0Ga+8bPG8pUEXYoH8mWe+VJlVmc1FPw5CymW53kgkoramJjBWyGEK5er2i9n8ufnJIT9geKZ9b8yyquwRElOb5NmWx2dGuKg2WfmP50DZ/GvFP0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538474; c=relaxed/simple; bh=PQtx1dFIRykqdoAci1ap2Dlk7DGGX895wpZmxxTa/no=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=NKZHfAJKRxVc0JLWL2XeEcHH5kQJ08AVhPXiY9PpnClPGCTav/maArmeGlt9ZffR6e/ddxulspeO2GMkYXn9VTzA2Bbg4ch+gpg1mDG5/WuZd72bLhVdCbSYQiJI0c4NhbqOGl95qXidOszRCMUKelPY8fM1lYEWVflNxh2t820= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=JjhJrf/w; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="JjhJrf/w" Received: by smtp.kernel.org (Postfix) with ESMTPSA id D2888C4AF10; Tue, 13 Aug 2024 08:41:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538474; bh=PQtx1dFIRykqdoAci1ap2Dlk7DGGX895wpZmxxTa/no=; 
From: alexs@kernel.org
To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org,
    senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
    Yosry Ahmed, nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v6 02/21] mm/zsmalloc: use zpdesc in trylock_zspage()/lock_zspage()
Date: Tue, 13 Aug 2024 16:45:48 +0800
Message-ID: <20240813084611.4122571-3-alexs@kernel.org>
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
References: <20240813084611.4122571-1-alexs@kernel.org>

From: Alex Shi

To use zpdesc in trylock_zspage()/lock_zspage(), add a couple of helpers:
zpdesc_lock()/zpdesc_unlock()/zpdesc_trylock()/zpdesc_wait_locked() and
zpdesc_get()/zpdesc_put().

These helpers are built on the folio functions for two reasons: first,
zswap.zpool only deals with single pages, and going through folios saves
some compound_head() checking; second, folio_put() bypasses the devmap
checking that we don't need.

BTW, thanks to Intel LKP for reporting a build warning on this patch.

Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi
---
 mm/zpdesc.h   | 30 ++++++++++++++++++++++++
 mm/zsmalloc.c | 64 ++++++++++++++++++++++++++++++++++-----------------
 2 files changed, 73 insertions(+), 21 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 721ef8861131..782b5ad67cda 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -69,4 +69,34 @@ static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
 	const struct page *:	(const struct zpdesc *)(p),	\
 	struct page *:		(struct zpdesc *)(p)))
 
+static inline void zpdesc_lock(struct zpdesc *zpdesc)
+{
+	folio_lock(zpdesc_folio(zpdesc));
+}
+
+static inline bool zpdesc_trylock(struct zpdesc *zpdesc)
+{
+	return folio_trylock(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_unlock(struct zpdesc *zpdesc)
+{
+	folio_unlock(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_wait_locked(struct zpdesc *zpdesc)
+{
+	folio_wait_locked(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_get(struct zpdesc *zpdesc)
+{
+	folio_get(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_put(struct zpdesc *zpdesc)
+{
+	folio_put(zpdesc_folio(zpdesc));
+}
+
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 30f0a7abbda3..25c90224f21f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -433,13 +433,17 @@ static __maybe_unused int is_first_page(struct page *page)
 	return PagePrivate(page);
 }
 
+static inline bool is_first_zpdesc(struct zpdesc *zpdesc)
+{
+	return PagePrivate(zpdesc_page(zpdesc));
+}
+
 /* Protected by class->lock */
 static inline int get_zspage_inuse(struct zspage *zspage)
 {
 	return zspage->inuse;
 }
 
-
 static inline void mod_zspage_inuse(struct zspage *zspage, int val)
 {
 	zspage->inuse += val;
@@ -453,6 +457,14 @@ static inline struct page *get_first_page(struct zspage *zspage)
 	return first_page;
 }
 
+static struct
zpdesc *get_first_zpdesc(struct zspage *zspage) +{ + struct zpdesc *first_zpdesc =3D zspage->first_zpdesc; + + VM_BUG_ON_PAGE(!is_first_zpdesc(first_zpdesc), zpdesc_page(first_zpdesc)); + return first_zpdesc; +} + #define FIRST_OBJ_PAGE_TYPE_MASK 0xffff =20 static inline void reset_first_obj_offset(struct page *page) @@ -745,6 +757,16 @@ static struct page *get_next_page(struct page *page) return (struct page *)page->index; } =20 +static struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc) +{ + struct zspage *zspage =3D get_zspage(zpdesc_page(zpdesc)); + + if (unlikely(ZsHugePage(zspage))) + return NULL; + + return zpdesc->next; +} + /** * obj_to_location - get (, ) from encoded object value * @obj: the encoded object value @@ -815,11 +837,11 @@ static void reset_page(struct page *page) =20 static int trylock_zspage(struct zspage *zspage) { - struct page *cursor, *fail; + struct zpdesc *cursor, *fail; =20 - for (cursor =3D get_first_page(zspage); cursor !=3D NULL; cursor =3D - get_next_page(cursor)) { - if (!trylock_page(cursor)) { + for (cursor =3D get_first_zpdesc(zspage); cursor !=3D NULL; cursor =3D + get_next_zpdesc(cursor)) { + if (!zpdesc_trylock(cursor)) { fail =3D cursor; goto unlock; } @@ -827,9 +849,9 @@ static int trylock_zspage(struct zspage *zspage) =20 return 1; unlock: - for (cursor =3D get_first_page(zspage); cursor !=3D fail; cursor =3D - get_next_page(cursor)) - unlock_page(cursor); + for (cursor =3D get_first_zpdesc(zspage); cursor !=3D fail; cursor =3D + get_next_zpdesc(cursor)) + zpdesc_unlock(cursor); =20 return 0; } @@ -1658,7 +1680,7 @@ static int putback_zspage(struct size_class *class, s= truct zspage *zspage) */ static void lock_zspage(struct zspage *zspage) { - struct page *curr_page, *page; + struct zpdesc *curr_zpdesc, *zpdesc; =20 /* * Pages we haven't locked yet can be migrated off the list while we're @@ -1670,24 +1692,24 @@ static void lock_zspage(struct zspage *zspage) */ while (1) { migrate_read_lock(zspage); - page =3D get_first_page(zspage); - if (trylock_page(page)) + zpdesc =3D get_first_zpdesc(zspage); + if (zpdesc_trylock(zpdesc)) break; - get_page(page); + zpdesc_get(zpdesc); migrate_read_unlock(zspage); - wait_on_page_locked(page); - put_page(page); + zpdesc_wait_locked(zpdesc); + zpdesc_put(zpdesc); } =20 - curr_page =3D page; - while ((page =3D get_next_page(curr_page))) { - if (trylock_page(page)) { - curr_page =3D page; + curr_zpdesc =3D zpdesc; + while ((zpdesc =3D get_next_zpdesc(curr_zpdesc))) { + if (zpdesc_trylock(zpdesc)) { + curr_zpdesc =3D zpdesc; } else { - get_page(page); + zpdesc_get(zpdesc); migrate_read_unlock(zspage); - wait_on_page_locked(page); - put_page(page); + zpdesc_wait_locked(zpdesc); + zpdesc_put(zpdesc); migrate_read_lock(zspage); } } --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 717E717B427 for ; Tue, 13 Aug 2024 08:41:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538478; cv=none; b=njEkgzRgWFN1UpNbLk9maTL7378AlKz9X61y4Y27/U/ixiIO7W/xWTHhtr2J8lpMo4LQn54HZrWkoN/9ju5WYrKKWMljvsuWFM/TZHktB+pGm9w6NUG3jPuk69BvkP/5HK7hf4r4wedZH1UHft2uqzpRtnhkoekMIo97K/oyS6M= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; 
From: alexs@kernel.org
To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org,
    senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
    Yosry Ahmed, nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v6 03/21] mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc
Date: Tue, 13 Aug 2024 16:45:49 +0800
Message-ID: <20240813084611.4122571-4-alexs@kernel.org>
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
References: <20240813084611.4122571-1-alexs@kernel.org>

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

These two functions take a pointer to an array of struct page. Introduce
zpdesc_kmap_atomic() and make __zs_{map,unmap}_object() take a pointer to
an array of zpdesc instead of page. Add the (temporary) type casts at the
call sites; the casts will be removed later.
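As background, __zs_map_object() copies an object that straddles a page
boundary into a contiguous per-CPU buffer in two pieces. The following is a
minimal userspace sketch of that split-copy logic, assuming 4 KiB pages; it
uses static buffers and memcpy() in place of zpdesc_kmap_atomic(), and the
names are illustrative rather than the kernel API.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Copy an object that starts at @off in pages[0] and spills into
 * pages[1] into @buf back-to-back, mirroring the sizes[0]/sizes[1]
 * split done by __zs_map_object(). */
static void map_object(char *buf, char *pages[2], int off, int size)
{
	int sizes[2];

	sizes[0] = PAGE_SIZE - off;	/* part that fits in the first page */
	sizes[1] = size - sizes[0];	/* remainder in the second page */

	memcpy(buf, pages[0] + off, sizes[0]);
	memcpy(buf + sizes[0], pages[1], sizes[1]);
}

int main(void)
{
	static char page0[PAGE_SIZE], page1[PAGE_SIZE], buf[128];
	char *pages[2] = { page0, page1 };
	int off = PAGE_SIZE - 40, size = 100;

	memset(page0 + off, 'A', 40);	/* first 40 bytes of the object */
	memset(page1, 'B', 60);		/* remaining 60 bytes */
	map_object(buf, pages, off, size);
	printf("%.3s...%.3s\n", buf, buf + size - 3);	/* AAA...BBB */
	return 0;
}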
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 25c90224f21f..b9b5e2824f2c 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -243,6 +243,11 @@ struct zs_pool { atomic_t compaction_in_progress; }; =20 +static inline void *zpdesc_kmap_atomic(struct zpdesc *zpdesc) +{ + return kmap_atomic(zpdesc_page(zpdesc)); +} + struct zspage { struct { unsigned int huge:HUGE_BITS; @@ -1061,7 +1066,7 @@ static inline void __zs_cpu_down(struct mapping_area = *area) } =20 static void *__zs_map_object(struct mapping_area *area, - struct page *pages[2], int off, int size) + struct zpdesc *zpdescs[2], int off, int size) { int sizes[2]; void *addr; @@ -1078,10 +1083,10 @@ static void *__zs_map_object(struct mapping_area *a= rea, sizes[1] =3D size - sizes[0]; =20 /* copy object to per-cpu buffer */ - addr =3D kmap_atomic(pages[0]); + addr =3D zpdesc_kmap_atomic(zpdescs[0]); memcpy(buf, addr + off, sizes[0]); kunmap_atomic(addr); - addr =3D kmap_atomic(pages[1]); + addr =3D zpdesc_kmap_atomic(zpdescs[1]); memcpy(buf + sizes[0], addr, sizes[1]); kunmap_atomic(addr); out: @@ -1089,7 +1094,7 @@ static void *__zs_map_object(struct mapping_area *are= a, } =20 static void __zs_unmap_object(struct mapping_area *area, - struct page *pages[2], int off, int size) + struct zpdesc *zpdescs[2], int off, int size) { int sizes[2]; void *addr; @@ -1108,10 +1113,10 @@ static void __zs_unmap_object(struct mapping_area *= area, sizes[1] =3D size - sizes[0]; =20 /* copy per-cpu buffer to object */ - addr =3D kmap_atomic(pages[0]); + addr =3D zpdesc_kmap_atomic(zpdescs[0]); memcpy(addr + off, buf, sizes[0]); kunmap_atomic(addr); - addr =3D kmap_atomic(pages[1]); + addr =3D zpdesc_kmap_atomic(zpdescs[1]); memcpy(addr, buf + sizes[0], sizes[1]); kunmap_atomic(addr); =20 @@ -1252,7 +1257,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned lo= ng handle, pages[1] =3D get_next_page(page); BUG_ON(!pages[1]); =20 - ret =3D __zs_map_object(area, pages, off, class->size); + ret =3D __zs_map_object(area, (struct zpdesc **)pages, off, class->size); out: if (likely(!ZsHugePage(zspage))) ret +=3D ZS_HANDLE_SIZE; @@ -1287,7 +1292,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned l= ong handle) pages[1] =3D get_next_page(page); BUG_ON(!pages[1]); =20 - __zs_unmap_object(area, pages, off, class->size); + __zs_unmap_object(area, (struct zpdesc **)pages, off, class->size); } local_unlock(&zs_map_area.lock); =20 --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3329B175D26 for ; Tue, 13 Aug 2024 08:41:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538482; cv=none; b=blfpEx1+s797pfaOferdJn5xKsy7Aftz1CYP186xkrqSONtNqLV0BuA+CRn1P+dG4FlWkvglUNeyQyl0zsKAhnRd8RDtFJSiFItIajoGS+tTGZX7eAp4mvNWDr1cFNWrHKDbMhoH6Ln36gZSzWU21t+HsUu16aZo7btisihRNRo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538482; c=relaxed/simple; bh=3JFITvoM2vdTLh0S1hX/hhJAVedfseKmbRV8MWLijFE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
From: alexs@kernel.org
To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org,
    senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
    Yosry Ahmed, nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v6 04/21] mm/zsmalloc: add and use pfn/zpdesc seeking funcs
Date: Tue, 13 Aug 2024 16:45:50 +0800
Message-ID: <20240813084611.4122571-5-alexs@kernel.org>
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
References: <20240813084611.4122571-1-alexs@kernel.org>

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Add a pfn_zpdesc() conversion helper, convert obj_to_location() to take
zpdesc, and convert its users to use zpdesc as well.
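The encoded object value that obj_to_location() unpacks is a page frame
number in the high bits and an object index in the low OBJ_INDEX_BITS bits.
Below is a userspace sketch of that decode; the bit width and the
location_to_obj() encoding helper are assumptions added for illustration
(the hunk here only shows the decode side), not the kernel's exact
constants.

#include <assert.h>
#include <stdio.h>

#define OBJ_INDEX_BITS	12UL			/* illustrative width */
#define OBJ_INDEX_MASK	((1UL << OBJ_INDEX_BITS) - 1)

/* Assumed inverse of the decode below, for the sake of the example. */
static unsigned long location_to_obj(unsigned long pfn, unsigned int obj_idx)
{
	return (pfn << OBJ_INDEX_BITS) | (obj_idx & OBJ_INDEX_MASK);
}

/* Mirrors obj_to_location(): split the value into a page frame number
 * (which page/zpdesc) and an object index (which slot inside it). */
static void obj_to_location(unsigned long obj, unsigned long *pfn,
			    unsigned int *obj_idx)
{
	*pfn = obj >> OBJ_INDEX_BITS;
	*obj_idx = obj & OBJ_INDEX_MASK;
}

int main(void)
{
	unsigned long pfn;
	unsigned int idx;

	obj_to_location(location_to_obj(0x12345, 7), &pfn, &idx);
	assert(pfn == 0x12345 && idx == 7);
	printf("pfn=%#lx idx=%u\n", pfn, idx);
	return 0;
}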
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zpdesc.h | 9 +++++++ mm/zsmalloc.c | 75 ++++++++++++++++++++++++++------------------------- 2 files changed, 47 insertions(+), 37 deletions(-) diff --git a/mm/zpdesc.h b/mm/zpdesc.h index 782b5ad67cda..11083a1c2464 100644 --- a/mm/zpdesc.h +++ b/mm/zpdesc.h @@ -99,4 +99,13 @@ static inline void zpdesc_put(struct zpdesc *zpdesc) folio_put(zpdesc_folio(zpdesc)); } =20 +static inline unsigned long zpdesc_pfn(struct zpdesc *zpdesc) +{ + return page_to_pfn(zpdesc_page(zpdesc)); +} + +static inline struct zpdesc *pfn_zpdesc(unsigned long pfn) +{ + return page_zpdesc(pfn_to_page(pfn)); +} #endif diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index b9b5e2824f2c..384a5ba49788 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -773,15 +773,15 @@ static struct zpdesc *get_next_zpdesc(struct zpdesc *= zpdesc) } =20 /** - * obj_to_location - get (, ) from encoded object value + * obj_to_location - get (, ) from encoded object value * @obj: the encoded object value - * @page: page object resides in zspage + * @zpdesc: zpdesc object resides in zspage * @obj_idx: object index */ -static void obj_to_location(unsigned long obj, struct page **page, +static void obj_to_location(unsigned long obj, struct zpdesc **zpdesc, unsigned int *obj_idx) { - *page =3D pfn_to_page(obj >> OBJ_INDEX_BITS); + *zpdesc =3D pfn_zpdesc(obj >> OBJ_INDEX_BITS); *obj_idx =3D (obj & OBJ_INDEX_MASK); } =20 @@ -1208,13 +1208,13 @@ void *zs_map_object(struct zs_pool *pool, unsigned = long handle, enum zs_mapmode mm) { struct zspage *zspage; - struct page *page; + struct zpdesc *zpdesc; unsigned long obj, off; unsigned int obj_idx; =20 struct size_class *class; struct mapping_area *area; - struct page *pages[2]; + struct zpdesc *zpdescs[2]; void *ret; =20 /* @@ -1227,8 +1227,8 @@ void *zs_map_object(struct zs_pool *pool, unsigned lo= ng handle, /* It guarantees it can get zspage from handle safely */ read_lock(&pool->migrate_lock); obj =3D handle_to_obj(handle); - obj_to_location(obj, &page, &obj_idx); - zspage =3D get_zspage(page); + obj_to_location(obj, &zpdesc, &obj_idx); + zspage =3D get_zspage(zpdesc_page(zpdesc)); =20 /* * migration cannot move any zpages in this zspage. 
Here, class->lock @@ -1247,17 +1247,17 @@ void *zs_map_object(struct zs_pool *pool, unsigned = long handle, area->vm_mm =3D mm; if (off + class->size <=3D PAGE_SIZE) { /* this object is contained entirely within a page */ - area->vm_addr =3D kmap_atomic(page); + area->vm_addr =3D zpdesc_kmap_atomic(zpdesc); ret =3D area->vm_addr + off; goto out; } =20 /* this object spans two pages */ - pages[0] =3D page; - pages[1] =3D get_next_page(page); - BUG_ON(!pages[1]); + zpdescs[0] =3D zpdesc; + zpdescs[1] =3D get_next_zpdesc(zpdesc); + BUG_ON(!zpdescs[1]); =20 - ret =3D __zs_map_object(area, (struct zpdesc **)pages, off, class->size); + ret =3D __zs_map_object(area, zpdescs, off, class->size); out: if (likely(!ZsHugePage(zspage))) ret +=3D ZS_HANDLE_SIZE; @@ -1269,7 +1269,7 @@ EXPORT_SYMBOL_GPL(zs_map_object); void zs_unmap_object(struct zs_pool *pool, unsigned long handle) { struct zspage *zspage; - struct page *page; + struct zpdesc *zpdesc; unsigned long obj, off; unsigned int obj_idx; =20 @@ -1277,8 +1277,8 @@ void zs_unmap_object(struct zs_pool *pool, unsigned l= ong handle) struct mapping_area *area; =20 obj =3D handle_to_obj(handle); - obj_to_location(obj, &page, &obj_idx); - zspage =3D get_zspage(page); + obj_to_location(obj, &zpdesc, &obj_idx); + zspage =3D get_zspage(zpdesc_page(zpdesc)); class =3D zspage_class(pool, zspage); off =3D offset_in_page(class->size * obj_idx); =20 @@ -1286,13 +1286,13 @@ void zs_unmap_object(struct zs_pool *pool, unsigned= long handle) if (off + class->size <=3D PAGE_SIZE) kunmap_atomic(area->vm_addr); else { - struct page *pages[2]; + struct zpdesc *zpdescs[2]; =20 - pages[0] =3D page; - pages[1] =3D get_next_page(page); - BUG_ON(!pages[1]); + zpdescs[0] =3D zpdesc; + zpdescs[1] =3D get_next_zpdesc(zpdesc); + BUG_ON(!zpdescs[1]); =20 - __zs_unmap_object(area, (struct zpdesc **)pages, off, class->size); + __zs_unmap_object(area, zpdescs, off, class->size); } local_unlock(&zs_map_area.lock); =20 @@ -1434,23 +1434,24 @@ static void obj_free(int class_size, unsigned long = obj) { struct link_free *link; struct zspage *zspage; - struct page *f_page; + struct zpdesc *f_zpdesc; unsigned long f_offset; unsigned int f_objidx; void *vaddr; =20 - obj_to_location(obj, &f_page, &f_objidx); + + obj_to_location(obj, &f_zpdesc, &f_objidx); f_offset =3D offset_in_page(class_size * f_objidx); - zspage =3D get_zspage(f_page); + zspage =3D get_zspage(zpdesc_page(f_zpdesc)); =20 - vaddr =3D kmap_atomic(f_page); + vaddr =3D zpdesc_kmap_atomic(f_zpdesc); link =3D (struct link_free *)(vaddr + f_offset); =20 /* Insert this object in containing zspage's freelist */ if (likely(!ZsHugePage(zspage))) link->next =3D get_freeobj(zspage) << OBJ_TAG_BITS; else - f_page->index =3D 0; + f_zpdesc->next =3D NULL; set_freeobj(zspage, f_objidx); =20 kunmap_atomic(vaddr); @@ -1495,7 +1496,7 @@ EXPORT_SYMBOL_GPL(zs_free); static void zs_object_copy(struct size_class *class, unsigned long dst, unsigned long src) { - struct page *s_page, *d_page; + struct zpdesc *s_zpdesc, *d_zpdesc; unsigned int s_objidx, d_objidx; unsigned long s_off, d_off; void *s_addr, *d_addr; @@ -1504,8 +1505,8 @@ static void zs_object_copy(struct size_class *class, = unsigned long dst, =20 s_size =3D d_size =3D class->size; =20 - obj_to_location(src, &s_page, &s_objidx); - obj_to_location(dst, &d_page, &d_objidx); + obj_to_location(src, &s_zpdesc, &s_objidx); + obj_to_location(dst, &d_zpdesc, &d_objidx); =20 s_off =3D offset_in_page(class->size * s_objidx); d_off =3D offset_in_page(class->size * d_objidx); @@ -1516,8 +1517,8 
@@ static void zs_object_copy(struct size_class *class, = unsigned long dst, if (d_off + class->size > PAGE_SIZE) d_size =3D PAGE_SIZE - d_off; =20 - s_addr =3D kmap_atomic(s_page); - d_addr =3D kmap_atomic(d_page); + s_addr =3D zpdesc_kmap_atomic(s_zpdesc); + d_addr =3D zpdesc_kmap_atomic(d_zpdesc); =20 while (1) { size =3D min(s_size, d_size); @@ -1542,17 +1543,17 @@ static void zs_object_copy(struct size_class *class= , unsigned long dst, if (s_off >=3D PAGE_SIZE) { kunmap_atomic(d_addr); kunmap_atomic(s_addr); - s_page =3D get_next_page(s_page); - s_addr =3D kmap_atomic(s_page); - d_addr =3D kmap_atomic(d_page); + s_zpdesc =3D get_next_zpdesc(s_zpdesc); + s_addr =3D zpdesc_kmap_atomic(s_zpdesc); + d_addr =3D zpdesc_kmap_atomic(d_zpdesc); s_size =3D class->size - written; s_off =3D 0; } =20 if (d_off >=3D PAGE_SIZE) { kunmap_atomic(d_addr); - d_page =3D get_next_page(d_page); - d_addr =3D kmap_atomic(d_page); + d_zpdesc =3D get_next_zpdesc(d_zpdesc); + d_addr =3D zpdesc_kmap_atomic(d_zpdesc); d_size =3D class->size - written; d_off =3D 0; } @@ -1791,7 +1792,7 @@ static int zs_page_migrate(struct page *newpage, stru= ct page *page, struct zs_pool *pool; struct size_class *class; struct zspage *zspage; - struct page *dummy; + struct zpdesc *dummy; void *s_addr, *d_addr, *addr; unsigned int offset; unsigned long handle; --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 23AC3186298 for ; Tue, 13 Aug 2024 08:41:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538486; cv=none; b=ugUzbMWpr9sENqTDc9I4TwQXxiccz93J3ClSxb5sTRS9sI0TlDxDay9sOqzCNClhjkVSCtjVI/1eMei7vRUSPhXlhnEUHzewrw9FBeBWzs/W8JBzqLzbCoNMUiYRluyUW0V5eQRBhoFNrMLni5omVaD5C4pv/nlRRN+N0BntZS0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538486; c=relaxed/simple; bh=j9RlWiCV98maWH46GH8LQqP9KXOBrIEnE2eMblguIA0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=t+tvnrYB/Ule2rCxkAcLjPZOUeTQ58gcnjMtCBmS3g89szMnyAjmxyP7cK1NLwj4U1zkTBMShPt8y0Ucmhz/18epMt9H+3HGAFOmjQRV+hP/tO/3ot/B+Qou2jP+jlTuPgAwzbp/orcS7b9m88bdmIYLnQFvjBJNx95V5ztsPws= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=tRCXz/JP; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="tRCXz/JP" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BCC2AC4AF09; Tue, 13 Aug 2024 08:41:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538486; bh=j9RlWiCV98maWH46GH8LQqP9KXOBrIEnE2eMblguIA0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tRCXz/JP3Ty7JtZpW73rKxJ1lhhYk9Ol1Cmy9UFewuVw/ovD7dnyJn3Fbc0FM9JyH mbyqFfGvvltrcRIkhc4Kqz8hdrpnbnVVrP2SWs7vG5U6rft8AnC4LlOKPQGi5ClCVH 3Mn+kEiwvynvpeD3qsCqqNS3lbmOLEQg0BQgJID9O7ZBmqUXu/9Ptl/7Ocu0ngA7bQ AxxymPJxLhXJdFG2aFdsZZLVsIwDpMKZsq9uEsIf4ZtIkaZnzsGRiYyyQzOnk/d8WL eoFkQ512SDtJmPOshzJRwq8BO4kzCBEv+6GJFUI/d80irjxb7cVWmkqi+WJHy3Gh2y VvTRBPTeO0Y5Q== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, 
linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 05/21] mm/zsmalloc: convert obj_malloc() to use zpdesc Date: Tue, 13 Aug 2024 16:45:51 +0800 Message-ID: <20240813084611.4122571-6-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Hyeonggon Yoo <42.hyeyoo@gmail.com> Use get_first_zpdesc/get_next_zpdesc to replace get_first_page/get_next_page. no functional change. Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 384a5ba49788..7421d7678880 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -1322,12 +1322,12 @@ EXPORT_SYMBOL_GPL(zs_huge_class_size); static unsigned long obj_malloc(struct zs_pool *pool, struct zspage *zspage, unsigned long handle) { - int i, nr_page, offset; + int i, nr_zpdesc, offset; unsigned long obj; struct link_free *link; struct size_class *class; =20 - struct page *m_page; + struct zpdesc *m_zpdesc; unsigned long m_offset; void *vaddr; =20 @@ -1335,14 +1335,14 @@ static unsigned long obj_malloc(struct zs_pool *poo= l, obj =3D get_freeobj(zspage); =20 offset =3D obj * class->size; - nr_page =3D offset >> PAGE_SHIFT; + nr_zpdesc =3D offset >> PAGE_SHIFT; m_offset =3D offset_in_page(offset); - m_page =3D get_first_page(zspage); + m_zpdesc =3D get_first_zpdesc(zspage); =20 - for (i =3D 0; i < nr_page; i++) - m_page =3D get_next_page(m_page); + for (i =3D 0; i < nr_zpdesc; i++) + m_zpdesc =3D get_next_zpdesc(m_zpdesc); =20 - vaddr =3D kmap_atomic(m_page); + vaddr =3D zpdesc_kmap_atomic(m_zpdesc); link =3D (struct link_free *)vaddr + m_offset / sizeof(*link); set_freeobj(zspage, link->next >> OBJ_TAG_BITS); if (likely(!ZsHugePage(zspage))) @@ -1355,7 +1355,7 @@ static unsigned long obj_malloc(struct zs_pool *pool, kunmap_atomic(vaddr); mod_zspage_inuse(zspage, 1); =20 - obj =3D location_to_obj(m_page, obj); + obj =3D location_to_obj(zpdesc_page(m_zpdesc), obj); record_obj(handle, obj); =20 return obj; --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 623F94D8DA for ; Tue, 13 Aug 2024 08:41:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538490; cv=none; b=M8XzoRWc9Qv2EWRcCXJplE51zKm+qBxdGlurYLBfO3QIagYIOf2l+6sCME4l+4FPkdXkeoXc3OSGutyYOXmUl4Q8q4HUrPdqJH58iqCPHkDy46FFCVMTdRSNNcWIbho2Asl2tgoFZ57L5BKOfvHX5itDFvbCfTzXUZHCDk4jYzY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538490; c=relaxed/simple; bh=It08fyjD3QWexpsDd5b4wIE8APHhPnH+kLBEpOpVbWw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
From: alexs@kernel.org
To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org,
    senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
    Yosry Ahmed, nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v6 06/21] mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
Date: Tue, 13 Aug 2024 16:45:52 +0800
Message-ID: <20240813084611.4122571-7-alexs@kernel.org>
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
References: <20240813084611.4122571-1-alexs@kernel.org>

From: Alex Shi

Introduce a few conversion helpers and use them to convert
create_page_chain() to zpdesc, then use zpdesc in replace_sub_page() too.
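create_page_chain() links the component zpdescs of a zspage into a singly
linked list: each zpdesc points back at its zspage, zpdescs are chained
through ->next, and only the first one carries the PG_private "first"
marker. The following is a self-contained userspace sketch of that chain
building, with the page flag reduced to a plain bool and all other kernel
details dropped.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct zspage;

struct zpdesc {
	bool first;		/* stands in for PG_private */
	struct zpdesc *next;
	struct zspage *zspage;
};

struct zspage {
	struct zpdesc *first_zpdesc;
};

/* Mirrors the loop in create_page_chain(): back-pointer, ->next link,
 * and the "first" marker only on zpdescs[0]. */
static void create_chain(struct zspage *zspage, struct zpdesc *zpdescs[], int n)
{
	struct zpdesc *prev = NULL;

	for (int i = 0; i < n; i++) {
		struct zpdesc *zpdesc = zpdescs[i];

		zpdesc->zspage = zspage;
		zpdesc->next = NULL;
		if (i == 0) {
			zspage->first_zpdesc = zpdesc;
			zpdesc->first = true;
		} else {
			prev->next = zpdesc;
		}
		prev = zpdesc;
	}
}

int main(void)
{
	struct zpdesc a = {0}, b = {0}, c = {0};
	struct zpdesc *descs[] = { &a, &b, &c };
	struct zspage zspage = {0};
	int len = 0;

	create_chain(&zspage, descs, 3);
	for (struct zpdesc *p = zspage.first_zpdesc; p; p = p->next)
		len++;
	printf("first marked=%d, chain length=%d\n",
	       zspage.first_zpdesc->first, len);
	return 0;
}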
Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zpdesc.h | 6 +++ mm/zsmalloc.c | 109 ++++++++++++++++++++++++++++++++------------------ 2 files changed, 76 insertions(+), 39 deletions(-) diff --git a/mm/zpdesc.h b/mm/zpdesc.h index 11083a1c2464..3a65a7d494b7 100644 --- a/mm/zpdesc.h +++ b/mm/zpdesc.h @@ -108,4 +108,10 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long = pfn) { return page_zpdesc(pfn_to_page(pfn)); } + +static inline void __zpdesc_set_movable(struct zpdesc *zpdesc, + const struct movable_operations *mops) +{ + __SetPageMovable(zpdesc_page(zpdesc), mops); +} #endif diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 7421d7678880..2d0f3d9011ac 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -248,6 +248,35 @@ static inline void *zpdesc_kmap_atomic(struct zpdesc *= zpdesc) return kmap_atomic(zpdesc_page(zpdesc)); } =20 +static inline void zpdesc_set_first(struct zpdesc *zpdesc) +{ + SetPagePrivate(zpdesc_page(zpdesc)); +} + +static inline void zpdesc_inc_zone_page_state(struct zpdesc *zpdesc) +{ + inc_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES); +} + +static inline void zpdesc_dec_zone_page_state(struct zpdesc *zpdesc) +{ + dec_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES); +} + +static inline struct zpdesc *alloc_zpdesc(gfp_t gfp) +{ + struct page *page =3D alloc_page(gfp); + + return page_zpdesc(page); +} + +static inline void free_zpdesc(struct zpdesc *zpdesc) +{ + struct page *page =3D zpdesc_page(zpdesc); + + __free_page(page); +} + struct zspage { struct { unsigned int huge:HUGE_BITS; @@ -954,35 +983,35 @@ static void init_zspage(struct size_class *class, str= uct zspage *zspage) } =20 static void create_page_chain(struct size_class *class, struct zspage *zsp= age, - struct page *pages[]) + struct zpdesc *zpdescs[]) { int i; - struct page *page; - struct page *prev_page =3D NULL; - int nr_pages =3D class->pages_per_zspage; + struct zpdesc *zpdesc; + struct zpdesc *prev_zpdesc =3D NULL; + int nr_zpdescs =3D class->pages_per_zspage; =20 /* * Allocate individual pages and link them together as: - * 1. all pages are linked together using page->index - * 2. each sub-page point to zspage using page->private + * 1. all pages are linked together using zpdesc->next + * 2. each sub-page point to zspage using zpdesc->zspage * - * we set PG_private to identify the first page (i.e. no other sub-page + * we set PG_private to identify the first zpdesc (i.e. no other zpdesc * has this flag set). 
*/ - for (i =3D 0; i < nr_pages; i++) { - page =3D pages[i]; - set_page_private(page, (unsigned long)zspage); - page->index =3D 0; + for (i =3D 0; i < nr_zpdescs; i++) { + zpdesc =3D zpdescs[i]; + zpdesc->zspage =3D zspage; + zpdesc->next =3D NULL; if (i =3D=3D 0) { - zspage->first_zpdesc =3D page_zpdesc(page); - SetPagePrivate(page); + zspage->first_zpdesc =3D zpdesc; + zpdesc_set_first(zpdesc); if (unlikely(class->objs_per_zspage =3D=3D 1 && class->pages_per_zspage =3D=3D 1)) SetZsHugePage(zspage); } else { - prev_page->index =3D (unsigned long)page; + prev_zpdesc->next =3D zpdesc; } - prev_page =3D page; + prev_zpdesc =3D zpdesc; } } =20 @@ -994,7 +1023,7 @@ static struct zspage *alloc_zspage(struct zs_pool *poo= l, gfp_t gfp) { int i; - struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE]; + struct zpdesc *zpdescs[ZS_MAX_PAGES_PER_ZSPAGE]; struct zspage *zspage =3D cache_alloc_zspage(pool, gfp); =20 if (!zspage) @@ -1004,25 +1033,25 @@ static struct zspage *alloc_zspage(struct zs_pool *= pool, migrate_lock_init(zspage); =20 for (i =3D 0; i < class->pages_per_zspage; i++) { - struct page *page; + struct zpdesc *zpdesc; =20 - page =3D alloc_page(gfp); - if (!page) { + zpdesc =3D alloc_zpdesc(gfp); + if (!zpdesc) { while (--i >=3D 0) { - dec_zone_page_state(pages[i], NR_ZSPAGES); - __ClearPageZsmalloc(pages[i]); - __free_page(pages[i]); + zpdesc_dec_zone_page_state(zpdescs[i]); + __ClearPageZsmalloc(zpdesc_page(zpdescs[i])); + free_zpdesc(zpdescs[i]); } cache_free_zspage(pool, zspage); return NULL; } - __SetPageZsmalloc(page); + __SetPageZsmalloc(zpdesc_page(zpdesc)); =20 - inc_zone_page_state(page, NR_ZSPAGES); - pages[i] =3D page; + zpdesc_inc_zone_page_state(zpdesc); + zpdescs[i] =3D zpdesc; } =20 - create_page_chain(class, zspage, pages); + create_page_chain(class, zspage, zpdescs); init_zspage(class, zspage); zspage->pool =3D pool; zspage->class =3D class->index; @@ -1753,26 +1782,28 @@ static void migrate_write_unlock(struct zspage *zsp= age) static const struct movable_operations zsmalloc_mops; =20 static void replace_sub_page(struct size_class *class, struct zspage *zspa= ge, - struct page *newpage, struct page *oldpage) + struct zpdesc *newzpdesc, struct zpdesc *oldzpdesc) { - struct page *page; - struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE] =3D {NULL, }; + struct zpdesc *zpdesc; + struct zpdesc *zpdescs[ZS_MAX_PAGES_PER_ZSPAGE] =3D {NULL, }; + unsigned int first_obj_offset; int idx =3D 0; =20 - page =3D get_first_page(zspage); + zpdesc =3D get_first_zpdesc(zspage); do { - if (page =3D=3D oldpage) - pages[idx] =3D newpage; + if (zpdesc =3D=3D oldzpdesc) + zpdescs[idx] =3D newzpdesc; else - pages[idx] =3D page; + zpdescs[idx] =3D zpdesc; idx++; - } while ((page =3D get_next_page(page)) !=3D NULL); + } while ((zpdesc =3D get_next_zpdesc(zpdesc)) !=3D NULL); =20 - create_page_chain(class, zspage, pages); - set_first_obj_offset(newpage, get_first_obj_offset(oldpage)); + create_page_chain(class, zspage, zpdescs); + first_obj_offset =3D get_first_obj_offset(zpdesc_page(oldzpdesc)); + set_first_obj_offset(zpdesc_page(newzpdesc), first_obj_offset); if (unlikely(ZsHugePage(zspage))) - newpage->index =3D oldpage->index; - __SetPageMovable(newpage, &zsmalloc_mops); + newzpdesc->handle =3D oldzpdesc->handle; + __zpdesc_set_movable(newzpdesc, &zsmalloc_mops); } =20 static bool zs_page_isolate(struct page *page, isolate_mode_t mode) @@ -1845,7 +1876,7 @@ static int zs_page_migrate(struct page *newpage, stru= ct page *page, } kunmap_atomic(s_addr); =20 - replace_sub_page(class, zspage, newpage, page); + 
replace_sub_page(class, zspage, page_zpdesc(newpage), page_zpdesc(page));
 	/*
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release migration_lock.
-- 
2.43.0

From nobody Sat Feb 7 22:55:14 2026
From: alexs@kernel.org
To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org,
    senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
    Yosry Ahmed, nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v6 07/21] mm/zsmalloc: convert obj_allocated() and related helpers to use zpdesc
Date: Tue, 13 Aug 2024 16:45:53 +0800
Message-ID: <20240813084611.4122571-8-alexs@kernel.org>
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
References: <20240813084611.4122571-1-alexs@kernel.org>

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Convert obj_allocated() and related helpers to take zpdesc. Also make
their callers cast (struct page *) to (struct zpdesc *) when calling them.
The callers will be converted gradually, as there are many.
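obj_allocated() only has to distinguish a free slot (which holds a
free-list link) from an allocated one (which holds a handle with the
OBJ_ALLOCATED_TAG bit set, as done in obj_malloc() earlier in the series).
A userspace sketch of that tag test follows; the constants are illustrative
and the huge-zspage special case, where the handle lives in the first
zpdesc, is omitted.

#include <stdbool.h>
#include <stdio.h>

#define OBJ_TAG_BITS		1UL
#define OBJ_ALLOCATED_TAG	1UL
#define OBJ_TAG_MASK		(OBJ_ALLOCATED_TAG)

/* A slot's first word is either a free-list link (tag bit clear) or a
 * handle with the tag bit set; strip the tag before returning it. */
static bool obj_allocated(unsigned long slot, unsigned long *phandle)
{
	if (!(slot & OBJ_ALLOCATED_TAG))
		return false;		/* free slot: holds a free-list link */

	*phandle = slot & ~OBJ_TAG_MASK;
	return true;
}

int main(void)
{
	unsigned long handle = 0;

	printf("allocated slot? %d\n",
	       obj_allocated(0x1000 | OBJ_ALLOCATED_TAG, &handle));
	printf("recovered handle=%#lx\n", handle);
	printf("free slot? %d\n",
	       obj_allocated(42UL << OBJ_TAG_BITS, &handle));
	return 0;
}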
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 2d0f3d9011ac..69305ddc0e81 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -839,15 +839,15 @@ static unsigned long handle_to_obj(unsigned long hand= le) return *(unsigned long *)handle; } =20 -static inline bool obj_allocated(struct page *page, void *obj, +static inline bool obj_allocated(struct zpdesc *zpdesc, void *obj, unsigned long *phandle) { unsigned long handle; - struct zspage *zspage =3D get_zspage(page); + struct zspage *zspage =3D get_zspage(zpdesc_page(zpdesc)); =20 if (unlikely(ZsHugePage(zspage))) { - VM_BUG_ON_PAGE(!is_first_page(page), page); - handle =3D page->index; + VM_BUG_ON_PAGE(!is_first_zpdesc(zpdesc), zpdesc_page(zpdesc)); + handle =3D zpdesc->handle; } else handle =3D *(unsigned long *)obj; =20 @@ -1597,18 +1597,18 @@ static void zs_object_copy(struct size_class *class= , unsigned long dst, * return handle. */ static unsigned long find_alloced_obj(struct size_class *class, - struct page *page, int *obj_idx) + struct zpdesc *zpdesc, int *obj_idx) { unsigned int offset; int index =3D *obj_idx; unsigned long handle =3D 0; - void *addr =3D kmap_atomic(page); + void *addr =3D zpdesc_kmap_atomic(zpdesc); =20 - offset =3D get_first_obj_offset(page); + offset =3D get_first_obj_offset(zpdesc_page(zpdesc)); offset +=3D class->size * index; =20 while (offset < PAGE_SIZE) { - if (obj_allocated(page, addr + offset, &handle)) + if (obj_allocated(zpdesc, addr + offset, &handle)) break; =20 offset +=3D class->size; @@ -1632,7 +1632,7 @@ static void migrate_zspage(struct zs_pool *pool, stru= ct zspage *src_zspage, struct size_class *class =3D pool->size_class[src_zspage->class]; =20 while (1) { - handle =3D find_alloced_obj(class, s_page, &obj_idx); + handle =3D find_alloced_obj(class, page_zpdesc(s_page), &obj_idx); if (!handle) { s_page =3D get_next_page(s_page); if (!s_page) @@ -1865,7 +1865,7 @@ static int zs_page_migrate(struct page *newpage, stru= ct page *page, =20 for (addr =3D s_addr + offset; addr < s_addr + PAGE_SIZE; addr +=3D class->size) { - if (obj_allocated(page, addr, &handle)) { + if (obj_allocated(page_zpdesc(page), addr, &handle)) { =20 old_obj =3D handle_to_obj(handle); obj_to_location(old_obj, &dummy, &obj_idx); --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D36A94D8DA for ; Tue, 13 Aug 2024 08:41:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538497; cv=none; b=tha9LJh8BepJteyJZi35knH0J+9xNXDt20TY6/1rOS2cQCHl9L5YRTCCW5pD51IQHKUqFNsJWvrSXGUX4mOExFLUWkU3yXp9t68i0y7RxgymwnE5jJn52o6ROAcyy4ED6XQ5/dWHwk2U2k4B6O5plmeu34E0XaZqowIhPlztTok= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538497; c=relaxed/simple; bh=WK8OGotBTrkxbKYhrb1ooiiEWg5GM27+9CUbteEDELE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lRjQxVZtB/Gfk1rxf2yScYNRBP2RhXDJ6WNxNiz6Cbi2/dsFdZ/dtNrzsJdKOA8+84tOMJpLzqgXPezDWPwdSyGf97r6EsLJlLaIt5mBfNIq48/j/ICOhDQOHcvEjpWpxI2JsJN7J728MuHKIopk1F/5p+n+uMe2N1qKj168qP4= ARC-Authentication-Results: 
From: alexs@kernel.org
To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org,
    senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
    Yosry Ahmed, nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v6 08/21] mm/zsmalloc: convert init_zspage() to use zpdesc
Date: Tue, 13 Aug 2024 16:45:54 +0800
Message-ID: <20240813084611.4122571-9-alexs@kernel.org>
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
References: <20240813084611.4122571-1-alexs@kernel.org>

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Replace the get_first_page()/get_next_page() calls and kmap_atomic() with
the new zpdesc helpers; no functional change.
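init_zspage() threads the free list through the objects themselves: the
first word of every free object stores the index of the next free object,
shifted left by OBJ_TAG_BITS so the allocated-tag bit stays clear, and the
last object gets a -1UL end marker. Below is a single-page userspace sketch
of that layout with an illustrative object size; the hand-off to the next
zpdesc of a multi-page zspage is left out.

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define OBJ_TAG_BITS	1UL

struct link_free {
	unsigned long next;	/* next free object index << OBJ_TAG_BITS */
};

/* Carve @size-byte objects out of @page and chain them, mirroring the
 * inner loop of init_zspage() for a single page. */
static void init_page(void *page, unsigned long size)
{
	struct link_free *link = page;
	unsigned int freeobj = 1;
	unsigned long off = 0;

	while ((off += size) < PAGE_SIZE) {
		link->next = freeobj++ << OBJ_TAG_BITS;
		link = (struct link_free *)((char *)page + off);
	}
	link->next = -1UL << OBJ_TAG_BITS;	/* end of the free list */
}

int main(void)
{
	static char page[PAGE_SIZE];
	unsigned long size = 512, off;

	init_page(page, size);
	for (off = 0; off < PAGE_SIZE; off += size) {
		struct link_free *link = (struct link_free *)(page + off);

		if (link->next == -1UL << OBJ_TAG_BITS)
			printf("slot %lu -> end of list\n", off / size);
		else
			printf("slot %lu -> next free %lu\n", off / size,
			       link->next >> OBJ_TAG_BITS);
	}
	return 0;
}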
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 69305ddc0e81..b07c14552276 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -942,16 +942,16 @@ static void init_zspage(struct size_class *class, str= uct zspage *zspage) { unsigned int freeobj =3D 1; unsigned long off =3D 0; - struct page *page =3D get_first_page(zspage); + struct zpdesc *zpdesc =3D get_first_zpdesc(zspage); =20 - while (page) { - struct page *next_page; + while (zpdesc) { + struct zpdesc *next_zpdesc; struct link_free *link; void *vaddr; =20 - set_first_obj_offset(page, off); + set_first_obj_offset(zpdesc_page(zpdesc), off); =20 - vaddr =3D kmap_atomic(page); + vaddr =3D zpdesc_kmap_atomic(zpdesc); link =3D (struct link_free *)vaddr + off / sizeof(*link); =20 while ((off +=3D class->size) < PAGE_SIZE) { @@ -964,8 +964,8 @@ static void init_zspage(struct size_class *class, struc= t zspage *zspage) * page, which must point to the first object on the next * page (if present) */ - next_page =3D get_next_page(page); - if (next_page) { + next_zpdesc =3D get_next_zpdesc(zpdesc); + if (next_zpdesc) { link->next =3D freeobj++ << OBJ_TAG_BITS; } else { /* @@ -975,7 +975,7 @@ static void init_zspage(struct size_class *class, struc= t zspage *zspage) link->next =3D -1UL << OBJ_TAG_BITS; } kunmap_atomic(vaddr); - page =3D next_page; + zpdesc =3D next_zpdesc; off %=3D PAGE_SIZE; } =20 --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 15F1518E043 for ; Tue, 13 Aug 2024 08:41:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538502; cv=none; b=Z1MIQWn2aD8nmkRqryK/CLx984FkjS/tDYbrBhY/o+yqaWAlOetEHKjtXJTVRAfWZ4+x7IgFqXJm4tImeYfk1csyi34s4OUKtVy4k24L6gQDi6+LU9iaw/qJZNhA5fh2vlnTw/fWkBIBzHI4AiJdA+7rGz/fxgPZpWRjDqSlhzs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538502; c=relaxed/simple; bh=TLwbltxTVpT2lgiPJc6Jxl8phogKw6tUJfKuP0Et5M4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=g7W9D/Z5+57Ne1ammkD7CYC/yZjamZmG8kV5ompReUQJ8y2AejyT1mbC2koOOVtZpwhrra/RqM/MNitfZp7hz5W1syJEea4n7wBDkCuWiRIbra1iocHAKJgI4Tdo0QzLU3wDVQ/sGEdt6Nne30v/o0QWmGEUNagPnPHjZZ0dTaw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Zn4AO2PP; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Zn4AO2PP" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6896CC4AF0B; Tue, 13 Aug 2024 08:41:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538501; bh=TLwbltxTVpT2lgiPJc6Jxl8phogKw6tUJfKuP0Et5M4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Zn4AO2PPsEJcQ4S/HgpTW9guymMeO2zo3b9e8FX/6yLaPpDw0yvUCSZEF1oFBDj2V ee4zE0Yh6pulxHPa96xkoQA8A38ttfDQppDSn3RBMZez9+OUE2NO1KlMRwQjgRWSfC MW25K0Kbmj/zZZO12e6xpBLe8W8zJA6dBZvuw6mmzVnXgdXxtQ+7LXnPPqAJFP9z8G IaZuTq09bkCVYWjqEmZZLFL5NbsJJEnOzGWh4c7AFa6Af5SoD6Yego+/BzJ2D0mO8w 
6j7jE3ZKKt/fXyxXpy/uhyjt47zFMj3n8FY+7qL5SI8jqBzoxQRKYHJ8QAxwY7xUe6 LnSzxgNYWLTpg== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 09/21] mm/zsmalloc: convert obj_to_page() and zs_free() to use zpdesc Date: Tue, 13 Aug 2024 16:45:55 +0800 Message-ID: <20240813084611.4122571-10-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Hyeonggon Yoo <42.hyeyoo@gmail.com> Rename obj_to_page() to obj_to_zpdesc() and also convert it and its user zs_free() to use zpdesc. Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index b07c14552276..cb90defd3c0a 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -814,9 +814,9 @@ static void obj_to_location(unsigned long obj, struct z= pdesc **zpdesc, *obj_idx =3D (obj & OBJ_INDEX_MASK); } =20 -static void obj_to_page(unsigned long obj, struct page **page) +static void obj_to_zpdesc(unsigned long obj, struct zpdesc **zpdesc) { - *page =3D pfn_to_page(obj >> OBJ_INDEX_BITS); + *zpdesc =3D pfn_zpdesc(obj >> OBJ_INDEX_BITS); } =20 /** @@ -1490,7 +1490,7 @@ static void obj_free(int class_size, unsigned long ob= j) void zs_free(struct zs_pool *pool, unsigned long handle) { struct zspage *zspage; - struct page *f_page; + struct zpdesc *f_zpdesc; unsigned long obj; struct size_class *class; int fullness; @@ -1504,8 +1504,8 @@ void zs_free(struct zs_pool *pool, unsigned long hand= le) */ read_lock(&pool->migrate_lock); obj =3D handle_to_obj(handle); - obj_to_page(obj, &f_page); - zspage =3D get_zspage(f_page); + obj_to_zpdesc(obj, &f_zpdesc); + zspage =3D get_zspage(zpdesc_page(f_zpdesc)); class =3D zspage_class(pool, zspage); spin_lock(&class->lock); read_unlock(&pool->migrate_lock); --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B790A197A61 for ; Tue, 13 Aug 2024 08:41:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538505; cv=none; b=QUw9PPJNtWZaml0wL9vl0aLHokyJUK4gga4WjpxJ3z5pRo7Ta43PnVGWn0g6hyUU9XE01XfuMDDo0NPWWIA9d5C4s/+eIums8NeIo5k5ZLkfWqjJL0KPf0qVqsLXAkpbUSuzUZrArb4FYxewcIHcB4aXJvn2yzPJ6RjG0R6QSqU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538505; c=relaxed/simple; bh=UmY8mz0Mo/B8gMn4RPUMsn8q8kByucvcQFpc8ROaoN8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=YT1vyvZ2gv3QQsF/9jRfn8LGXBxNm7y8KZwkil2l1O7fSuZelEl7Wn+31B+7RRboL6LsTDvMxZOJM1yj1j5XOhMOrufHGwSW6M4aovu6Gzak/4d0+PhIu8Zg5GMU531mPQkXgD2oN8IxR+SwG/fHpoIrTnQ2beFTxkuYWrAfmTE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit 
key) header.d=kernel.org header.i=@kernel.org header.b=dOOxP1Wv; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="dOOxP1Wv" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 45719C4AF10; Tue, 13 Aug 2024 08:41:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538505; bh=UmY8mz0Mo/B8gMn4RPUMsn8q8kByucvcQFpc8ROaoN8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dOOxP1WvR4nlRS/PNgkaS23vVd55SRWG+qIRE209IfYLyLsqGZjR7PklKPuEYICYJ XeL5/VZs+2k81LiQUCMDbY4hJfhw4fIletndPnqrB4P0GXkBkofSmpzqnXYOO081Vr 3SsdAS8ceOfC1hyNu2WdynXMx5o2fHHL46sbzY8uTYIY+G2F3A8bGMOAObdCoZUTPj C9KAAb+2OfoGY1Y25Imog1SooutwDOnDEry17xuJOhHxDogxrQXQUpB1DLISTIjOhT YiZI9A/IdZ7bhzYUmI36JbUZqh/Bs9jLalJ5FRfHyT+W2uibxDS6T1OFyvVi1gHpmq ZRXdWlzPjK9Sw== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 10/21] mm/zsmalloc: add zpdesc_is_isolated()/zpdesc_zone() helper for zs_page_migrate() Date: Tue, 13 Aug 2024 16:45:56 +0800 Message-ID: <20240813084611.4122571-11-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Hyeonggon Yoo <42.hyeyoo@gmail.com> To convert page to zpdesc in zs_page_migrate(), we added zpdesc_is_isolated()/zpdesc_zone() helpers. No functional change. Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zpdesc.h | 11 +++++++++++ mm/zsmalloc.c | 30 ++++++++++++++++-------------- 2 files changed, 27 insertions(+), 14 deletions(-) diff --git a/mm/zpdesc.h b/mm/zpdesc.h index 3a65a7d494b7..4b42d8517fcb 100644 --- a/mm/zpdesc.h +++ b/mm/zpdesc.h @@ -114,4 +114,15 @@ static inline void __zpdesc_set_movable(struct zpdesc = *zpdesc, { __SetPageMovable(zpdesc_page(zpdesc), mops); } + +static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc) +{ + return PageIsolated(zpdesc_page(zpdesc)); +} + +static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc) +{ + return page_zone(zpdesc_page(zpdesc)); +} + #endif diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index cb90defd3c0a..65c4252412b3 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -1824,19 +1824,21 @@ static int zs_page_migrate(struct page *newpage, st= ruct page *page, struct size_class *class; struct zspage *zspage; struct zpdesc *dummy; + struct zpdesc *newzpdesc =3D page_zpdesc(newpage); + struct zpdesc *zpdesc =3D page_zpdesc(page); void *s_addr, *d_addr, *addr; unsigned int offset; unsigned long handle; unsigned long old_obj, new_obj; unsigned int obj_idx; =20 - VM_BUG_ON_PAGE(!PageIsolated(page), page); + VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc)); =20 /* We're committed, tell the world that this is a Zsmalloc page. 
*/ - __SetPageZsmalloc(newpage); + __SetPageZsmalloc(zpdesc_page(newzpdesc)); =20 /* The page is locked, so this pointer must remain valid */ - zspage =3D get_zspage(page); + zspage =3D get_zspage(zpdesc_page(zpdesc)); pool =3D zspage->pool; =20 /* @@ -1853,30 +1855,30 @@ static int zs_page_migrate(struct page *newpage, st= ruct page *page, /* the migrate_write_lock protects zpage access via zs_map_object */ migrate_write_lock(zspage); =20 - offset =3D get_first_obj_offset(page); - s_addr =3D kmap_atomic(page); + offset =3D get_first_obj_offset(zpdesc_page(zpdesc)); + s_addr =3D zpdesc_kmap_atomic(zpdesc); =20 /* * Here, any user cannot access all objects in the zspage so let's move. */ - d_addr =3D kmap_atomic(newpage); + d_addr =3D zpdesc_kmap_atomic(newzpdesc); copy_page(d_addr, s_addr); kunmap_atomic(d_addr); =20 for (addr =3D s_addr + offset; addr < s_addr + PAGE_SIZE; addr +=3D class->size) { - if (obj_allocated(page_zpdesc(page), addr, &handle)) { + if (obj_allocated(zpdesc, addr, &handle)) { =20 old_obj =3D handle_to_obj(handle); obj_to_location(old_obj, &dummy, &obj_idx); - new_obj =3D (unsigned long)location_to_obj(newpage, + new_obj =3D (unsigned long)location_to_obj(zpdesc_page(newzpdesc), obj_idx); record_obj(handle, new_obj); } } kunmap_atomic(s_addr); =20 - replace_sub_page(class, zspage, page_zpdesc(newpage), page_zpdesc(page)); + replace_sub_page(class, zspage, newzpdesc, zpdesc); /* * Since we complete the data copy and set up new zspage structure, * it's okay to release migration_lock. @@ -1885,14 +1887,14 @@ static int zs_page_migrate(struct page *newpage, st= ruct page *page, spin_unlock(&class->lock); migrate_write_unlock(zspage); =20 - get_page(newpage); - if (page_zone(newpage) !=3D page_zone(page)) { - dec_zone_page_state(page, NR_ZSPAGES); - inc_zone_page_state(newpage, NR_ZSPAGES); + zpdesc_get(newzpdesc); + if (zpdesc_zone(newzpdesc) !=3D zpdesc_zone(zpdesc)) { + zpdesc_dec_zone_page_state(zpdesc); + zpdesc_inc_zone_page_state(newzpdesc); } =20 reset_page(page); - put_page(page); + zpdesc_put(zpdesc); =20 return MIGRATEPAGE_SUCCESS; } --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ED79619AD6C for ; Tue, 13 Aug 2024 08:41:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538510; cv=none; b=YFQIeygdoUkXY/miKIBOg1ygo366SdVaynTj8FxIb3kil0w0kISc+w4cHu5sDRH9peY+lug7StIu9BNwMDufmkDbQE0gDmaB7qwebilmf0AiRF0/aBiARL9AENnuox+JTBTILXLeme/4gjDSRQ2ZLut+9WAohv4m76YoYXxYqcs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538510; c=relaxed/simple; bh=43XgPIuMdTYjompc4vGGiVQL8qZrEOEuMCnAM7TFR7U=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ctTJLJewnhjo8g3jcjvwzCqbrY67N/e+IifyC+aFOzRJ0+4Lwf7fa2CQizEadlAErxZM/r2gX2rWA9K81lmmFpvM+Ht8B/epsWNKMUolcTyJciSqcN29mQGg8L2wfrn5QgKf5AOLFlU8tIsQ05eT4uFVj7Xvkz++T7PqhuBoPtc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Xo7gB3Bq; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Xo7gB3Bq" Received: by 
smtp.kernel.org (Postfix) with ESMTPSA id 4D8E3C4AF12; Tue, 13 Aug 2024 08:41:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538509; bh=43XgPIuMdTYjompc4vGGiVQL8qZrEOEuMCnAM7TFR7U=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Xo7gB3BqFWVM4It6qj2lRPdZgFjGIcIeJ/86oumVRtJApctRnsQGjwaEQPKdxMvdd MIjd+B5OVxXlzcGCXoZEk19wwJ3ilpyb4FBBmyoWbikJ/vlYyjgxOR2qyRs2xsJ9jh iXar1Zrpgk0vDVxw1cUMmGUgHjZbTZk5J+utPy+6t3/0MCdFiLgdWnUEKw2G4UJRry IN2pWUO+/6I0YChVjnKMQJyZTj3WiSG9RlNiPrlV7UFTJG92WEm4irz8zk14pmPVYS vkJQj3Rarc8ig0DecFxTs6bgMD/6IU+9OxCfzbfLdQztTGMrRIddglPt1yIUMHeNsk 5GoTdMIMtC6/A== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 11/21] mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it Date: Tue, 13 Aug 2024 16:45:57 +0800 Message-ID: <20240813084611.4122571-12-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Alex Shi zpdesc.zspage matches with page.private, zpdesc.next matches with page.index. They will be reset in reset_page() wich is called prior to free base pages of a zspage. Use zpdesc to replace page struct and rename it to reset_zpdesc(), few page helper still left since they are used too widely. Signed-off-by: Alex Shi --- mm/zsmalloc.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 65c4252412b3..6c5dccbc9a60 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -859,12 +859,14 @@ static inline bool obj_allocated(struct zpdesc *zpdes= c, void *obj, return true; } =20 -static void reset_page(struct page *page) +static void reset_zpdesc(struct zpdesc *zpdesc) { + struct page *page =3D zpdesc_page(zpdesc); + __ClearPageMovable(page); ClearPagePrivate(page); - set_page_private(page, 0); - page->index =3D 0; + zpdesc->zspage =3D NULL; + zpdesc->next =3D NULL; reset_first_obj_offset(page); __ClearPageZsmalloc(page); } @@ -904,7 +906,7 @@ static void __free_zspage(struct zs_pool *pool, struct = size_class *class, do { VM_BUG_ON_PAGE(!PageLocked(page), page); next =3D get_next_page(page); - reset_page(page); + reset_zpdesc(page_zpdesc(page)); unlock_page(page); dec_zone_page_state(page, NR_ZSPAGES); put_page(page); @@ -1893,7 +1895,7 @@ static int zs_page_migrate(struct page *newpage, stru= ct page *page, zpdesc_inc_zone_page_state(newzpdesc); } =20 - reset_page(page); + reset_zpdesc(zpdesc); zpdesc_put(zpdesc); =20 return MIGRATEPAGE_SUCCESS; --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CDCEC16E89B for ; Tue, 13 Aug 2024 08:41:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538513; cv=none; 
b=qaXoaHD3pLo5KcdY6KbiRrA7L7q3JqQpIrFmD2JHncFZAyMR867qEhWwGiUbMVgjRFqwfC/FxzL9ljdGYYv4ekNrXJHToilWnpQxAiOATffgb0GDjLm/FpVZM48HlSYc/qDC+HDHClV0u3vLvKnEhG41v42gkwFBnhvlHXlgsIA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538513; c=relaxed/simple; bh=Zf9TjxFFbO1e13JTKQj4PfBu8BRbAcD0oZRJJHyNdF8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=LQktexD/7dEouMTJTCofdVn/ZeMQpnwsrzkKqFoCkKemNVnQszQX5CjC6x3BW/yhgNxDt39IN7o47pRa10YnUqAstNL0e3jlESah/+0bejKox59cfo7/xqQt30OVIXT/OC8yX6KdYO0caQy0gteRWD9WUZpC/BuLIHYGV0uhKZI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Ng8rlXIL; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Ng8rlXIL" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2AA27C4AF11; Tue, 13 Aug 2024 08:41:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538513; bh=Zf9TjxFFbO1e13JTKQj4PfBu8BRbAcD0oZRJJHyNdF8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Ng8rlXILf+oInKW3qAfGb1XtZwQc0EivxacQrteO99zmio5Op2vaSm9cKsH+s1glz hJ0rJrDKlc0/qijL9tJ01zDCu1RpN1zV2AEN9qj11s9Zx10q5OPDKnwo3duHiTiPNx vKSDhEysjCzTbVyFhzVB73nFhg0avnJsnwNCvZDzwTxe8BfEcu0mqXawP/+s19lgK1 phB0C4MjtZvYdtZ79SPDYXjiQjHUc068HvT8fcgQLniUWBeoVsyl6RpGZVF6TwGvSR hW3TDaMc3aKQLK+52Loibwk8NS3uYXPbVig0vN29xtUkM18bRFwOoBXm9j6PKwhxz4 /Hx8TbYDOrQnw== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 12/21] mm/zsmalloc: convert __free_zspage() to use zdsesc Date: Tue, 13 Aug 2024 16:45:58 +0800 Message-ID: <20240813084611.4122571-13-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Hyeonggon Yoo <42.hyeyoo@gmail.com> Introduce zpdesc_is_locked() and convert __free_zspage() to use zpdesc. 
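For reference, decoded from the diff below ("=3D" stands for "="): the new helper is a thin wrapper over the page-flag test, and the free loop becomes a plain zpdesc walk:

  static inline bool zpdesc_is_locked(struct zpdesc *zpdesc)
  {
          return PageLocked(zpdesc_page(zpdesc));
  }

  /* __free_zspage() loop after this patch */
  next = zpdesc = get_first_zpdesc(zspage);
  do {
          VM_BUG_ON_PAGE(!zpdesc_is_locked(zpdesc), zpdesc_page(zpdesc));
          next = get_next_zpdesc(zpdesc);
          reset_zpdesc(zpdesc);
          zpdesc_unlock(zpdesc);
          zpdesc_dec_zone_page_state(zpdesc);
          zpdesc_put(zpdesc);
          zpdesc = next;
  } while (zpdesc != NULL);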
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zpdesc.h | 4 ++++ mm/zsmalloc.c | 20 ++++++++++---------- 2 files changed, 14 insertions(+), 10 deletions(-) diff --git a/mm/zpdesc.h b/mm/zpdesc.h index 4b42d8517fcb..a1834d36ccfc 100644 --- a/mm/zpdesc.h +++ b/mm/zpdesc.h @@ -125,4 +125,8 @@ static inline struct zone *zpdesc_zone(struct zpdesc *z= pdesc) return page_zone(zpdesc_page(zpdesc)); } =20 +static inline bool zpdesc_is_locked(struct zpdesc *zpdesc) +{ + return PageLocked(zpdesc_page(zpdesc)); +} #endif diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 6c5dccbc9a60..7b3344c226a0 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -895,23 +895,23 @@ static int trylock_zspage(struct zspage *zspage) static void __free_zspage(struct zs_pool *pool, struct size_class *class, struct zspage *zspage) { - struct page *page, *next; + struct zpdesc *zpdesc, *next; =20 assert_spin_locked(&class->lock); =20 VM_BUG_ON(get_zspage_inuse(zspage)); VM_BUG_ON(zspage->fullness !=3D ZS_INUSE_RATIO_0); =20 - next =3D page =3D get_first_page(zspage); + next =3D zpdesc =3D get_first_zpdesc(zspage); do { - VM_BUG_ON_PAGE(!PageLocked(page), page); - next =3D get_next_page(page); - reset_zpdesc(page_zpdesc(page)); - unlock_page(page); - dec_zone_page_state(page, NR_ZSPAGES); - put_page(page); - page =3D next; - } while (page !=3D NULL); + VM_BUG_ON_PAGE(!zpdesc_is_locked(zpdesc), zpdesc_page(zpdesc)); + next =3D get_next_zpdesc(zpdesc); + reset_zpdesc(zpdesc); + zpdesc_unlock(zpdesc); + zpdesc_dec_zone_page_state(zpdesc); + zpdesc_put(zpdesc); + zpdesc =3D next; + } while (zpdesc !=3D NULL); =20 cache_free_zspage(pool, zspage); =20 --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D4A0216DEBD for ; Tue, 13 Aug 2024 08:41:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538517; cv=none; b=E2GZ+VJifIiOhVQolue4UVifON8lxo5CWLfRVENltHyW/EEQQqFKG9smQjgIY+oCuso2ufgDOGfawuNCt8tOu7fgjJ0VtYD52JH2x+DvRoDM956sKeEHwPjWYT3GkN+PRBRkWoroawIGv0nkmYdQ1Dqh6iDl16FTCUHRn59qVzc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538517; c=relaxed/simple; bh=P2h4TL0al2Tdq16vEvHZlxJuPRha9tB1mdH4uW7GsvU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Kfx0+WR6Hgx5CBEZPVNPj6a6i2jl+QvR+G80wskUlYnjotZ0Va2Fn+0/hPWPQWELKUrrYabrstm09IBZWQw14PhRa5JPXhnD1RcvicPnSln1qLnX/OAKuL42WoJyaKGmxk4eTOCgH4DWNrUAdomCdD8PcRICYuc+0YrO1bbl+Vk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=SyU1F0CU; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="SyU1F0CU" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 094C4C4AF09; Tue, 13 Aug 2024 08:41:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538517; bh=P2h4TL0al2Tdq16vEvHZlxJuPRha9tB1mdH4uW7GsvU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SyU1F0CUC5fJJ+b0xwqG/Ht51CluyUnYwwigzprVYJCkZsEq4VEoBHyVwwQ2VHi+8 3QoMrvaINRWhJ/usOIFbXtV2ZefP60g2/PcvhoM6VIgi9e9J5p0EcYK7sudn+/ET3Z 
BiXsmhroLMiO0q5npsqO+h9hJRPRkv/trQP27fIIiGhqhZJt+2OdqFtovOSWnue1zN yc4GNEfoV6BOTJicX27tXNKpte65+7eeh9T6ZKi1C+gw/Wt9/mMOG5Qx6CYVTu3ouo 1qWLwB/0gyxRxR3udXWKOGCr8+5kajpmf1mzvCscAOe45l1aezR3oeHw+vkUDjAd7d Vb9d+lE8wwL/Q== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 13/21] mm/zsmalloc: convert location_to_obj() to take zpdesc Date: Tue, 13 Aug 2024 16:45:59 +0800 Message-ID: <20240813084611.4122571-14-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Hyeonggon Yoo <42.hyeyoo@gmail.com> As all users of location_to_obj() now use zpdesc, convert location_to_obj() to take zpdesc. Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 7b3344c226a0..9218f1e6e8ef 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -820,15 +820,15 @@ static void obj_to_zpdesc(unsigned long obj, struct z= pdesc **zpdesc) } =20 /** - * location_to_obj - get obj value encoded from (, ) - * @page: page object resides in zspage + * location_to_obj - get obj value encoded from (, ) + * @zpdesc: zpdesc object resides in zspage * @obj_idx: object index */ -static unsigned long location_to_obj(struct page *page, unsigned int obj_i= dx) +static unsigned long location_to_obj(struct zpdesc *zpdesc, unsigned int o= bj_idx) { unsigned long obj; =20 - obj =3D page_to_pfn(page) << OBJ_INDEX_BITS; + obj =3D zpdesc_pfn(zpdesc) << OBJ_INDEX_BITS; obj |=3D obj_idx & OBJ_INDEX_MASK; =20 return obj; @@ -1386,7 +1386,7 @@ static unsigned long obj_malloc(struct zs_pool *pool, kunmap_atomic(vaddr); mod_zspage_inuse(zspage, 1); =20 - obj =3D location_to_obj(zpdesc_page(m_zpdesc), obj); + obj =3D location_to_obj(m_zpdesc, obj); record_obj(handle, obj); =20 return obj; @@ -1873,8 +1873,7 @@ static int zs_page_migrate(struct page *newpage, stru= ct page *page, =20 old_obj =3D handle_to_obj(handle); obj_to_location(old_obj, &dummy, &obj_idx); - new_obj =3D (unsigned long)location_to_obj(zpdesc_page(newzpdesc), - obj_idx); + new_obj =3D (unsigned long)location_to_obj(newzpdesc, obj_idx); record_obj(handle, new_obj); } } --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5BACB16DEBD for ; Tue, 13 Aug 2024 08:42:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538521; cv=none; b=WuXPRRozUSpv4QvP/LQ84Gq7y4eO3JPSo4Il2XaA87PTYOstZTiXg0chAqQAWEpNo6Rhaw+vxdfKQy2cJKdlxsN6c7zsOpTWVhbsgyvkHqDkbahRpWkAFnxLYcLCj4bU497II6y/pxRh9u8PcDRconFIDEuwHUi70DxVG2QPLF4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538521; c=relaxed/simple; 
bh=lwqsbsNLYr/NAY4uxgyMtSwAZaoCij/qukX2cmqm4rY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=eORhOG63M7gIazqA+9WY3RQfcTj/d+llKFh5vRmAIKbOpY4sUSjIqZ0SjM72o4gGpBiMY6MN6F3fC6gadFnaXo5Z3vQ+L73KVZZxoXVgYzLEOtXV4c9yonunka67tngw0kKHPZMCB2aGBFT1XlMkmedBSvswLdKoQELCQqjnOEk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=FD7LH1Td; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="FD7LH1Td" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 12BF9C4AF10; Tue, 13 Aug 2024 08:41:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538521; bh=lwqsbsNLYr/NAY4uxgyMtSwAZaoCij/qukX2cmqm4rY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FD7LH1TdysUcP3n1SbAE6auP5bdtAyVHXf3rUWPZVQR1EC39qS51lMmMomKTs3fvy 7fsB8mSnAG/hgU65T4GMmYCDPKayYBgRSRtWVmfvdU5gEtZA+kPGlUaeTzdQRp8vjf 4U7tvJjBF5qFb9kSP0xZrNyJTFcF8lSvOl0NmegGoWZBVevdSVEHorHaBIh5yFkCr4 3ob6BNnneg8KHjTpnx2o2raeE/wob3UTFXVqrK0UqM7kJtjrkHE12Xj9jXvQhYoeyQ ktd8xeRN0S6JfPDRmiWXuDQ0JLZs24fIZ42/mgUTS4NExhZ8tTjn+QBkifl5Mj3Nam btW7bLc+cfArw== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 14/21] mm/zsmalloc: convert migrate_zspage() to use zpdesc Date: Tue, 13 Aug 2024 16:46:00 +0800 Message-ID: <20240813084611.4122571-15-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Hyeonggon Yoo <42.hyeyoo@gmail.com> Use get_first_zpdesc/get_next_zpdesc to replace get_first/next_page. No functional change. 
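For reference, the scan loop after this patch looks like the following (decoded from the quoted-printable diff below; the object-moving body is elided):

  struct zpdesc *s_zpdesc = get_first_zpdesc(src_zspage);
  struct size_class *class = pool->size_class[src_zspage->class];

  while (1) {
          handle = find_alloced_obj(class, s_zpdesc, &obj_idx);
          if (!handle) {
                  s_zpdesc = get_next_zpdesc(s_zpdesc);
                  if (!s_zpdesc)
                          break;
                  obj_idx = 0;
                  continue;
          }
          /* ... migrate this object, unchanged ... */
  }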
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 9218f1e6e8ef..f93af6e10c3a 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -1630,14 +1630,14 @@ static void migrate_zspage(struct zs_pool *pool, st= ruct zspage *src_zspage, unsigned long used_obj, free_obj; unsigned long handle; int obj_idx =3D 0; - struct page *s_page =3D get_first_page(src_zspage); + struct zpdesc *s_zpdesc =3D get_first_zpdesc(src_zspage); struct size_class *class =3D pool->size_class[src_zspage->class]; =20 while (1) { - handle =3D find_alloced_obj(class, page_zpdesc(s_page), &obj_idx); + handle =3D find_alloced_obj(class, s_zpdesc, &obj_idx); if (!handle) { - s_page =3D get_next_page(s_page); - if (!s_page) + s_zpdesc =3D get_next_zpdesc(s_zpdesc); + if (!s_zpdesc) break; obj_idx =3D 0; continue; --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6922819AD89 for ; Tue, 13 Aug 2024 08:42:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538525; cv=none; b=g8g4lrzesyAp2iNJxojhxKI51Jls5RjJ+hn9876AiMx2iML35QiVPP0u1gVzQUg/htxRIcMuN8qd1o9gEdjji8tTwCM9vF02+LRoAfV482csZucyDZ4bCaYttVdRP1U3xy2optpMXPHJZwGbGrBFme7G0EyuCsjZdaaFG83tgHs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538525; c=relaxed/simple; bh=tL/aSxtJHYmt2JA8n2u9RLnt0i9TICyqYsVXNrqh3nU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=tN4N3g/qmbCM2ymQhH0fcR9OalCRYCKVWJT6cyDFt3VMb3YVTzz8sxmTSkFyHlkUIfGPjAdZQWxyfKazx/lf041IvjkFX6gzJrxlflGiy/NVkJBu0Y8HaDwddS9RL9pJ/AYU7HAAKXQEwgwPRZas07w2dPuf74waFqHf6rv89jc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=D0tJ5d2Y; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="D0tJ5d2Y" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E40DCC4AF16; Tue, 13 Aug 2024 08:42:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538525; bh=tL/aSxtJHYmt2JA8n2u9RLnt0i9TICyqYsVXNrqh3nU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=D0tJ5d2Ykx0EcLRUEijuPnSiFdQLUU+KiE+TEkgtbafVTWnhrhpjOOGIrKgUD+c1P nWObHJs0C3iwN/7tMgmGpNy89peRAfNOizZEYiOy6KG0gDzXJFn7hm3uhifphw5MqJ HpLxzjmtjt1nX7g1JiLabCtATwm1VUdkHHigB9bxgc3mDSZU1X8Iiwhm3iJOp0U3Og TZQDncbP6fbBm1G+6g3EGSdeyhoHL166jXO92b579SmM0/4viqqMgA4EXFUSis7uEd LUaTOvfVj/9xJVb+GTjsTyke/3ZEeF3t2nH4qXnAK6+sVBkbQnTxhYSP8Vmjcu3O4I Dyl83gBdiNbSQ== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 15/21] mm/zsmalloc: convert get_zspage() to take zpdesc Date: Tue, 13 Aug 2024 16:46:01 +0800 Message-ID: <20240813084611.4122571-16-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: 
<20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Hyeonggon Yoo <42.hyeyoo@gmail.com> Now that all users except get_next_page() (which will be removed in later patch) use zpdesc, convert get_zspage() to take zpdesc instead of page. Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index f93af6e10c3a..5e96ff2ee0d4 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -773,9 +773,9 @@ static int fix_fullness_group(struct size_class *class,= struct zspage *zspage) return newfg; } =20 -static struct zspage *get_zspage(struct page *page) +static struct zspage *get_zspage(struct zpdesc *zpdesc) { - struct zspage *zspage =3D (struct zspage *)page_private(page); + struct zspage *zspage =3D zpdesc->zspage; =20 BUG_ON(zspage->magic !=3D ZSPAGE_MAGIC); return zspage; @@ -783,7 +783,7 @@ static struct zspage *get_zspage(struct page *page) =20 static struct page *get_next_page(struct page *page) { - struct zspage *zspage =3D get_zspage(page); + struct zspage *zspage =3D get_zspage(page_zpdesc(page)); =20 if (unlikely(ZsHugePage(zspage))) return NULL; @@ -793,7 +793,7 @@ static struct page *get_next_page(struct page *page) =20 static struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc) { - struct zspage *zspage =3D get_zspage(zpdesc_page(zpdesc)); + struct zspage *zspage =3D get_zspage(zpdesc); =20 if (unlikely(ZsHugePage(zspage))) return NULL; @@ -843,7 +843,7 @@ static inline bool obj_allocated(struct zpdesc *zpdesc,= void *obj, unsigned long *phandle) { unsigned long handle; - struct zspage *zspage =3D get_zspage(zpdesc_page(zpdesc)); + struct zspage *zspage =3D get_zspage(zpdesc); =20 if (unlikely(ZsHugePage(zspage))) { VM_BUG_ON_PAGE(!is_first_zpdesc(zpdesc), zpdesc_page(zpdesc)); @@ -1259,7 +1259,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned lo= ng handle, read_lock(&pool->migrate_lock); obj =3D handle_to_obj(handle); obj_to_location(obj, &zpdesc, &obj_idx); - zspage =3D get_zspage(zpdesc_page(zpdesc)); + zspage =3D get_zspage(zpdesc); =20 /* * migration cannot move any zpages in this zspage. 
Here, class->lock @@ -1309,7 +1309,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned l= ong handle) =20 obj =3D handle_to_obj(handle); obj_to_location(obj, &zpdesc, &obj_idx); - zspage =3D get_zspage(zpdesc_page(zpdesc)); + zspage =3D get_zspage(zpdesc); class =3D zspage_class(pool, zspage); off =3D offset_in_page(class->size * obj_idx); =20 @@ -1473,7 +1473,7 @@ static void obj_free(int class_size, unsigned long ob= j) =20 obj_to_location(obj, &f_zpdesc, &f_objidx); f_offset =3D offset_in_page(class_size * f_objidx); - zspage =3D get_zspage(zpdesc_page(f_zpdesc)); + zspage =3D get_zspage(f_zpdesc); =20 vaddr =3D zpdesc_kmap_atomic(f_zpdesc); link =3D (struct link_free *)(vaddr + f_offset); @@ -1507,7 +1507,7 @@ void zs_free(struct zs_pool *pool, unsigned long hand= le) read_lock(&pool->migrate_lock); obj =3D handle_to_obj(handle); obj_to_zpdesc(obj, &f_zpdesc); - zspage =3D get_zspage(zpdesc_page(f_zpdesc)); + zspage =3D get_zspage(f_zpdesc); class =3D zspage_class(pool, zspage); spin_lock(&class->lock); read_unlock(&pool->migrate_lock); @@ -1840,7 +1840,7 @@ static int zs_page_migrate(struct page *newpage, stru= ct page *page, __SetPageZsmalloc(zpdesc_page(newzpdesc)); =20 /* The page is locked, so this pointer must remain valid */ - zspage =3D get_zspage(zpdesc_page(zpdesc)); + zspage =3D get_zspage(zpdesc); pool =3D zspage->pool; =20 /* --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 69D5E19B3EE for ; Tue, 13 Aug 2024 08:42:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538529; cv=none; b=B1PLHoL6szQMOxYqwCUScSxKA9pLZlDsSufrRYCcRr23U3+W7uOh5T0OmQLlWmygzzSzKq9lffZl+hfFULqVajcjMnH8nIznVB5/l1odNkY4K1pMGIdnRQRVfPWcueM8i/S0/Tgj9phwTGMkACrvFBw7GuYfQ2IXrmMDnud/wCk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538529; c=relaxed/simple; bh=gQaMHcDtxoBXgRT3p7vOtyKNFbzrfkZ7CKreedzRtjs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=n2keYA+cl8hnZc7165NXF8a97scIXEtATMBtD4DBhoVMtfnPC13Sh3wNZQhuj3wMm7DviNRfumd3jIBiBblW4nCbJl1ievUlW0ArYmdyLcOmxmc7Xmq20QCvZTVrUEuSFuHq8K1HgJnM1qP7vQmMwFf2JGG9hu7PtunbYTEc7jM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=JgfTSlvZ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="JgfTSlvZ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id EE86FC4AF13; Tue, 13 Aug 2024 08:42:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538529; bh=gQaMHcDtxoBXgRT3p7vOtyKNFbzrfkZ7CKreedzRtjs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JgfTSlvZRX5rj4F+je0IgQVoumPPB1HFUsceRotVPVhoikowdDEW0P2qObUTYI398 gR1/EIMADa8MqSXujMKyHg/EExmLFsocPWJg6SoEbAOwiulDmKDAQOclW/XRzGG0uC KpWTOdWx2HgFPAywG6qxV7VN5PH3agkB+nina86PjV5/5qTIA7ebSafJN3QNkipVW5 AUs2ctwy+2+ZBzAftUKxISXseq9T18Wg5QL6U8S+DU1CjGcZJJOMb1wab8R8Mn1GEb IzCYbHJRKIqxQVx1HPqmCnGxr5TG8ZZr16pysTaSatTJE36v7R6CxlKYcdWuIyKVIE HNnVoo5DzUppA== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , 
linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 16/21] mm/zsmalloc: convert SetZsPageMovable and remove unused funcs Date: Tue, 13 Aug 2024 16:46:02 +0800 Message-ID: <20240813084611.4122571-17-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Alex Shi Convert SetZsPageMovable() to use zpdesc, and then remove unused funcs: get_next_page()/get_first_page()/is_first_page(). Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 33 +++++---------------------------- 1 file changed, 5 insertions(+), 28 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 5e96ff2ee0d4..e2b203db192a 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -462,11 +462,6 @@ static DEFINE_PER_CPU(struct mapping_area, zs_map_area= ) =3D { .lock =3D INIT_LOCAL_LOCK(lock), }; =20 -static __maybe_unused int is_first_page(struct page *page) -{ - return PagePrivate(page); -} - static inline bool is_first_zpdesc(struct zpdesc *zpdesc) { return PagePrivate(zpdesc_page(zpdesc)); @@ -483,14 +478,6 @@ static inline void mod_zspage_inuse(struct zspage *zsp= age, int val) zspage->inuse +=3D val; } =20 -static inline struct page *get_first_page(struct zspage *zspage) -{ - struct page *first_page =3D zpdesc_page(zspage->first_zpdesc); - - VM_BUG_ON_PAGE(!is_first_page(first_page), first_page); - return first_page; -} - static struct zpdesc *get_first_zpdesc(struct zspage *zspage) { struct zpdesc *first_zpdesc =3D zspage->first_zpdesc; @@ -781,16 +768,6 @@ static struct zspage *get_zspage(struct zpdesc *zpdesc) return zspage; } =20 -static struct page *get_next_page(struct page *page) -{ - struct zspage *zspage =3D get_zspage(page_zpdesc(page)); - - if (unlikely(ZsHugePage(zspage))) - return NULL; - - return (struct page *)page->index; -} - static struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc) { struct zspage *zspage =3D get_zspage(zpdesc); @@ -1964,13 +1941,13 @@ static void init_deferred_free(struct zs_pool *pool) =20 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) { - struct page *page =3D get_first_page(zspage); + struct zpdesc *zpdesc =3D get_first_zpdesc(zspage); =20 do { - WARN_ON(!trylock_page(page)); - __SetPageMovable(page, &zsmalloc_mops); - unlock_page(page); - } while ((page =3D get_next_page(page)) !=3D NULL); + WARN_ON(!zpdesc_trylock(zpdesc)); + __zpdesc_set_movable(zpdesc, &zsmalloc_mops); + zpdesc_unlock(zpdesc); + } while ((zpdesc =3D get_next_zpdesc(zpdesc)) !=3D NULL); } #else static inline void zs_flush_migration(struct zs_pool *pool) { } --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7BA2019CCE8 for ; Tue, 13 Aug 2024 08:42:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; 
t=1723538533; cv=none; b=RxTGS+enVds5IyVZctGb8sFGqPavcbdZVBN7xPCyjvE8dJ5NWkImNiZ31lFXmrWjLgTSQjbZcKFldlCk3ucPSbAICmkS/S9iU4tGhEBSUt3jk/kdR1l9VRBw8g5ilsa2AYiUUgYPDoCFPPhaZk+aCROs5qAfgbt6Y1F6zojGfgM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538533; c=relaxed/simple; bh=8TsEc1mQDjgsj8r155q+1srlMOAnmBBOJU38nCFETQE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=adxpGvfWaIs6xAMyBz+5yFnXykaOPFwBQrcNUo3CZuoOdti1pcITUO16YA/PxymNm/fyhMApkbe3UQV8Cxowrh8LHwXzcDQVm+TBHoBPSUVlsUm36ZLFKmvjOaPxM85/8Q16vaK6SL9Tc+R+CWL/5qRq7NOZU9SiB7uBv2Nbv3A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=cyXENoc4; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="cyXENoc4" Received: by smtp.kernel.org (Postfix) with ESMTPSA id F3D94C4AF09; Tue, 13 Aug 2024 08:42:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538533; bh=8TsEc1mQDjgsj8r155q+1srlMOAnmBBOJU38nCFETQE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=cyXENoc4P62M2wfhgRRIdhledFsy/PPHOdILmE+RD6XzAahA3OzA3VF9DM6zOSCoy u1k5uThAaRqIGvfMJb3qwlxKDH5Bc+jo/ELrtOcius+pSxrZu+unAkJohA+TEjsomK Q9ugK5Af0stkeShlVpT7BbLXOsslLhd2x80E336Te/5DhiYu4aKBmX/nCYEFTpkeuJ ZGWQv62YyFDgnMRa1mnUu7X2rkdLlxqjuG+p0kUVj1S1T9k3GPAssIrfEgYe78oyib IWEIkNs5axTKVNgVzYx7J00DrOdE1P8HL+qy/HVN2sz4Nl6Sdwmq4j96BmCBTeY2wJ qVDdhTuO1dZAw== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 17/21] mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc Date: Tue, 13 Aug 2024 16:46:03 +0800 Message-ID: <20240813084611.4122571-18-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Alex Shi Now that all users of get/set_first_obj_offset() are converted to use zpdesc, convert them to take zpdesc. 
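For reference, the accessors after this patch operate on zpdesc->first_obj_offset directly (decoded from the diff below; FIRST_OBJ_PAGE_TYPE_MASK stays 0xffff):

  static inline unsigned int get_first_obj_offset(struct zpdesc *zpdesc)
  {
          VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc)));
          return zpdesc->first_obj_offset & FIRST_OBJ_PAGE_TYPE_MASK;
  }

  static inline void set_first_obj_offset(struct zpdesc *zpdesc, unsigned int offset)
  {
          /* With 16 bit available, we can support offsets into 64 KiB pages. */
          BUILD_BUG_ON(PAGE_SIZE > SZ_64K);
          VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc)));
          VM_WARN_ON_ONCE(offset & ~FIRST_OBJ_PAGE_TYPE_MASK);
          zpdesc->first_obj_offset &= ~FIRST_OBJ_PAGE_TYPE_MASK;
          zpdesc->first_obj_offset |= offset & FIRST_OBJ_PAGE_TYPE_MASK;
  }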
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Alex Shi --- mm/zsmalloc.c | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index e2b203db192a..8553542edacb 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -488,26 +488,26 @@ static struct zpdesc *get_first_zpdesc(struct zspage = *zspage) =20 #define FIRST_OBJ_PAGE_TYPE_MASK 0xffff =20 -static inline void reset_first_obj_offset(struct page *page) +static inline void reset_first_obj_offset(struct zpdesc *zpdesc) { - VM_WARN_ON_ONCE(!PageZsmalloc(page)); - page->page_type |=3D FIRST_OBJ_PAGE_TYPE_MASK; + VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc))); + zpdesc->first_obj_offset |=3D FIRST_OBJ_PAGE_TYPE_MASK; } =20 -static inline unsigned int get_first_obj_offset(struct page *page) +static inline unsigned int get_first_obj_offset(struct zpdesc *zpdesc) { - VM_WARN_ON_ONCE(!PageZsmalloc(page)); - return page->page_type & FIRST_OBJ_PAGE_TYPE_MASK; + VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc))); + return zpdesc->first_obj_offset & FIRST_OBJ_PAGE_TYPE_MASK; } =20 -static inline void set_first_obj_offset(struct page *page, unsigned int of= fset) +static inline void set_first_obj_offset(struct zpdesc *zpdesc, unsigned in= t offset) { /* With 16 bit available, we can support offsets into 64 KiB pages. */ BUILD_BUG_ON(PAGE_SIZE > SZ_64K); - VM_WARN_ON_ONCE(!PageZsmalloc(page)); + VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc))); VM_WARN_ON_ONCE(offset & ~FIRST_OBJ_PAGE_TYPE_MASK); - page->page_type &=3D ~FIRST_OBJ_PAGE_TYPE_MASK; - page->page_type |=3D offset & FIRST_OBJ_PAGE_TYPE_MASK; + zpdesc->first_obj_offset &=3D ~FIRST_OBJ_PAGE_TYPE_MASK; + zpdesc->first_obj_offset |=3D offset & FIRST_OBJ_PAGE_TYPE_MASK; } =20 static inline unsigned int get_freeobj(struct zspage *zspage) @@ -844,7 +844,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc) ClearPagePrivate(page); zpdesc->zspage =3D NULL; zpdesc->next =3D NULL; - reset_first_obj_offset(page); + reset_first_obj_offset(zpdesc); __ClearPageZsmalloc(page); } =20 @@ -928,7 +928,7 @@ static void init_zspage(struct size_class *class, struc= t zspage *zspage) struct link_free *link; void *vaddr; =20 - set_first_obj_offset(zpdesc_page(zpdesc), off); + set_first_obj_offset(zpdesc, off); =20 vaddr =3D zpdesc_kmap_atomic(zpdesc); link =3D (struct link_free *)vaddr + off / sizeof(*link); @@ -1583,7 +1583,7 @@ static unsigned long find_alloced_obj(struct size_cla= ss *class, unsigned long handle =3D 0; void *addr =3D zpdesc_kmap_atomic(zpdesc); =20 - offset =3D get_first_obj_offset(zpdesc_page(zpdesc)); + offset =3D get_first_obj_offset(zpdesc); offset +=3D class->size * index; =20 while (offset < PAGE_SIZE) { @@ -1778,8 +1778,8 @@ static void replace_sub_page(struct size_class *class= , struct zspage *zspage, } while ((zpdesc =3D get_next_zpdesc(zpdesc)) !=3D NULL); =20 create_page_chain(class, zspage, zpdescs); - first_obj_offset =3D get_first_obj_offset(zpdesc_page(oldzpdesc)); - set_first_obj_offset(zpdesc_page(newzpdesc), first_obj_offset); + first_obj_offset =3D get_first_obj_offset(oldzpdesc); + set_first_obj_offset(newzpdesc, first_obj_offset); if (unlikely(ZsHugePage(zspage))) newzpdesc->handle =3D oldzpdesc->handle; __zpdesc_set_movable(newzpdesc, &zsmalloc_mops); @@ -1834,7 +1834,7 @@ static int zs_page_migrate(struct page *newpage, stru= ct page *page, /* the migrate_write_lock protects zpage access via zs_map_object */ migrate_write_lock(zspage); =20 - offset =3D 
get_first_obj_offset(zpdesc_page(zpdesc)); + offset =3D get_first_obj_offset(zpdesc); s_addr =3D zpdesc_kmap_atomic(zpdesc); =20 /* --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5B78319CD0E for ; Tue, 13 Aug 2024 08:42:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538537; cv=none; b=V36ILyIdY1ZIITct2dovVVCZl6D6qTBQjIx3MNXA7HnFRgG82MyZw4LQK/s1s3cNmaRFkO35viPqDz9DIyuM5Ze3FDQI2X8NVvx3EUiUb8n+rtX/htQawymb7xzAHG6riRg9LyWdDs2M0HfW30SAxJieXE6KzIzsJb3H8VTBmOQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538537; c=relaxed/simple; bh=xc1UgoycHzcMSLrT87O9Jysr++5t0k6ZYOyK9RWDCgc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=OVKbkYz3v61OjJZIHVOF0uHwYYKE43F97Auxk4B7jNRVNJOwzGsNZoz6AolXB3tzcr7ln2eCPUxpav4+wm+mfpCEMoZ62v/jCHUWbBEltI/viAAR9dsWC7ds/2y+IgoP7YRn2w3JjWLkiiIcDnppJPx5GXnrnbBiYvVgROpZkUI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=mxpHjX6D; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="mxpHjX6D" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 053CBC4AF15; Tue, 13 Aug 2024 08:42:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538537; bh=xc1UgoycHzcMSLrT87O9Jysr++5t0k6ZYOyK9RWDCgc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mxpHjX6Dipw7WJGW9x/CzrdZjjx3J2swcfh5GtuduL50FV7HjHZSe+ro4MjvMCUyA PcMTPHKXGAN19nP01URDq5t9Iau8uj9OiNhIq5TxCQZY2yn6OSUljkeCv2ZtSh4LTj u04TdFn20fcNRN0p5vZ+wuShi2vcaTbze1tOo/JWqVvoHy8OyGilw1ZzeTE1cSskNZ 4JHIFSL7oKHyZIeEv5GStb+5Rn3G3hQdbzyFDPCE5QIgtRD1iQFYBOSbujz//injjQ G8i7trnT5ZET9efzMHq8kwGTqu7jjEMeVLgB09lYALGXQLbWfO/eCg9vElqgQcPykN cD7JvS6W96uRA== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 18/21] mm/zsmalloc: introduce __zpdesc_clear_movable Date: Tue, 13 Aug 2024 16:46:04 +0800 Message-ID: <20240813084611.4122571-19-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Alex Shi Add a helper __zpdesc_clear_movable() for __ClearPageMovable(), and use it in callers to make code clear. 
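For reference, decoded from the diff below, the wrapper and its use in reset_zpdesc():

  static inline void __zpdesc_clear_movable(struct zpdesc *zpdesc)
  {
          __ClearPageMovable(zpdesc_page(zpdesc));
  }

  /* in reset_zpdesc() */
  __zpdesc_clear_movable(zpdesc);   /* was: __ClearPageMovable(page); */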
Signed-off-by: Alex Shi --- mm/zpdesc.h | 5 +++++ mm/zsmalloc.c | 2 +- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/mm/zpdesc.h b/mm/zpdesc.h index a1834d36ccfc..747a2d410a35 100644 --- a/mm/zpdesc.h +++ b/mm/zpdesc.h @@ -115,6 +115,11 @@ static inline void __zpdesc_set_movable(struct zpdesc = *zpdesc, __SetPageMovable(zpdesc_page(zpdesc), mops); } =20 +static inline void __zpdesc_clear_movable(struct zpdesc *zpdesc) +{ + __ClearPageMovable(zpdesc_page(zpdesc)); +} + static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc) { return PageIsolated(zpdesc_page(zpdesc)); diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 8553542edacb..64b9ea011111 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -840,7 +840,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc) { struct page *page =3D zpdesc_page(zpdesc); =20 - __ClearPageMovable(page); + __zpdesc_clear_movable(zpdesc); ClearPagePrivate(page); zpdesc->zspage =3D NULL; zpdesc->next =3D NULL; --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7A7E817C228 for ; Tue, 13 Aug 2024 08:42:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538541; cv=none; b=r2OIXeshVvM4fh7saDBHZ4ZzFU1JWzy6MrA0AJp4FQqVqoCi8P0D4/e9xxi7zaKClWFhtd0nsGje4fh9EIU3+b/trKcQ30Sp4fK6KQYik2t1/K3U6Z3qV6vZAuW3DcvXGDzAgPSpWsO1TFOWwEr3tqGYhA8RN/ol7201u/K6l94= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538541; c=relaxed/simple; bh=7RDgs8VMI/6+eF2Z36b9xbjlJF/pcQYAzN7TZVLEquw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=nZdXtSNdrj6JVkE1jDfb0x83WVliM7pS+7i/yUsy188f/YwLyIY32IahVtn+IoCfLyyhJoFLMUxogpBR1xDuFQeBOtJ7y2w5o45eC+CHbsIWMaZ3HnqzH2xo9Jbwp4LRpzCFPagi85Ko97Vp2HEnN1ANTyt8BKYsq6U6JWC/W3c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=e6kjY4QB; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="e6kjY4QB" Received: by smtp.kernel.org (Postfix) with ESMTPSA id D4173C4AF0B; Tue, 13 Aug 2024 08:42:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538541; bh=7RDgs8VMI/6+eF2Z36b9xbjlJF/pcQYAzN7TZVLEquw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=e6kjY4QBWwfbNWiHe8VElP9KCnrcfr8ExLDYLTSGXAyZUKoHjuxSHsysbcFQW3sNz uJOJTUslbpGJu3jmOoWVmgIAI7VIY/0WRpaO/7Imdj3ZxKk0jNedzIdBKEcvY83QS9 YY9frlOlRS6WTCJCdPM6OB/PEnbl13D0XWfGM1Y8xZvUWK03GC3pmj/TKRyuK1gziy 8l38bTjL3SNjTgdTJmSzABhQZadMZ5IlqQy5ybd4c4HkOqJ7kNnzf/l71n8EdBaMeO 3HPTf+qZgh21Pf9MA4I3Qsixy8tr89Dq0GHnmIOqIZ049Ah98ulIdocJHoRVakaR8i p9USBCGEmOmCQ== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 19/21] mm/zsmalloc: introduce __zpdesc_clear/set_zsmalloc() Date: Tue, 13 Aug 2024 16:46:05 +0800 Message-ID: <20240813084611.4122571-20-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: 
<20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Alex Shi Add helper __zpdesc_clear_zsmalloc() for __ClearPageZsmalloc(), __zpdesc_set_zsmalloc() for __SetPageZsmalloc(), and use them in callers. Signed-off-by: Alex Shi --- mm/zpdesc.h | 10 ++++++++++ mm/zsmalloc.c | 8 ++++---- 2 files changed, 14 insertions(+), 4 deletions(-) diff --git a/mm/zpdesc.h b/mm/zpdesc.h index 747a2d410a35..33f599081281 100644 --- a/mm/zpdesc.h +++ b/mm/zpdesc.h @@ -120,6 +120,16 @@ static inline void __zpdesc_clear_movable(struct zpdes= c *zpdesc) __ClearPageMovable(zpdesc_page(zpdesc)); } =20 +static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc) +{ + __SetPageZsmalloc(zpdesc_page(zpdesc)); +} + +static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc) +{ + __ClearPageZsmalloc(zpdesc_page(zpdesc)); +} + static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc) { return PageIsolated(zpdesc_page(zpdesc)); diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 64b9ea011111..6e7cd5acf5e5 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -845,7 +845,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc) zpdesc->zspage =3D NULL; zpdesc->next =3D NULL; reset_first_obj_offset(zpdesc); - __ClearPageZsmalloc(page); + __zpdesc_clear_zsmalloc(zpdesc); } =20 static int trylock_zspage(struct zspage *zspage) @@ -1018,13 +1018,13 @@ static struct zspage *alloc_zspage(struct zs_pool *= pool, if (!zpdesc) { while (--i >=3D 0) { zpdesc_dec_zone_page_state(zpdescs[i]); - __ClearPageZsmalloc(zpdesc_page(zpdescs[i])); + __zpdesc_clear_zsmalloc(zpdescs[i]); free_zpdesc(zpdescs[i]); } cache_free_zspage(pool, zspage); return NULL; } - __SetPageZsmalloc(zpdesc_page(zpdesc)); + __zpdesc_set_zsmalloc(zpdesc); =20 zpdesc_inc_zone_page_state(zpdesc); zpdescs[i] =3D zpdesc; @@ -1814,7 +1814,7 @@ static int zs_page_migrate(struct page *newpage, stru= ct page *page, VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc)); =20 /* We're committed, tell the world that this is a Zsmalloc page. 
*/ - __SetPageZsmalloc(zpdesc_page(newzpdesc)); + __zpdesc_set_zsmalloc(newzpdesc); =20 /* The page is locked, so this pointer must remain valid */ zspage =3D get_zspage(zpdesc); --=20 2.43.0 From nobody Sat Feb 7 22:55:14 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6DC1C183CDD for ; Tue, 13 Aug 2024 08:42:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538545; cv=none; b=nRARbZmHf9fKAgI4RfjFKCJ2cSjsBaN7B3v/37x7kpijOS4JvfjiAh+c2Bmh3884/WfPk0AOqT8Lgw1Tqg9tAQyOxMjYLFL3uH6XkJ9jz5paZsu7ZqVJ27Rr9ygc7Jl6tIeu73U5K9qi5e6e0xGpos7emdtd4fMOqPsvgVUeDcU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1723538545; c=relaxed/simple; bh=GGradnuK41+byBnHyefAqZD3LeHddFsDEO4CJxk+f0E=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=haCNPfejcpiKYo+PZVJuS/d80WlRoWb5meO684XTVfCa0+iOM8OnUr2EiABDbmesvmmsGWm4M70TsNC2CblvdIUP5kpKt95dd3HK1OgglsUjHeRelMNTYR8404ZTkzC4QuZfFZkyL2qYK+m79LCM1kyHvRRP7ITpC7Out+rZ4mg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=mejI+7CX; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="mejI+7CX" Received: by smtp.kernel.org (Postfix) with ESMTPSA id D7BD2C4AF11; Tue, 13 Aug 2024 08:42:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1723538545; bh=GGradnuK41+byBnHyefAqZD3LeHddFsDEO4CJxk+f0E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mejI+7CXbaNN6QrWv+gyGHDhIeialjnXVUEkoTuuLiEMeYDjaTjw2VDn0SOFfEsh+ vkOy9tq9upX8uw+PGZwrRA7TL02hSe/SSLQu9ho+WBZ/gGhSAi0deOsMzgTWJj0qFK bAPIKaZJNX23wmCL/wjJgJyg+taUjPfBiJicIxiIJnRybb3u38WhJyPrZjzcKFdfnY dJhLNNOuTQDiHAN8zzMxGCNabpnbvFHoiTF8PrCj8dp/ZbafQ/mgFOcm1LRBCL0w4N TAkESIteUK13GxF+JSJWpscWFt5ewHxF8BP4wjSjo24OLkNaugLTprwReAlL22HdVS 6lhrf5QpTGRjQ== From: alexs@kernel.org To: Vitaly Wool , Miaohe Lin , Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org, senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com Cc: Alex Shi Subject: [PATCH v6 20/21] mm/zsmalloc: introduce zpdesc_clear_first() helper Date: Tue, 13 Aug 2024 16:46:06 +0800 Message-ID: <20240813084611.4122571-21-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org> References: <20240813084611.4122571-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Alex Shi Like the zpdesc_set_first(), introduce zpdesc_clear_first() helper for ClearPagePrivate(), then clean up a 'struct page' usage in reset_zpdesc(). 
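For reference, the new helper and the cumulative reset_zpdesc() after patches 11 and 17-20 of this series (decoded from the diffs; illustrative only):

  static inline void zpdesc_clear_first(struct zpdesc *zpdesc)
  {
          ClearPagePrivate(zpdesc_page(zpdesc));
  }

  static void reset_zpdesc(struct zpdesc *zpdesc)
  {
          __zpdesc_clear_movable(zpdesc);
          zpdesc_clear_first(zpdesc);
          zpdesc->zspage = NULL;
          zpdesc->next = NULL;
          reset_first_obj_offset(zpdesc);
          __zpdesc_clear_zsmalloc(zpdesc);
  }

With this change no 'struct page' local is left in reset_zpdesc().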
Signed-off-by: Alex Shi
---
 mm/zsmalloc.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 6e7cd5acf5e5..3dbe6bfa656b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -253,6 +253,11 @@ static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 	SetPagePrivate(zpdesc_page(zpdesc));
 }
 
+static inline void zpdesc_clear_first(struct zpdesc *zpdesc)
+{
+	ClearPagePrivate(zpdesc_page(zpdesc));
+}
+
 static inline void zpdesc_inc_zone_page_state(struct zpdesc *zpdesc)
 {
 	inc_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
@@ -838,10 +843,8 @@ static inline bool obj_allocated(struct zpdesc *zpdesc, void *obj,
 
 static void reset_zpdesc(struct zpdesc *zpdesc)
 {
-	struct page *page = zpdesc_page(zpdesc);
-
 	__zpdesc_clear_movable(zpdesc);
-	ClearPagePrivate(page);
+	zpdesc_clear_first(zpdesc);
 	zpdesc->zspage = NULL;
 	zpdesc->next = NULL;
 	reset_first_obj_offset(zpdesc);
-- 
2.43.0

From nobody Sat Feb 7 22:55:14 2026
From: alexs@kernel.org
To: Vitaly Wool , Miaohe Lin , Andrew Morton ,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org,
	willy@infradead.org, senozhatsky@chromium.org, david@redhat.com,
	42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v6 21/21] mm/zsmalloc: update comments for page->zpdesc changes
Date: Tue, 13 Aug 2024 16:46:07 +0800
Message-ID: <20240813084611.4122571-22-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240813084611.4122571-1-alexs@kernel.org>
References: <20240813084611.4122571-1-alexs@kernel.org>

From: Alex Shi

After the page to zpdesc conversion, a few comments and one function are
still named after page rather than zpdesc. Update those comments and
rename create_page_chain() to create_zpdesc_chain().

Signed-off-by: Alex Shi
---
 mm/zsmalloc.c | 47 ++++++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 21 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3dbe6bfa656b..37619f4b074b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -17,14 +17,16 @@
  *
  * Usage of struct zpdesc fields:
  *	zpdesc->zspage: points to zspage
- *	zpdesc->next: links together all component pages of a zspage
+ *	zpdesc->next: links together all component zpdescs of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
  *	zpdesc->first_obj_offset: PG_zsmalloc, lower 16 bit locate the first
  *		object offset in a subpage of a zspage
  *
  * Usage of struct zpdesc(page) flags:
- *	PG_private: identifies the first component page
+ *	PG_private: identifies the first component zpdesc
+ *	PG_lock: lock all component zpdescs for a zspage free, serialize with
+ *		migration
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -191,7 +193,10 @@ struct size_class {
 	 */
 	int size;
 	int objs_per_zspage;
-	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
+	/*
+	 * Number of PAGE_SIZE sized zpdescs/pages to combine to
+	 * form a 'zspage'
+	 */
 	int pages_per_zspage;
 
 	unsigned int index;
@@ -907,7 +912,7 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	/*
 	 * Since zs_free couldn't be sleepable, this function cannot call
-	 * lock_page. The page locks trylock_zspage got will be released
+	 * lock_page. The zpdesc locks trylock_zspage got will be released
 	 * by __free_zspage.
 	 */
 	if (!trylock_zspage(zspage)) {
@@ -964,7 +969,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 	set_freeobj(zspage, 0);
 }
 
-static void create_page_chain(struct size_class *class, struct zspage *zspage,
+static void create_zpdesc_chain(struct size_class *class, struct zspage *zspage,
 				struct zpdesc *zpdescs[])
 {
 	int i;
@@ -973,9 +978,9 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 	int nr_zpdescs = class->pages_per_zspage;
 
 	/*
-	 * Allocate individual pages and link them together as:
-	 * 1. all pages are linked together using zpdesc->next
-	 * 2. each sub-page point to zspage using zpdesc->zspage
+	 * Allocate individual zpdescs and link them together as:
+	 * 1. all zpdescs are linked together using zpdesc->next
+	 * 2. each sub-zpdesc points to zspage using zpdesc->zspage
 	 *
 	 * we set PG_private to identify the first zpdesc (i.e. no other zpdesc
 	 * has this flag set).
@@ -1033,7 +1038,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		zpdescs[i] = zpdesc;
 	}
 
-	create_page_chain(class, zspage, zpdescs);
+	create_zpdesc_chain(class, zspage, zpdescs);
 	init_zspage(class, zspage);
 	zspage->pool = pool;
 	zspage->class = class->index;
@@ -1360,7 +1365,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 		/* record handle in the header of allocated chunk */
 		link->handle = handle | OBJ_ALLOCATED_TAG;
 	else
-		/* record handle to page->index */
+		/* record handle to zpdesc->handle */
 		zspage->first_zpdesc->handle = handle | OBJ_ALLOCATED_TAG;
 
 	kunmap_atomic(vaddr);
@@ -1693,19 +1698,19 @@ static int putback_zspage(struct size_class *class, struct zspage *zspage)
 #ifdef CONFIG_COMPACTION
 /*
  * To prevent zspage destroy during migration, zspage freeing should
- * hold locks of all pages in the zspage.
+ * hold locks of all component zpdescs in the zspage.
  */
 static void lock_zspage(struct zspage *zspage)
 {
 	struct zpdesc *curr_zpdesc, *zpdesc;
 
 	/*
-	 * Pages we haven't locked yet can be migrated off the list while we're
+	 * Zpdescs we haven't locked yet can be migrated off the list while we're
 	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
-	 * may no longer belong to the zspage. This means that we may wait for
-	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 * lock each zpdesc under migrate_read_lock(). Otherwise, the zpdesc we
+	 * lock may no longer belong to the zspage. This means that we may wait
+	 * for the wrong zpdesc to unlock, so we must take a reference to the
+	 * zpdesc prior to waiting for it to unlock outside migrate_read_lock().
 	 */
 	while (1) {
 		migrate_read_lock(zspage);
@@ -1780,7 +1785,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 		idx++;
 	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 
-	create_page_chain(class, zspage, zpdescs);
+	create_zpdesc_chain(class, zspage, zpdescs);
 	first_obj_offset = get_first_obj_offset(oldzpdesc);
 	set_first_obj_offset(newzpdesc, first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
@@ -1791,8 +1796,8 @@
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
 	/*
-	 * Page is locked so zspage couldn't be destroyed. For detail, look at
-	 * lock_zspage in free_zspage.
+	 * Page/zpdesc is locked so zspage couldn't be destroyed. For detail,
+	 * look at lock_zspage in free_zspage.
 	 */
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 
@@ -1819,7 +1824,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/* We're committed, tell the world that this is a Zsmalloc page. */
 	__zpdesc_set_zsmalloc(newzpdesc);
 
-	/* The page is locked, so this pointer must remain valid */
+	/* The zpdesc/page is locked, so this pointer must remain valid */
 	zspage = get_zspage(zpdesc);
 	pool = zspage->pool;
 
@@ -1892,7 +1897,7 @@ static const struct movable_operations zsmalloc_mops = {
 };
 
 /*
- * Caller should hold page_lock of all pages in the zspage
+ * Caller should hold locks of all zpdescs in the zspage
  * In here, we cannot use zspage meta data.
  */
 static void async_free_zspage(struct work_struct *work)
-- 
2.43.0
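
A minimal sketch of the linking scheme the renamed create_zpdesc_chain()
implements, per the comment it carries (illustrative only, not part of the
series; the function name is hypothetical, and allocation, error handling
and the huge-zspage case are omitted):

	static void link_zpdescs_sketch(struct zspage *zspage,
					struct zpdesc *zpdescs[], int nr_zpdescs)
	{
		struct zpdesc *prev = NULL;
		int i;

		for (i = 0; i < nr_zpdescs; i++) {
			struct zpdesc *zpdesc = zpdescs[i];

			/* each component zpdesc points back at its zspage */
			zpdesc->zspage = zspage;
			zpdesc->next = NULL;
			if (i == 0) {
				/* PG_private marks the first component zpdesc */
				zspage->first_zpdesc = zpdesc;
				zpdesc_set_first(zpdesc);
			} else {
				/* the rest are chained via zpdesc->next */
				prev->next = zpdesc;
			}
			prev = zpdesc;
		}
	}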