From nobody Sat Feb  7 12:11:30 2026
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 99D6C13CF99; Thu, 11 Apr 2024 06:14:17 +0000 (UTC)
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
	hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)", Izik Eidus
Subject: [PATCH v5 01/10] mm/ksm: add ksm_get_folio
Date: Thu, 11 Apr 2024 14:17:02 +0800
Message-ID: <20240411061713.1847574-2-alexs@kernel.org>
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

KSM only deals with single pages, so add a new function, ksm_get_folio(),
which does what get_ksm_page() does but uses folios instead of pages,
saving a couple of compound_head() calls. Once all callers are converted,
get_ksm_page() will be removed.

Cc: Izik Eidus
Cc: Matthew Wilcox
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Chris Wright
Signed-off-by: Alex Shi (tencent)
Reviewed-by: David Hildenbrand
---
 mm/ksm.c | 42 +++++++++++++++++++++++++-----------------
 1 file changed, 25 insertions(+), 17 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 8c001819cf10..ac126a4c245c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -897,7 +897,7 @@ enum get_ksm_page_flags {
 };
 
 /*
- * get_ksm_page: checks if the page indicated by the stable node
+ * ksm_get_folio: checks if the page indicated by the stable node
  * is still its ksm page, despite having held no reference to it.
 * In which case we can trust the content of the page, and it
 * returns the gotten page; but if the page has now been zapped,
@@ -915,10 +915,10 @@ enum get_ksm_page_flags {
 * a page to put something that might look like our key in page->mapping.
 * is on its way to being freed; but it is an anomaly to bear in mind.
 */
-static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
+static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
				 enum get_ksm_page_flags flags)
 {
-	struct page *page;
+	struct folio *folio;
 	void *expected_mapping;
 	unsigned long kpfn;
 
@@ -926,8 +926,8 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
 		PAGE_MAPPING_KSM);
 again:
 	kpfn = READ_ONCE(stable_node->kpfn); /* Address dependency. */
-	page = pfn_to_page(kpfn);
-	if (READ_ONCE(page->mapping) != expected_mapping)
+	folio = pfn_folio(kpfn);
+	if (READ_ONCE(folio->mapping) != expected_mapping)
 		goto stale;
 
 	/*
@@ -940,41 +940,41 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
 	 * in folio_migrate_mapping(), it might still be our page,
 	 * in which case it's essential to keep the node.
 	 */
-	while (!get_page_unless_zero(page)) {
+	while (!folio_try_get(folio)) {
 		/*
 		 * Another check for page->mapping != expected_mapping would
 		 * work here too. We have chosen the !PageSwapCache test to
 		 * optimize the common case, when the page is or is about to
 		 * be freed: PageSwapCache is cleared (under spin_lock_irq)
 		 * in the ref_freeze section of __remove_mapping(); but Anon
-		 * page->mapping reset to NULL later, in free_pages_prepare().
+		 * folio->mapping reset to NULL later, in free_pages_prepare().
		 */
-		if (!PageSwapCache(page))
+		if (!folio_test_swapcache(folio))
 			goto stale;
 		cpu_relax();
 	}
 
-	if (READ_ONCE(page->mapping) != expected_mapping) {
-		put_page(page);
+	if (READ_ONCE(folio->mapping) != expected_mapping) {
+		folio_put(folio);
 		goto stale;
 	}
 
 	if (flags == GET_KSM_PAGE_TRYLOCK) {
-		if (!trylock_page(page)) {
-			put_page(page);
+		if (!folio_trylock(folio)) {
+			folio_put(folio);
 			return ERR_PTR(-EBUSY);
 		}
 	} else if (flags == GET_KSM_PAGE_LOCK)
-		lock_page(page);
+		folio_lock(folio);
 
 	if (flags != GET_KSM_PAGE_NOLOCK) {
-		if (READ_ONCE(page->mapping) != expected_mapping) {
-			unlock_page(page);
-			put_page(page);
+		if (READ_ONCE(folio->mapping) != expected_mapping) {
+			folio_unlock(folio);
+			folio_put(folio);
 			goto stale;
 		}
 	}
-	return page;
+	return folio;
 
 stale:
 	/*
@@ -990,6 +990,14 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
 	return NULL;
 }
 
+static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
+				 enum get_ksm_page_flags flags)
+{
+	struct folio *folio = ksm_get_folio(stable_node, flags);
+
+	return &folio->page;
+}
+
 /*
 * Removing rmap_item from stable or unstable tree.
 * This function will clean the information from the stable/unstable tree.
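The shape of this conversion — a folio-based core with a thin, page-returning wrapper so not-yet-converted callers keep working — can be sketched in plain userspace C. Everything below (the `demo_*` names and mock structs) is a hypothetical stand-in for illustration, not the kernel API:

```c
#include <stddef.h>

/* Minimal stand-ins for the kernel types; illustrative only. */
struct page {
	void *mapping;
};

struct folio {
	struct page page;	/* first page embedded, as in the kernel */
};

/* Core lookup works on folios, mirroring the role of ksm_get_folio(). */
static struct folio *demo_get_folio(struct folio *candidate,
				    void *expected_mapping)
{
	/* Stale if the mapping no longer matches what the caller expects. */
	if (!candidate || candidate->page.mapping != expected_mapping)
		return NULL;
	return candidate;
}

/*
 * Legacy wrapper keeps old page-based callers working, mirroring the
 * role of the transitional get_ksm_page() wrapper added above.
 */
static struct page *demo_get_page(struct folio *candidate,
				  void *expected_mapping)
{
	struct folio *folio = demo_get_folio(candidate, expected_mapping);

	return folio ? &folio->page : NULL;
}
```

Once every caller uses the folio variant, the wrapper can be deleted, which is exactly what the cover letter promises for get_ksm_page().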
-- 
2.43.0

From nobody Sat Feb  7 12:11:30 2026
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
	hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)", Izik Eidus
Subject: [PATCH v5 02/10] mm/ksm: use folio in remove_rmap_item_from_tree
Date: Thu, 11 Apr 2024 14:17:03 +0800
Message-ID: <20240411061713.1847574-3-alexs@kernel.org>
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

Use a folio in remove_rmap_item_from_tree() to save two compound_head()
calls.

Signed-off-by: Alex Shi (tencent)
Cc: Izik Eidus
Cc: Matthew Wilcox
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Chris Wright
Reviewed-by: David Hildenbrand
---
 mm/ksm.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index ac126a4c245c..ef5c4b6d377c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1006,16 +1006,16 @@ static void remove_rmap_item_from_tree(struct ksm_rmap_item *rmap_item)
 {
 	if (rmap_item->address & STABLE_FLAG) {
 		struct ksm_stable_node *stable_node;
-		struct page *page;
+		struct folio *folio;
 
 		stable_node = rmap_item->head;
-		page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
-		if (!page)
+		folio = ksm_get_folio(stable_node, GET_KSM_PAGE_LOCK);
+		if (!folio)
 			goto out;
 
 		hlist_del(&rmap_item->hlist);
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 
 		if (!hlist_empty(&stable_node->hlist))
 			ksm_pages_sharing--;
-- 
2.43.0

From nobody Sat Feb  7 12:11:30 2026
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
	hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)", Izik Eidus
Subject: [PATCH v5 03/10] mm/ksm: add folio_set_stable_node
Date: Thu, 11 Apr 2024 14:17:04 +0800
Message-ID: <20240411061713.1847574-4-alexs@kernel.org>
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

Add folio_set_stable_node() as a wrapper around set_page_stable_node(),
and use it to replace the latter in folio_migrate_ksm(). The two will be
merged once every call site has been converted to folios.

Signed-off-by: Alex Shi (tencent)
Cc: Izik Eidus
Cc: Matthew Wilcox
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Chris Wright
Reviewed-by: David Hildenbrand
---
 mm/ksm.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index ef5c4b6d377c..3c52bf9df84c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1109,6 +1109,12 @@ static inline void set_page_stable_node(struct page *page,
 	page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
 }
 
+static inline void folio_set_stable_node(struct folio *folio,
+					 struct ksm_stable_node *stable_node)
+{
+	set_page_stable_node(&folio->page, stable_node);
+}
+
 #ifdef CONFIG_SYSFS
 /*
 * Only called through the sysfs control interface:
@@ -3241,7 +3247,7 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio)
 	 * has gone stale (or that folio_test_swapcache has been cleared).
	 */
 	smp_wmb();
-	set_page_stable_node(&folio->page, NULL);
+	folio_set_stable_node(folio, NULL);
 	}
 }
 #endif /* CONFIG_MIGRATION */
-- 
2.43.0

From nobody Sat Feb  7 12:11:30 2026
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
	hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)", Izik Eidus
Subject: [PATCH v5 04/10] mm/ksm: use folio in remove_stable_node
Date: Thu, 11 Apr 2024 14:17:05 +0800
Message-ID: <20240411061713.1847574-5-alexs@kernel.org>
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

Pages in the stable tree are all single normal pages, so use
ksm_get_folio() and folio_set_stable_node() in remove_stable_node();
this also saves three compound_head() calls.

Cc: Izik Eidus
Cc: Matthew Wilcox
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Chris Wright
Signed-off-by: Alex Shi (tencent)
Reviewed-by: David Hildenbrand
---
 mm/ksm.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 3c52bf9df84c..1a7b13004589 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1121,13 +1121,13 @@ static inline void folio_set_stable_node(struct folio *folio,
 */
 static int remove_stable_node(struct ksm_stable_node *stable_node)
 {
-	struct page *page;
+	struct folio *folio;
 	int err;
 
-	page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
-	if (!page) {
+	folio = ksm_get_folio(stable_node, GET_KSM_PAGE_LOCK);
+	if (!folio) {
 		/*
-		 * get_ksm_page did remove_node_from_stable_tree itself.
+		 * ksm_get_folio did remove_node_from_stable_tree itself.
 		 */
 		return 0;
 	}
@@ -1138,22 +1138,22 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
 	 * merge_across_nodes/max_page_sharing be switched.
 	 */
 	err = -EBUSY;
-	if (!page_mapped(page)) {
+	if (!folio_mapped(folio)) {
 		/*
-		 * The stable node did not yet appear stale to get_ksm_page(),
-		 * since that allows for an unmapped ksm page to be recognized
+		 * The stable node did not yet appear stale to ksm_get_folio(),
+		 * since that allows for an unmapped ksm folio to be recognized
 		 * right up until it is freed; but the node is safe to remove.
-		 * This page might be in an LRU cache waiting to be freed,
-		 * or it might be PageSwapCache (perhaps under writeback),
+		 * This folio might be in an LRU cache waiting to be freed,
+		 * or it might be in the swapcache (perhaps under writeback),
 		 * or it might have been removed from swapcache a moment ago.
 		 */
-		set_page_stable_node(page, NULL);
+		folio_set_stable_node(folio, NULL);
 		remove_node_from_stable_tree(stable_node);
 		err = 0;
 	}
 
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 	return err;
 }
 
-- 
2.43.0

From nobody Sat Feb  7 12:11:30 2026
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
	hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)", Izik Eidus
Subject: [PATCH v5 05/10] mm/ksm: use folio in stable_node_dup
Date: Thu, 11 Apr 2024 14:17:06 +0800
Message-ID: <20240411061713.1847574-6-alexs@kernel.org>
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

Use ksm_get_folio() in stable_node_dup() and save two compound_head()
calls.

Cc: Izik Eidus
Cc: Matthew Wilcox
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Chris Wright
Signed-off-by: Alex Shi (tencent)
Reviewed-by: David Hildenbrand
---
 mm/ksm.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 1a7b13004589..654400f993fc 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1638,7 +1638,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 {
 	struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
 	struct hlist_node *hlist_safe;
-	struct page *_tree_page, *tree_page = NULL;
+	struct folio *folio, *tree_folio = NULL;
 	int nr = 0;
 	int found_rmap_hlist_len;
 
@@ -1657,24 +1657,24 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 		 * We must walk all stable_node_dup to prune the stale
 		 * stable nodes during lookup.
 		 *
-		 * get_ksm_page can drop the nodes from the
+		 * ksm_get_folio can drop the nodes from the
 		 * stable_node->hlist if they point to freed pages
 		 * (that's why we do a _safe walk). The "dup"
 		 * stable_node parameter itself will be freed from
 		 * under us if it returns NULL.
		 */
-		_tree_page = get_ksm_page(dup, GET_KSM_PAGE_NOLOCK);
-		if (!_tree_page)
+		folio = ksm_get_folio(dup, GET_KSM_PAGE_NOLOCK);
+		if (!folio)
 			continue;
 		nr += 1;
 		if (is_page_sharing_candidate(dup)) {
 			if (!found ||
 			    dup->rmap_hlist_len > found_rmap_hlist_len) {
 				if (found)
-					put_page(tree_page);
+					folio_put(tree_folio);
 				found = dup;
 				found_rmap_hlist_len = found->rmap_hlist_len;
-				tree_page = _tree_page;
+				tree_folio = folio;
 
 				/* skip put_page for found dup */
 				if (!prune_stale_stable_nodes)
@@ -1682,7 +1682,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 				continue;
 			}
 		}
-		put_page(_tree_page);
+		folio_put(folio);
 	}
 
 	if (found) {
@@ -1747,7 +1747,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 	}
 
 	*_stable_node_dup = found;
-	return tree_page;
+	return &tree_folio->page;
 }
 
 static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node,
-- 
2.43.0

From nobody Sat Feb  7 12:11:30 2026
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
	hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)", Izik Eidus
Subject: [PATCH v5 06/10] mm/ksm: use ksm_get_folio in scan_get_next_rmap_item
Date: Thu, 11 Apr 2024 14:17:07 +0800
Message-ID: <20240411061713.1847574-7-alexs@kernel.org>
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

Use ksm_get_folio() in scan_get_next_rmap_item() to save a
compound_head() call.
Cc: Izik Eidus
Cc: Matthew Wilcox
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Chris Wright
Signed-off-by: Alex Shi (tencent)
Reviewed-by: David Hildenbrand
---
 mm/ksm.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 654400f993fc..b127d39c9af0 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2611,14 +2611,14 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	 */
 	if (!ksm_merge_across_nodes) {
 		struct ksm_stable_node *stable_node, *next;
-		struct page *page;
+		struct folio *folio;
 
 		list_for_each_entry_safe(stable_node, next, &migrate_nodes, list) {
-			page = get_ksm_page(stable_node,
-					GET_KSM_PAGE_NOLOCK);
-			if (page)
-				put_page(page);
+			folio = ksm_get_folio(stable_node,
+					GET_KSM_PAGE_NOLOCK);
+			if (folio)
+				folio_put(folio);
 			cond_resched();
 		}
 	}
-- 
2.43.0

From nobody Sat Feb  7 12:11:30 2026
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
	hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)", Izik Eidus
Subject: [PATCH v5 07/10] mm/ksm: use folio in write_protect_page
Date: Thu, 11 Apr 2024 14:17:08 +0800
Message-ID: <20240411061713.1847574-8-alexs@kernel.org>
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

Compound pages are checked and skipped before write_protect_page() is
called, so a folio can be used there to save a few compound_head()
checks.
Cc: Izik Eidus
Cc: Matthew Wilcox
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Chris Wright
Signed-off-by: Alex Shi (tencent)
Reviewed-by: David Hildenbrand
---
 mm/ksm.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index b127d39c9af0..2fdd6586a3a7 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1289,23 +1289,24 @@ static u32 calc_checksum(struct page *page)
 	return checksum;
 }
 
-static int write_protect_page(struct vm_area_struct *vma, struct page *page,
+static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
			      pte_t *orig_pte)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_PAGE_VMA_WALK(pvmw, page, vma, 0, 0);
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, 0, 0);
 	int swapped;
 	int err = -EFAULT;
 	struct mmu_notifier_range range;
 	bool anon_exclusive;
 	pte_t entry;
 
-	pvmw.address = page_address_in_vma(page, vma);
+	if (WARN_ON_ONCE(folio_test_large(folio)))
+		return err;
+
+	pvmw.address = page_address_in_vma(&folio->page, vma);
 	if (pvmw.address == -EFAULT)
 		goto out;
 
-	BUG_ON(PageTransCompound(page));
-
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, pvmw.address,
				pvmw.address + PAGE_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
@@ -1315,12 +1316,12 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 	if (WARN_ONCE(!pvmw.pte, "Unexpected PMD mapping?"))
 		goto out_unlock;
 
-	anon_exclusive = PageAnonExclusive(page);
+	anon_exclusive = PageAnonExclusive(&folio->page);
 	entry = ptep_get(pvmw.pte);
 	if (pte_write(entry) || pte_dirty(entry) ||
 	    anon_exclusive || mm_tlb_flush_pending(mm)) {
-		swapped = PageSwapCache(page);
-		flush_cache_page(vma, pvmw.address, page_to_pfn(page));
+		swapped = folio_test_swapcache(folio);
+		flush_cache_page(vma, pvmw.address, folio_pfn(folio));
 		/*
 		 * Ok this is tricky, when get_user_pages_fast() run it doesn't
 		 * take any lock, therefore the check that we are going to make
@@ -1340,20 +1341,20 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 		 * Check that no O_DIRECT or similar I/O is in progress on the
 		 * page
 		 */
-		if (page_mapcount(page) + 1 + swapped != page_count(page)) {
+		if (folio_mapcount(folio) + 1 + swapped != folio_ref_count(folio)) {
 			set_pte_at(mm, pvmw.address, pvmw.pte, entry);
 			goto out_unlock;
 		}
 
 		/* See folio_try_share_anon_rmap_pte(): clear PTE first. */
 		if (anon_exclusive &&
-		    folio_try_share_anon_rmap_pte(page_folio(page), page)) {
+		    folio_try_share_anon_rmap_pte(folio, &folio->page)) {
 			set_pte_at(mm, pvmw.address, pvmw.pte, entry);
 			goto out_unlock;
 		}
 
 		if (pte_dirty(entry))
-			set_page_dirty(page);
+			folio_mark_dirty(folio);
 		entry = pte_mkclean(entry);
 
 		if (pte_write(entry))
@@ -1519,7 +1520,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 	 * ptes are necessarily already write-protected. But in either
 	 * case, we need to lock and check page_count is not raised.
 	 */
-	if (write_protect_page(vma, page, &orig_pte) == 0) {
+	if (write_protect_page(vma, page_folio(page), &orig_pte) == 0) {
 		if (!kpage) {
 			/*
 			 * While we hold page lock, upgrade page from
-- 
2.43.0

From nobody Sat Feb  7 12:11:30 2026
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
	hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)", Izik Eidus
Subject: [PATCH v5 08/10] mm/ksm: Convert chain series funcs and replace get_ksm_page
Date: Thu, 11 Apr 2024 14:17:09 +0800
Message-ID: <20240411061713.1847574-9-alexs@kernel.org>
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

In the ksm stable tree all pages are single (never compound), so convert stable_tree_insert() and stable_tree_search(), along with the stable-node chain helper functions, to use folios, and replace get_ksm_page() with ksm_get_folio() since no callers remain. This saves a few compound_head() calls.

Cc: Izik Eidus
Cc: Matthew Wilcox
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Chris Wright
Signed-off-by: Alex Shi (tencent)
Reviewed-by: David Hildenbrand
---
 mm/ksm.c     | 136 ++++++++++++++++++++++++---------------------------
 mm/migrate.c |   2 +-
 2 files changed, 66 insertions(+), 72 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 2fdd6586a3a7..61a7b5b037a6 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -990,14 +990,6 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
 	return NULL;
 }
 
-static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
-				 enum get_ksm_page_flags flags)
-{
-	struct folio *folio = ksm_get_folio(stable_node, flags);
-
-	return &folio->page;
-}
-
 /*
  * Removing rmap_item from stable or unstable tree.
  * This function will clean the information from the stable/unstable tree.
@@ -1632,10 +1624,10 @@ bool is_page_sharing_candidate(struct ksm_stable_no= de *stable_node) return __is_page_sharing_candidate(stable_node, 0); } =20 -static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_= dup, - struct ksm_stable_node **_stable_node, - struct rb_root *root, - bool prune_stale_stable_nodes) +static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node= _dup, + struct ksm_stable_node **_stable_node, + struct rb_root *root, + bool prune_stale_stable_nodes) { struct ksm_stable_node *dup, *found =3D NULL, *stable_node =3D *_stable_n= ode; struct hlist_node *hlist_safe; @@ -1748,7 +1740,7 @@ static struct page *stable_node_dup(struct ksm_stable= _node **_stable_node_dup, } =20 *_stable_node_dup =3D found; - return &tree_folio->page; + return tree_folio; } =20 static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node = *stable_node, @@ -1765,7 +1757,7 @@ static struct ksm_stable_node *stable_node_dup_any(st= ruct ksm_stable_node *stabl } =20 /* - * Like for get_ksm_page, this function can free the *_stable_node and + * Like for ksm_get_folio, this function can free the *_stable_node and * *_stable_node_dup if the returned tree_page is NULL. * * It can also free and overwrite *_stable_node with the found @@ -1778,16 +1770,16 @@ static struct ksm_stable_node *stable_node_dup_any(= struct ksm_stable_node *stabl * function and will be overwritten in all cases, the caller doesn't * need to initialize it. 
*/ -static struct page *__stable_node_chain(struct ksm_stable_node **_stable_n= ode_dup, - struct ksm_stable_node **_stable_node, - struct rb_root *root, - bool prune_stale_stable_nodes) +static struct folio *__stable_node_chain(struct ksm_stable_node **_stable_= node_dup, + struct ksm_stable_node **_stable_node, + struct rb_root *root, + bool prune_stale_stable_nodes) { struct ksm_stable_node *stable_node =3D *_stable_node; if (!is_stable_node_chain(stable_node)) { if (is_page_sharing_candidate(stable_node)) { *_stable_node_dup =3D stable_node; - return get_ksm_page(stable_node, GET_KSM_PAGE_NOLOCK); + return ksm_get_folio(stable_node, GET_KSM_PAGE_NOLOCK); } /* * _stable_node_dup set to NULL means the stable_node @@ -1800,24 +1792,24 @@ static struct page *__stable_node_chain(struct ksm_= stable_node **_stable_node_du prune_stale_stable_nodes); } =20 -static __always_inline struct page *chain_prune(struct ksm_stable_node **s= _n_d, - struct ksm_stable_node **s_n, - struct rb_root *root) +static __always_inline struct folio *chain_prune(struct ksm_stable_node **= s_n_d, + struct ksm_stable_node **s_n, + struct rb_root *root) { return __stable_node_chain(s_n_d, s_n, root, true); } =20 -static __always_inline struct page *chain(struct ksm_stable_node **s_n_d, - struct ksm_stable_node *s_n, - struct rb_root *root) +static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d, + struct ksm_stable_node *s_n, + struct rb_root *root) { struct ksm_stable_node *old_stable_node =3D s_n; - struct page *tree_page; + struct folio *tree_folio; =20 - tree_page =3D __stable_node_chain(s_n_d, &s_n, root, false); + tree_folio =3D __stable_node_chain(s_n_d, &s_n, root, false); /* not pruning dups so s_n cannot have changed */ VM_BUG_ON(s_n !=3D old_stable_node); - return tree_page; + return tree_folio; } =20 /* @@ -1837,28 +1829,30 @@ static struct page *stable_tree_search(struct page = *page) struct rb_node *parent; struct ksm_stable_node *stable_node, 
*stable_node_dup, *stable_node_any; struct ksm_stable_node *page_node; + struct folio *folio; =20 - page_node =3D page_stable_node(page); + folio =3D page_folio(page); + page_node =3D folio_stable_node(folio); if (page_node && page_node->head !=3D &migrate_nodes) { /* ksm page forked */ - get_page(page); - return page; + folio_get(folio); + return &folio->page; } =20 - nid =3D get_kpfn_nid(page_to_pfn(page)); + nid =3D get_kpfn_nid(folio_pfn(folio)); root =3D root_stable_tree + nid; again: new =3D &root->rb_node; parent =3D NULL; =20 while (*new) { - struct page *tree_page; + struct folio *tree_folio; int ret; =20 cond_resched(); stable_node =3D rb_entry(*new, struct ksm_stable_node, node); stable_node_any =3D NULL; - tree_page =3D chain_prune(&stable_node_dup, &stable_node, root); + tree_folio =3D chain_prune(&stable_node_dup, &stable_node, root); /* * NOTE: stable_node may have been freed by * chain_prune() if the returned stable_node_dup is @@ -1892,14 +1886,14 @@ static struct page *stable_tree_search(struct page = *page) * write protected at all times. Any will work * fine to continue the walk. */ - tree_page =3D get_ksm_page(stable_node_any, - GET_KSM_PAGE_NOLOCK); + tree_folio =3D ksm_get_folio(stable_node_any, + GET_KSM_PAGE_NOLOCK); } VM_BUG_ON(!stable_node_dup ^ !!stable_node_any); - if (!tree_page) { + if (!tree_folio) { /* * If we walked over a stale stable_node, - * get_ksm_page() will call rb_erase() and it + * ksm_get_folio() will call rb_erase() and it * may rebalance the tree from under us. So * restart the search from scratch. 
Returning * NULL would be safe too, but we'd generate @@ -1909,8 +1903,8 @@ static struct page *stable_tree_search(struct page *p= age) goto again; } =20 - ret =3D memcmp_pages(page, tree_page); - put_page(tree_page); + ret =3D memcmp_pages(page, &tree_folio->page); + folio_put(tree_folio); =20 parent =3D *new; if (ret < 0) @@ -1953,26 +1947,26 @@ static struct page *stable_tree_search(struct page = *page) * It would be more elegant to return stable_node * than kpage, but that involves more changes. */ - tree_page =3D get_ksm_page(stable_node_dup, - GET_KSM_PAGE_TRYLOCK); + tree_folio =3D ksm_get_folio(stable_node_dup, + GET_KSM_PAGE_TRYLOCK); =20 - if (PTR_ERR(tree_page) =3D=3D -EBUSY) + if (PTR_ERR(tree_folio) =3D=3D -EBUSY) return ERR_PTR(-EBUSY); =20 - if (unlikely(!tree_page)) + if (unlikely(!tree_folio)) /* * The tree may have been rebalanced, * so re-evaluate parent and new. */ goto again; - unlock_page(tree_page); + folio_unlock(tree_folio); =20 if (get_kpfn_nid(stable_node_dup->kpfn) !=3D NUMA(stable_node_dup->nid)) { - put_page(tree_page); + folio_put(tree_folio); goto replace; } - return tree_page; + return &tree_folio->page; } } =20 @@ -1985,8 +1979,8 @@ static struct page *stable_tree_search(struct page *p= age) rb_insert_color(&page_node->node, root); out: if (is_page_sharing_candidate(page_node)) { - get_page(page); - return page; + folio_get(folio); + return &folio->page; } else return NULL; =20 @@ -2011,12 +2005,12 @@ static struct page *stable_tree_search(struct page = *page) &page_node->node, root); if (is_page_sharing_candidate(page_node)) - get_page(page); + folio_get(folio); else - page =3D NULL; + folio =3D NULL; } else { rb_erase(&stable_node_dup->node, root); - page =3D NULL; + folio =3D NULL; } } else { VM_BUG_ON(!is_stable_node_chain(stable_node)); @@ -2027,16 +2021,16 @@ static struct page *stable_tree_search(struct page = *page) DO_NUMA(page_node->nid =3D nid); stable_node_chain_add_dup(page_node, stable_node); if 
(is_page_sharing_candidate(page_node)) - get_page(page); + folio_get(folio); else - page =3D NULL; + folio =3D NULL; } else { - page =3D NULL; + folio =3D NULL; } } stable_node_dup->head =3D &migrate_nodes; list_add(&stable_node_dup->list, stable_node_dup->head); - return page; + return &folio->page; =20 chain_append: /* stable_node_dup could be null if it reached the limit */ @@ -2079,7 +2073,7 @@ static struct page *stable_tree_search(struct page *p= age) * This function returns the stable tree node just allocated on success, * NULL otherwise. */ -static struct ksm_stable_node *stable_tree_insert(struct page *kpage) +static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio) { int nid; unsigned long kpfn; @@ -2089,7 +2083,7 @@ static struct ksm_stable_node *stable_tree_insert(str= uct page *kpage) struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any; bool need_chain =3D false; =20 - kpfn =3D page_to_pfn(kpage); + kpfn =3D folio_pfn(kfolio); nid =3D get_kpfn_nid(kpfn); root =3D root_stable_tree + nid; again: @@ -2097,13 +2091,13 @@ static struct ksm_stable_node *stable_tree_insert(s= truct page *kpage) new =3D &root->rb_node; =20 while (*new) { - struct page *tree_page; + struct folio *tree_folio; int ret; =20 cond_resched(); stable_node =3D rb_entry(*new, struct ksm_stable_node, node); stable_node_any =3D NULL; - tree_page =3D chain(&stable_node_dup, stable_node, root); + tree_folio =3D chain(&stable_node_dup, stable_node, root); if (!stable_node_dup) { /* * Either all stable_node dups were full in @@ -2125,14 +2119,14 @@ static struct ksm_stable_node *stable_tree_insert(s= truct page *kpage) * write protected at all times. Any will work * fine to continue the walk. 
*/ - tree_page =3D get_ksm_page(stable_node_any, - GET_KSM_PAGE_NOLOCK); + tree_folio =3D ksm_get_folio(stable_node_any, + GET_KSM_PAGE_NOLOCK); } VM_BUG_ON(!stable_node_dup ^ !!stable_node_any); - if (!tree_page) { + if (!tree_folio) { /* * If we walked over a stale stable_node, - * get_ksm_page() will call rb_erase() and it + * ksm_get_folio() will call rb_erase() and it * may rebalance the tree from under us. So * restart the search from scratch. Returning * NULL would be safe too, but we'd generate @@ -2142,8 +2136,8 @@ static struct ksm_stable_node *stable_tree_insert(str= uct page *kpage) goto again; } =20 - ret =3D memcmp_pages(kpage, tree_page); - put_page(tree_page); + ret =3D memcmp_pages(&kfolio->page, &tree_folio->page); + folio_put(tree_folio); =20 parent =3D *new; if (ret < 0) @@ -2162,7 +2156,7 @@ static struct ksm_stable_node *stable_tree_insert(str= uct page *kpage) =20 INIT_HLIST_HEAD(&stable_node_dup->hlist); stable_node_dup->kpfn =3D kpfn; - set_page_stable_node(kpage, stable_node_dup); + folio_set_stable_node(kfolio, stable_node_dup); stable_node_dup->rmap_hlist_len =3D 0; DO_NUMA(stable_node_dup->nid =3D nid); if (!need_chain) { @@ -2440,7 +2434,7 @@ static void cmp_and_merge_page(struct page *page, str= uct ksm_rmap_item *rmap_ite * node in the stable tree and add both rmap_items. */ lock_page(kpage); - stable_node =3D stable_tree_insert(kpage); + stable_node =3D stable_tree_insert(page_folio(kpage)); if (stable_node) { stable_tree_append(tree_rmap_item, stable_node, false); @@ -3244,7 +3238,7 @@ void folio_migrate_ksm(struct folio *newfolio, struct= folio *folio) /* * newfolio->mapping was set in advance; now we need smp_wmb() * to make sure that the new stable_node->kpfn is visible - * to get_ksm_page() before it can see that folio->mapping + * to ksm_get_folio() before it can see that folio->mapping * has gone stale (or that folio_test_swapcache has been cleared). 
*/ smp_wmb(); @@ -3271,7 +3265,7 @@ static bool stable_node_dup_remove_range(struct ksm_s= table_node *stable_node, if (stable_node->kpfn >=3D start_pfn && stable_node->kpfn < end_pfn) { /* - * Don't get_ksm_page, page has already gone: + * Don't ksm_get_folio, page has already gone: * which is why we keep kpfn instead of page* */ remove_node_from_stable_tree(stable_node); @@ -3359,7 +3353,7 @@ static int ksm_memory_callback(struct notifier_block = *self, * Most of the work is done by page migration; but there might * be a few stable_nodes left over, still pointing to struct * pages which have been offlined: prune those from the tree, - * otherwise get_ksm_page() might later try to access a + * otherwise ksm_get_folio() might later try to access a * non-existent struct page. */ ksm_check_stable_tree(mn->start_pfn, diff --git a/mm/migrate.c b/mm/migrate.c index 73a052a382f1..9f0494fd902c 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -616,7 +616,7 @@ void folio_migrate_flags(struct folio *newfolio, struct= folio *folio) folio_migrate_ksm(newfolio, folio); /* * Please do not reorder this without considering how mm/ksm.c's - * get_ksm_page() depends upon ksm_migrate_page() and PageSwapCache(). + * ksm_get_folio() depends upon ksm_migrate_page() and PageSwapCache(). 
*/ if (folio_test_swapcache(folio)) folio_clear_swapcache(folio); --=20 2.43.0 From nobody Sat Feb 7 12:11:30 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 718BBE57F for ; Thu, 11 Apr 2024 06:14:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712816079; cv=none; b=WkxG7wSiXaOaDEaLvYaqm6XpoRt9IPwiaNV1C0ms3bnGyPz/k18Rysq/6UKj1pTXjKP5KNjf+t+BooBpEAqo0qmqZ1KHqwGHwS4PG0NIVJC8tgwkLo1W0QnMFrl5i0Wfr/gvYx0W9cudYWDBpXNcJowL0sBY4LXvTvM/nnKnJuE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712816079; c=relaxed/simple; bh=wwxxrSyMrjaoURhO6OXX+a2E0cDwbJxoGFhUDcJuPic=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=m7Hvl2bmWCc+4CwJHViVCsMShJ2xvWmu6901HUlDhbWC3rTiqkAiIn4DYD4258QqTph1J7P86p2iT26jBRQCtQ1F4Zx7WMJdJsstRhZSIdyJrGjT7j98kIwKP/eav03l4Kj3e2mK7ELDs52FYh8jijOUzc9vfpKylcdTD3+2onc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=dDQK7Bqf; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="dDQK7Bqf" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 59CA9C43142; Thu, 11 Apr 2024 06:14:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1712816079; bh=wwxxrSyMrjaoURhO6OXX+a2E0cDwbJxoGFhUDcJuPic=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dDQK7BqfFTyKR3n/R+YvyO4I55kJp28xKTt8gOMJzVuB5hAcmSDUzIwLWBVsgvyfX Z6RjymuU0IhJe/n62nYuMNFQ5PMVSYhI5XxpLhIThOrYM4D92pBZP3z58FxKyiU3Qq 
1kVWLlrq++6LvXjzYqEIHQEA9vJYF8cfrr5QHj1c1Eiu03aUiYbNOUdA4VGxgbLgXE /TPzMg7BdEdke3czh9Uxlrs/bG+nCAlWaMRO436c7KOe8uKYBOEGgAJECYwXFIdbxH i5gYTYmC76/TLxM5nbCeHVcTdn6lIK53zFtmshPmYl6SjDCvWHl/Ox68b39PndsJyM NAhveKN2mU7yw== From: alexs@kernel.org To: Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com, hughd@google.com, chrisw@sous-sol.org, david@redhat.com Cc: "Alex Shi (tencent)" Subject: [PATCH v5 09/10] mm/ksm: rename get_ksm_page_flags() to ksm_get_folio_flags Date: Thu, 11 Apr 2024 14:17:10 +0800 Message-ID: <20240411061713.1847574-10-alexs@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org> References: <20240411061713.1847574-1-alexs@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: David Hildenbrand As we are removing get_ksm_page_flags(), make the flags match the new function name. Signed-off-by: David Hildenbrand Reviewed-by: Alex Shi --- mm/ksm.c | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/mm/ksm.c b/mm/ksm.c index 61a7b5b037a6..662fdaaf3ea3 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -890,10 +890,10 @@ static void remove_node_from_stable_tree(struct ksm_s= table_node *stable_node) free_stable_node(stable_node); } =20 -enum get_ksm_page_flags { - GET_KSM_PAGE_NOLOCK, - GET_KSM_PAGE_LOCK, - GET_KSM_PAGE_TRYLOCK +enum ksm_get_folio_flags { + KSM_GET_FOLIO_NOLOCK, + KSM_GET_FOLIO_LOCK, + KSM_GET_FOLIO_TRYLOCK }; =20 /* @@ -916,7 +916,7 @@ enum get_ksm_page_flags { * is on its way to being freed; but it is an anomaly to bear in mind. 
*/ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node, - enum get_ksm_page_flags flags) + enum ksm_get_folio_flags flags) { struct folio *folio; void *expected_mapping; @@ -959,15 +959,15 @@ static struct folio *ksm_get_folio(struct ksm_stable_= node *stable_node, goto stale; } =20 - if (flags =3D=3D GET_KSM_PAGE_TRYLOCK) { + if (flags =3D=3D KSM_GET_FOLIO_TRYLOCK) { if (!folio_trylock(folio)) { folio_put(folio); return ERR_PTR(-EBUSY); } - } else if (flags =3D=3D GET_KSM_PAGE_LOCK) + } else if (flags =3D=3D KSM_GET_FOLIO_LOCK) folio_lock(folio); =20 - if (flags !=3D GET_KSM_PAGE_NOLOCK) { + if (flags !=3D KSM_GET_FOLIO_NOLOCK) { if (READ_ONCE(folio->mapping) !=3D expected_mapping) { folio_unlock(folio); folio_put(folio); @@ -1001,7 +1001,7 @@ static void remove_rmap_item_from_tree(struct ksm_rma= p_item *rmap_item) struct folio *folio; =20 stable_node =3D rmap_item->head; - folio =3D ksm_get_folio(stable_node, GET_KSM_PAGE_LOCK); + folio =3D ksm_get_folio(stable_node, KSM_GET_FOLIO_LOCK); if (!folio) goto out; =20 @@ -1116,7 +1116,7 @@ static int remove_stable_node(struct ksm_stable_node = *stable_node) struct folio *folio; int err; =20 - folio =3D ksm_get_folio(stable_node, GET_KSM_PAGE_LOCK); + folio =3D ksm_get_folio(stable_node, KSM_GET_FOLIO_LOCK); if (!folio) { /* * ksm_get_folio did remove_node_from_stable_tree itself. @@ -1656,7 +1656,7 @@ static struct folio *stable_node_dup(struct ksm_stabl= e_node **_stable_node_dup, * stable_node parameter itself will be freed from * under us if it returns NULL. 
*/ - folio =3D ksm_get_folio(dup, GET_KSM_PAGE_NOLOCK); + folio =3D ksm_get_folio(dup, KSM_GET_FOLIO_NOLOCK); if (!folio) continue; nr +=3D 1; @@ -1779,7 +1779,7 @@ static struct folio *__stable_node_chain(struct ksm_s= table_node **_stable_node_d if (!is_stable_node_chain(stable_node)) { if (is_page_sharing_candidate(stable_node)) { *_stable_node_dup =3D stable_node; - return ksm_get_folio(stable_node, GET_KSM_PAGE_NOLOCK); + return ksm_get_folio(stable_node, KSM_GET_FOLIO_NOLOCK); } /* * _stable_node_dup set to NULL means the stable_node @@ -1887,7 +1887,7 @@ static struct page *stable_tree_search(struct page *p= age) * fine to continue the walk. */ tree_folio =3D ksm_get_folio(stable_node_any, - GET_KSM_PAGE_NOLOCK); + KSM_GET_FOLIO_NOLOCK); } VM_BUG_ON(!stable_node_dup ^ !!stable_node_any); if (!tree_folio) { @@ -1948,7 +1948,7 @@ static struct page *stable_tree_search(struct page *p= age) * than kpage, but that involves more changes. */ tree_folio =3D ksm_get_folio(stable_node_dup, - GET_KSM_PAGE_TRYLOCK); + KSM_GET_FOLIO_TRYLOCK); =20 if (PTR_ERR(tree_folio) =3D=3D -EBUSY) return ERR_PTR(-EBUSY); @@ -2120,7 +2120,7 @@ static struct ksm_stable_node *stable_tree_insert(str= uct folio *kfolio) * fine to continue the walk. 
*/ tree_folio =3D ksm_get_folio(stable_node_any, - GET_KSM_PAGE_NOLOCK); + KSM_GET_FOLIO_NOLOCK); } VM_BUG_ON(!stable_node_dup ^ !!stable_node_any); if (!tree_folio) { @@ -2611,7 +2611,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(= struct page **page) list_for_each_entry_safe(stable_node, next, &migrate_nodes, list) { folio =3D ksm_get_folio(stable_node, - GET_KSM_PAGE_NOLOCK); + KSM_GET_FOLIO_NOLOCK); if (folio) folio_put(folio); cond_resched(); --=20 2.43.0 From nobody Sat Feb 7 12:11:30 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 377AD13DDD8 for ; Thu, 11 Apr 2024 06:14:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712816082; cv=none; b=RwnqeM+SmQigE5XiOGMA+j2VEvBIyesMZ0i03QJySllWRKKvF5nUNxQxiDeCyA/tn5InQ6Jxlh9oJVI1NQYUjIuT/GTdq2vuEhl3wmHGoIG/iqh22v4MKkr5/uDZQRjLTedtFB5RNzQiopTADvk+l+zH6atydCHo3/vGP/jHdoo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1712816082; c=relaxed/simple; bh=2BbBT91rO5EKEsTUfQ1HnSSvv2jhr/Cd5b48tiAufmA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=GB1oygyXhkWwvG80KiDCShrOJXAg/P+tcwU8Tg8Npl9hQ5luZTxld97ab35L/CGApvyDfuyNa+Jk7eMkXhuxIW3RNlQj0eNCoEQCAorvLfSuhcTVeeWmPD8rvjrAVL/tp2nKja3I9nwK0dIQKesM7daAM05bCyinuByzKSjyd3k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=p8rhinow; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="p8rhinow" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E3C97C433C7; Thu, 11 Apr 2024 06:14:39 
From: alexs@kernel.org
To: Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, zik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com, hughd@google.com, chrisw@sous-sol.org, david@redhat.com
Cc: "Alex Shi (tencent)" , Izik Eidus
Subject: [PATCH v5 10/10] mm/ksm: replace set_page_stable_node by folio_set_stable_node
Date: Thu, 11 Apr 2024 14:17:11 +0800
Message-ID: <20240411061713.1847574-11-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240411061713.1847574-1-alexs@kernel.org>
References: <20240411061713.1847574-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Alex Shi (tencent)"

Only single pages can reach the point where the stable node is set after write-protection, so use the folio-converted function in place of the page one, and remove the now-unused set_page_stable_node().
Cc: Izik Eidus Cc: Matthew Wilcox Cc: Andrea Arcangeli Cc: Hugh Dickins Cc: Chris Wright Signed-off-by: Alex Shi (tencent) Reviewed-by: David Hildenbrand --- mm/ksm.c | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/mm/ksm.c b/mm/ksm.c index 662fdaaf3ea3..486c9974f8e2 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -1094,17 +1094,11 @@ static inline struct ksm_stable_node *page_stable_n= ode(struct page *page) return folio_stable_node(page_folio(page)); } =20 -static inline void set_page_stable_node(struct page *page, - struct ksm_stable_node *stable_node) -{ - VM_BUG_ON_PAGE(PageAnon(page) && PageAnonExclusive(page), page); - page->mapping =3D (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM); -} - static inline void folio_set_stable_node(struct folio *folio, struct ksm_stable_node *stable_node) { - set_page_stable_node(&folio->page, stable_node); + VM_WARN_ON_FOLIO(folio_test_anon(folio) && PageAnonExclusive(&folio->page= ), folio); + folio->mapping =3D (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM= ); } =20 #ifdef CONFIG_SYSFS @@ -1519,7 +1513,7 @@ static int try_to_merge_one_page(struct vm_area_struc= t *vma, * PageAnon+anon_vma to PageKsm+NULL stable_node: * stable_tree_insert() will update stable_node. */ - set_page_stable_node(page, NULL); + folio_set_stable_node(page_folio(page), NULL); mark_page_accessed(page); /* * Page reclaim just frees a clean page with no dirty --=20 2.43.0