From: SeongJae Park
To: Andrew Morton
Cc: SeongJae Park, damon@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/6] mm/damon/vaddr: rename 'damon_young_walk_private->page_sz' to 'folio_sz'
Date: Mon, 9 Jan 2023 21:33:30 +0000
Message-Id: <20230109213335.62525-2-sj@kernel.org>
In-Reply-To: <20230109213335.62525-1-sj@kernel.org>
References: <20230109213335.62525-1-sj@kernel.org>

DAMON's virtual address spaces monitoring operations set is now using folios.  Rename 'damon_young_walk_private->page_sz' to 'folio_sz' to reflect the fact.
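
For reference, the renamed field is an out-parameter that the page table walk fills with the size of the folio backing the checked address.  A minimal sketch of how a caller consumes it, modeled on __damon_va_check_access() in the diff below ('mm' and 'r' stand for the usual mm_struct and damon_region arguments; the local variable names are illustrative only):

	unsigned long folio_sz = PAGE_SIZE;	/* updated by the walk */
	bool accessed;

	accessed = damon_va_young(mm, r->sampling_addr, &folio_sz);
	/*
	 * 'folio_sz' now holds the size of the folio that backed the check,
	 * so a following check for an address in the same folio can reuse
	 * 'accessed' instead of walking the page table again.
	 */
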
Signed-off-by: SeongJae Park
---
 mm/damon/vaddr.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 9d92c5eb3a1f..d6cb1fca1769 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -422,7 +422,8 @@ static void damon_va_prepare_access_checks(struct damon_ctx *ctx)
 }
 
 struct damon_young_walk_private {
-	unsigned long *page_sz;
+	/* size of the folio for the access checked virtual memory address */
+	unsigned long *folio_sz;
 	bool young;
 };
 
@@ -452,7 +453,7 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
 		if (pmd_young(*pmd) || !folio_test_idle(folio) ||
 		    mmu_notifier_test_young(walk->mm, addr)) {
-			*priv->page_sz = HPAGE_PMD_SIZE;
+			*priv->folio_sz = HPAGE_PMD_SIZE;
 			priv->young = true;
 		}
 		folio_put(folio);
@@ -474,7 +475,7 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
 		goto out;
 	if (pte_young(*pte) || !folio_test_idle(folio) ||
 			mmu_notifier_test_young(walk->mm, addr)) {
-		*priv->page_sz = PAGE_SIZE;
+		*priv->folio_sz = PAGE_SIZE;
 		priv->young = true;
 	}
 	folio_put(folio);
@@ -504,7 +505,7 @@ static int damon_young_hugetlb_entry(pte_t *pte, unsigned long hmask,
 
 	if (pte_young(entry) || !folio_test_idle(folio) ||
 	    mmu_notifier_test_young(walk->mm, addr)) {
-		*priv->page_sz = huge_page_size(h);
+		*priv->folio_sz = huge_page_size(h);
 		priv->young = true;
 	}
 
@@ -524,10 +525,10 @@ static const struct mm_walk_ops damon_young_ops = {
 };
 
 static bool damon_va_young(struct mm_struct *mm, unsigned long addr,
-		unsigned long *page_sz)
+		unsigned long *folio_sz)
 {
 	struct damon_young_walk_private arg = {
-		.page_sz = page_sz,
+		.folio_sz = folio_sz,
 		.young = false,
 	};
 
@@ -547,18 +548,18 @@ static void __damon_va_check_access(struct mm_struct *mm,
 			struct damon_region *r, bool same_target)
 {
 	static unsigned long last_addr;
-	static unsigned long last_page_sz = PAGE_SIZE;
+	static unsigned long last_folio_sz = PAGE_SIZE;
 	static bool last_accessed;
 
 	/* If the region is in the last checked page, reuse the result */
-	if (same_target && (ALIGN_DOWN(last_addr, last_page_sz) ==
-				ALIGN_DOWN(r->sampling_addr, last_page_sz))) {
+	if (same_target && (ALIGN_DOWN(last_addr, last_folio_sz) ==
+				ALIGN_DOWN(r->sampling_addr, last_folio_sz))) {
 		if (last_accessed)
 			r->nr_accesses++;
 		return;
 	}
 
-	last_accessed = damon_va_young(mm, r->sampling_addr, &last_page_sz);
+	last_accessed = damon_va_young(mm, r->sampling_addr, &last_folio_sz);
 	if (last_accessed)
 		r->nr_accesses++;
 
-- 
2.25.1