From nobody Wed Apr 29 09:34:20 2026
From: Miaohe Lin
Subject: [PATCH 1/7] mm/khugepaged: remove unneeded shmem_huge_enabled() check
Date: Sat, 11 Jun 2022 16:47:25 +0800
Message-ID: <20220611084731.55155-2-linmiaohe@huawei.com>
In-Reply-To: <20220611084731.55155-1-linmiaohe@huawei.com>
References: <20220611084731.55155-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

If we reach here, hugepage_vma_check() has already made sure that hugepage
is enabled for shmem. Remove this duplicated check.

Signed-off-by: Miaohe Lin
Reviewed-by: Yang Shi
Reviewed-by: Zach O'Keefe
---
 mm/khugepaged.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 476d79360101..73570dfffcec 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2153,8 +2153,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 			if (khugepaged_scan.address < hstart)
 				khugepaged_scan.address = hstart;
 			VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
-			if (shmem_file(vma->vm_file) && !shmem_huge_enabled(vma))
-				goto skip;
 
 			while (khugepaged_scan.address < hend) {
 				int ret;
-- 
2.23.0

From nobody Wed Apr 29 09:34:20 2026
From:
Miaohe Lin
Subject: [PATCH 2/7] mm/khugepaged: stop swapping in page when VM_FAULT_RETRY occurs
Date: Sat, 11 Jun 2022 16:47:26 +0800
Message-ID: <20220611084731.55155-3-linmiaohe@huawei.com>
In-Reply-To: <20220611084731.55155-1-linmiaohe@huawei.com>

When do_swap_page() returns VM_FAULT_RETRY, we do not retry here, so the
swap entry will remain in the page table and cause the collapse to fail
later. Stop swapping in pages in this case to save CPU cycles.

Signed-off-by: Miaohe Lin
---
 mm/khugepaged.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 73570dfffcec..a8adb2d1e9c6 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1003,19 +1003,16 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 		swapped_in++;
 		ret = do_swap_page(&vmf);
 
-		/* do_swap_page returns VM_FAULT_RETRY with released mmap_lock */
+		/*
+		 * do_swap_page returns VM_FAULT_RETRY with released mmap_lock.
+		 * Note we treat VM_FAULT_RETRY as VM_FAULT_ERROR here because
+		 * we do not retry here and swap entry will remain in pagetable
+		 * resulting in later failure.
+		 */
 		if (ret & VM_FAULT_RETRY) {
 			mmap_read_lock(mm);
-			if (hugepage_vma_revalidate(mm, haddr, &vma)) {
-				/* vma is no longer available, don't continue to swapin */
-				trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-				return false;
-			}
-			/* check if the pmd is still valid */
-			if (mm_find_pmd(mm, haddr) != pmd) {
-				trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-				return false;
-			}
+			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
+			return false;
 		}
 		if (ret & VM_FAULT_ERROR) {
 			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-- 
2.23.0

From nobody Wed Apr 29 09:34:20 2026
From: Miaohe Lin
Subject: [PATCH 3/7] mm/khugepaged: trivial typo and codestyle cleanup
Date: Sat, 11 Jun 2022 16:47:27 +0800
Message-ID: <20220611084731.55155-4-linmiaohe@huawei.com>
In-Reply-To: <20220611084731.55155-1-linmiaohe@huawei.com>

Fix some typos and tweak the code to meet the codestyle. No functional
change intended.

Signed-off-by: Miaohe Lin
Reviewed-by: Yang Shi
Reviewed-by: Zach O'Keefe
---
 mm/khugepaged.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a8adb2d1e9c6..1b5dd3820eac 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -260,7 +260,7 @@ static ssize_t khugepaged_max_ptes_none_store(struct kobject *kobj,
 	unsigned long max_ptes_none;
 
 	err = kstrtoul(buf, 10, &max_ptes_none);
-	if (err || max_ptes_none > HPAGE_PMD_NR-1)
+	if (err || max_ptes_none > HPAGE_PMD_NR - 1)
 		return -EINVAL;
 
 	khugepaged_max_ptes_none = max_ptes_none;
@@ -286,7 +286,7 @@ static ssize_t khugepaged_max_ptes_swap_store(struct kobject *kobj,
 	unsigned long max_ptes_swap;
 
 	err = kstrtoul(buf, 10, &max_ptes_swap);
-	if (err || max_ptes_swap > HPAGE_PMD_NR-1)
+	if (err || max_ptes_swap > HPAGE_PMD_NR - 1)
 		return -EINVAL;
 
 	khugepaged_max_ptes_swap = max_ptes_swap;
@@ -313,7 +313,7 @@ static ssize_t khugepaged_max_ptes_shared_store(struct kobject *kobj,
 	unsigned long max_ptes_shared;
 
 	err = kstrtoul(buf, 10, &max_ptes_shared);
-	if (err || max_ptes_shared > HPAGE_PMD_NR-1)
+	if (err || max_ptes_shared > HPAGE_PMD_NR - 1)
 		return -EINVAL;
 
 	khugepaged_max_ptes_shared = max_ptes_shared;
@@ -599,7 +599,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 	int none_or_zero = 0, shared = 0, result = 0, referenced = 0;
 	bool writable = false;
 
-	for (_pte = pte; _pte < pte+HPAGE_PMD_NR;
+	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
 	     _pte++, address += PAGE_SIZE) {
 		pte_t pteval = *_pte;
 		if (pte_none(pteval) || (pte_present(pteval) &&
@@ -1216,7 +1216,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 
 	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
-	for (_address = address, _pte = pte; _pte < pte+HPAGE_PMD_NR;
+	for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
 	     _pte++, _address += PAGE_SIZE) {
 		pte_t pteval = *_pte;
 		if (is_swap_pte(pteval)) {
@@ -1306,7 +1306,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	/*
 	 * Check if the page has any GUP (or other external) pins.
 	 *
-	 * Here the check is racy it may see totmal_mapcount > refcount
+	 * Here the check is racy it may see total_mapcount > refcount
 	 * in some cases.
 	 * For example, one process with one forked child process.
 	 * The parent has the PMD split due to MADV_DONTNEED, then
@@ -1557,7 +1557,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 	 * mmap_write_lock(mm) as PMD-mapping is likely to be split
 	 * later.
 	 *
-	 * Not that vma->anon_vma check is racy: it can be set up after
+	 * Note that vma->anon_vma check is racy: it can be set up after
 	 * the check but before we took mmap_lock by the fault path.
 	 * But page lock would prevent establishing any new ptes of the
 	 * page, so we are safe.
-- 
2.23.0

From nobody Wed Apr 29 09:34:20 2026
From: Miaohe Lin
Subject: [PATCH 4/7] mm/khugepaged: minor cleanup for collapse_file
Date: Sat, 11 Jun 2022 16:47:28 +0800
Message-ID: <20220611084731.55155-5-linmiaohe@huawei.com>
In-Reply-To: <20220611084731.55155-1-linmiaohe@huawei.com>

nr_none is always 0 for the non-shmem case because the pages can be read
from the backing store. So when nr_none != 0, we must be in the is_shmem
case. Also, only adjust nrpages and uncharge shmem when nr_none != 0, to
save CPU cycles.

Signed-off-by: Miaohe Lin
Reviewed-by: Zach O'Keefe
---
 mm/khugepaged.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1b5dd3820eac..8e6fad7c7bd9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1885,8 +1885,7 @@ static void collapse_file(struct mm_struct *mm,
 
 	if (nr_none) {
 		__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
-		if (is_shmem)
-			__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
+		__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
 	}
 
 	/* Join all the small entries into a single multi-index entry */
@@ -1950,10 +1949,10 @@ static void collapse_file(struct mm_struct *mm,
 
 	/* Something went wrong: roll back page cache changes */
 	xas_lock_irq(&xas);
-	mapping->nrpages -= nr_none;
-
-	if (is_shmem)
+	if (nr_none) {
+		mapping->nrpages -= nr_none;
 		shmem_uncharge(mapping->host, nr_none);
+	}
 
 	xas_set(&xas, start);
 	xas_for_each(&xas, page, end - 1) {
-- 
2.23.0

From nobody Wed Apr 29 09:34:20 2026
From: Miaohe Lin
Subject: [PATCH 5/7] mm/khugepaged: use helper macro __ATTR_RW
Date: Sat, 11 Jun 2022 16:47:29 +0800
Message-ID: <20220611084731.55155-6-linmiaohe@huawei.com>
In-Reply-To: <20220611084731.55155-1-linmiaohe@huawei.com>

Use the helper macro __ATTR_RW to define the khugepaged attributes. Minor
readability improvement.
Signed-off-by: Miaohe Lin
Reviewed-by: Yang Shi
---
 mm/khugepaged.c | 37 +++++++++++++++----------------------
 1 file changed, 15 insertions(+), 22 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8e6fad7c7bd9..142e26e4bdbf 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -147,8 +147,7 @@ static ssize_t scan_sleep_millisecs_store(struct kobject *kobj,
 	return count;
 }
 static struct kobj_attribute scan_sleep_millisecs_attr =
-	__ATTR(scan_sleep_millisecs, 0644, scan_sleep_millisecs_show,
-	       scan_sleep_millisecs_store);
+	__ATTR_RW(scan_sleep_millisecs);
 
 static ssize_t alloc_sleep_millisecs_show(struct kobject *kobj,
 					  struct kobj_attribute *attr,
@@ -175,8 +174,7 @@ static ssize_t alloc_sleep_millisecs_store(struct kobject *kobj,
 	return count;
 }
 static struct kobj_attribute alloc_sleep_millisecs_attr =
-	__ATTR(alloc_sleep_millisecs, 0644, alloc_sleep_millisecs_show,
-	       alloc_sleep_millisecs_store);
+	__ATTR_RW(alloc_sleep_millisecs);
 
 static ssize_t pages_to_scan_show(struct kobject *kobj,
 				  struct kobj_attribute *attr,
@@ -200,8 +198,7 @@ static ssize_t pages_to_scan_store(struct kobject *kobj,
 	return count;
 }
 static struct kobj_attribute pages_to_scan_attr =
-	__ATTR(pages_to_scan, 0644, pages_to_scan_show,
-	       pages_to_scan_store);
+	__ATTR_RW(pages_to_scan);
 
 static ssize_t pages_collapsed_show(struct kobject *kobj,
 				    struct kobj_attribute *attr,
@@ -221,13 +218,13 @@ static ssize_t full_scans_show(struct kobject *kobj,
 static struct kobj_attribute full_scans_attr =
 	__ATTR_RO(full_scans);
 
-static ssize_t khugepaged_defrag_show(struct kobject *kobj,
+static ssize_t defrag_show(struct kobject *kobj,
 				      struct kobj_attribute *attr, char *buf)
 {
 	return single_hugepage_flag_show(kobj, attr, buf,
 				TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG);
 }
-static ssize_t khugepaged_defrag_store(struct kobject *kobj,
+static ssize_t defrag_store(struct kobject *kobj,
 				       struct kobj_attribute *attr,
 				       const char *buf, size_t count)
 {
@@ -235,8 +232,7 @@ static ssize_t khugepaged_defrag_store(struct kobject *kobj,
 				 TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG);
 }
 static struct kobj_attribute khugepaged_defrag_attr =
-	__ATTR(defrag, 0644, khugepaged_defrag_show,
-	       khugepaged_defrag_store);
+	__ATTR_RW(defrag);
 
 /*
  * max_ptes_none controls if khugepaged should collapse hugepages over
@@ -246,13 +242,13 @@ static struct kobj_attribute khugepaged_defrag_attr =
  * runs. Increasing max_ptes_none will instead potentially reduce the
  * free memory in the system during the khugepaged scan.
  */
-static ssize_t khugepaged_max_ptes_none_show(struct kobject *kobj,
+static ssize_t max_ptes_none_show(struct kobject *kobj,
 					     struct kobj_attribute *attr,
 					     char *buf)
 {
 	return sysfs_emit(buf, "%u\n", khugepaged_max_ptes_none);
 }
-static ssize_t khugepaged_max_ptes_none_store(struct kobject *kobj,
+static ssize_t max_ptes_none_store(struct kobject *kobj,
 					      struct kobj_attribute *attr,
 					      const char *buf, size_t count)
 {
@@ -268,17 +264,16 @@ static ssize_t khugepaged_max_ptes_none_store(struct kobject *kobj,
 	return count;
 }
 static struct kobj_attribute khugepaged_max_ptes_none_attr =
-	__ATTR(max_ptes_none, 0644, khugepaged_max_ptes_none_show,
-	       khugepaged_max_ptes_none_store);
+	__ATTR_RW(max_ptes_none);
 
-static ssize_t khugepaged_max_ptes_swap_show(struct kobject *kobj,
+static ssize_t max_ptes_swap_show(struct kobject *kobj,
 					     struct kobj_attribute *attr,
 					     char *buf)
 {
 	return sysfs_emit(buf, "%u\n", khugepaged_max_ptes_swap);
 }
 
-static ssize_t khugepaged_max_ptes_swap_store(struct kobject *kobj,
+static ssize_t max_ptes_swap_store(struct kobject *kobj,
 					      struct kobj_attribute *attr,
 					      const char *buf, size_t count)
 {
@@ -295,17 +290,16 @@ static ssize_t khugepaged_max_ptes_swap_store(struct kobject *kobj,
 }
 
 static struct kobj_attribute khugepaged_max_ptes_swap_attr =
-	__ATTR(max_ptes_swap, 0644, khugepaged_max_ptes_swap_show,
-	       khugepaged_max_ptes_swap_store);
+	__ATTR_RW(max_ptes_swap);
 
-static ssize_t khugepaged_max_ptes_shared_show(struct kobject *kobj,
+static ssize_t max_ptes_shared_show(struct kobject *kobj,
 					       struct kobj_attribute *attr,
 					       char *buf)
 {
 	return sysfs_emit(buf, "%u\n", khugepaged_max_ptes_shared);
 }
 
-static ssize_t khugepaged_max_ptes_shared_store(struct kobject *kobj,
+static ssize_t max_ptes_shared_store(struct kobject *kobj,
 						struct kobj_attribute *attr,
 						const char *buf, size_t count)
 {
@@ -322,8 +316,7 @@ static ssize_t khugepaged_max_ptes_shared_store(struct kobject *kobj,
 }
 
 static struct kobj_attribute khugepaged_max_ptes_shared_attr =
-	__ATTR(max_ptes_shared, 0644, khugepaged_max_ptes_shared_show,
-	       khugepaged_max_ptes_shared_store);
+	__ATTR_RW(max_ptes_shared);
 
 static struct attribute *khugepaged_attr[] = {
 	&khugepaged_defrag_attr.attr,
-- 
2.23.0
From nobody Wed Apr 29 09:34:20 2026
From: Miaohe Lin
Subject: [PATCH 6/7] mm/khugepaged: remove unneeded return value of khugepaged_add_pte_mapped_thp()
Date: Sat, 11 Jun 2022 16:47:30 +0800
Message-ID: <20220611084731.55155-7-linmiaohe@huawei.com>
In-Reply-To: <20220611084731.55155-1-linmiaohe@huawei.com>

The return value of khugepaged_add_pte_mapped_thp() is always 0 and also
ignored. Remove it to clean up the code.

Signed-off-by: Miaohe Lin
Reviewed-by: Yang Shi
---
 mm/khugepaged.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 142e26e4bdbf..ee0a719c8be9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1372,7 +1372,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
  * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
  * khugepaged should try to collapse the page table.
  */
-static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
+static void khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
 					  unsigned long addr)
 {
 	struct mm_slot *mm_slot;
@@ -1384,7 +1384,6 @@ static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
 	if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP))
 		mm_slot->pte_mapped_thp[mm_slot->nr_pte_mapped_thp++] = addr;
 	spin_unlock(&khugepaged_mm_lock);
-	return 0;
 }
 
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
-- 
2.23.0

From nobody Wed Apr 29 09:34:20 2026
From: Miaohe Lin
Subject: [PATCH 7/7] mm/khugepaged: try to free transhuge swapcache when possible
Date: Sat, 11 Jun 2022 16:47:31 +0800
Message-ID: <20220611084731.55155-8-linmiaohe@huawei.com>
In-Reply-To: <20220611084731.55155-1-linmiaohe@huawei.com>

Transhuge swapcache pages won't be freed in __collapse_huge_page_copy()
because release_pte_page() is not called for them, so
free_page_and_swap_cache() can't grab the page lock. These pages won't be
freed from the swap cache even if we are the only user, until the next
reclaim pass. This shouldn't hurt much, but we can try to free these pages
to save more memory for the system.

Signed-off-by: Miaohe Lin
---
 include/linux/swap.h | 5 +++++
 mm/khugepaged.c      | 1 +
 mm/swap.h            | 5 -----
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 8672a7123ccd..ccb83b12b724 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -456,6 +456,7 @@ static inline unsigned long total_swapcache_pages(void)
 	return global_node_page_state(NR_SWAPCACHE);
 }
 
+extern void free_swap_cache(struct page *page);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
 /* linux/mm/swapfile.c */
@@ -540,6 +541,10 @@ static inline void put_swap_device(struct swap_info_struct *si)
 /* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
 #define free_swap_and_cache(e) is_pfn_swap_entry(e)
 
+static inline void free_swap_cache(struct page *page)
+{
+}
+
 static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
 {
 	return 0;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ee0a719c8be9..52109ad13f78 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -756,6 +756,7 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
 	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
 		list_del(&src_page->lru);
 		release_pte_page(src_page);
+		free_swap_cache(src_page);
 	}
 }
 
diff --git a/mm/swap.h b/mm/swap.h
index 0193797b0c92..863f6086c916 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -41,7 +41,6 @@ void __delete_from_swap_cache(struct page *page,
 void delete_from_swap_cache(struct page *page);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
-void free_swap_cache(struct page *page);
 struct page *lookup_swap_cache(swp_entry_t entry,
 			       struct vm_area_struct *vma,
 			       unsigned long addr);
@@ -81,10 +80,6 @@ static inline struct address_space *swap_address_space(swp_entry_t entry)
 	return NULL;
 }
 
-static inline void free_swap_cache(struct page *page)
-{
-}
-
 static inline void show_swap_cache_info(void)
 {
 }
-- 
2.23.0