From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 01/16] mm/huge_memory: use flush_pmd_tlb_range in move_huge_pmd
Date: Mon, 4 Jul 2022 21:21:46 +0800
Message-ID: <20220704132201.14611-2-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>

Architectures with special requirements for evicting THP-backing TLB
entries can implement flush_pmd_tlb_range; even where they do not, it can
help optimize TLB flushing in the THP regime. Use flush_pmd_tlb_range in
move_huge_pmd to take advantage of this.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
Reviewed-by: Zach O'Keefe
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0243105d0cc6..f4e581eefb67 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1850,7 +1850,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	pmd = move_soft_dirty_pmd(pmd);
 	set_pmd_at(mm, new_addr, new_pmd, pmd);
 	if (force_flush)
-		flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+		flush_pmd_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
 	spin_unlock(old_ptl);
-- 
2.23.0
From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 02/16] mm/huge_memory: access vm_page_prot with READ_ONCE in remove_migration_pmd
Date: Mon, 4 Jul 2022 21:21:47 +0800
Message-ID: <20220704132201.14611-3-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>

vma->vm_page_prot is read locklessly from the rmap walk and may be updated
concurrently. Use READ_ONCE to avoid the risk of reading intermediate
values.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f4e581eefb67..a010f9ba15ce 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3309,7 +3309,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	get_page(new);
-	pmde = pmd_mkold(mk_huge_pmd(new, vma->vm_page_prot));
+	pmde = pmd_mkold(mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot)));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 03/16] mm/huge_memory: fix comment of __pud_trans_huge_lock
Date: Mon, 4 Jul 2022 21:21:48 +0800
Message-ID: <20220704132201.14611-4-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>

__pud_trans_huge_lock has returned a page table lock pointer, not 'true',
when a given pud maps a thp ever since it was introduced. Fix the
corresponding comments.

Signed-off-by: Miaohe Lin
Acked-by: Muchun Song
---
 mm/huge_memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a010f9ba15ce..212e092d8ad0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2007,10 +2007,10 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
 }
 
 /*
- * Returns true if a given pud maps a thp, false otherwise.
+ * Returns page table lock pointer if a given pud maps a thp, NULL otherwise.
  *
- * Note that if it returns true, this routine returns without unlocking page
- * table lock. So callers must unlock it.
+ * Note that if it returns page table lock pointer, this routine returns without
+ * unlocking page table lock. So callers must unlock it.
  */
 spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 {
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 04/16] mm/huge_memory: use helper touch_pud in huge_pud_set_accessed
Date: Mon, 4 Jul 2022 21:21:49 +0800
Message-ID: <20220704132201.14611-5-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>
Use the helper touch_pud to mark the pud accessed, simplifying the code and
improving readability. No functional change intended.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 212e092d8ad0..30acb3b994cf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1285,15 +1285,15 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags)
+		pud_t *pud, bool write)
 {
 	pud_t _pud;
 
 	_pud = pud_mkyoung(*pud);
-	if (flags & FOLL_WRITE)
+	if (write)
 		_pud = pud_mkdirty(_pud);
 	if (pudp_set_access_flags(vma, addr & HPAGE_PUD_MASK,
-				pud, _pud, flags & FOLL_WRITE))
+				pud, _pud, write))
 		update_mmu_cache_pud(vma, addr, pud);
 }
 
@@ -1320,7 +1320,7 @@ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
 		return NULL;
 
 	if (flags & FOLL_TOUCH)
-		touch_pud(vma, addr, pud, flags);
+		touch_pud(vma, addr, pud, flags & FOLL_WRITE);
 
 	/*
 	 * device mapped pages can only be returned if the
@@ -1385,21 +1385,13 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 {
-	pud_t entry;
-	unsigned long haddr;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 
 	vmf->ptl = pud_lock(vmf->vma->vm_mm, vmf->pud);
 	if (unlikely(!pud_same(*vmf->pud, orig_pud)))
 		goto unlock;
 
-	entry = pud_mkyoung(orig_pud);
-	if (write)
-		entry = pud_mkdirty(entry);
-	haddr = vmf->address & HPAGE_PUD_MASK;
-	if (pudp_set_access_flags(vmf->vma, haddr, vmf->pud, entry, write))
-		update_mmu_cache_pud(vmf->vma, vmf->address, vmf->pud);
-
+	touch_pud(vmf->vma, vmf->address, vmf->pud, write);
 unlock:
 	spin_unlock(vmf->ptl);
 }
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 05/16] mm/huge_memory: use helper touch_pmd in huge_pmd_set_accessed
Date: Mon, 4 Jul 2022 21:21:50 +0800
Message-ID: <20220704132201.14611-6-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>
Use the helper touch_pmd to mark the pmd accessed, simplifying the code and
improving readability. No functional change intended.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 30acb3b994cf..f9b6eb3f2215 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1121,15 +1121,15 @@ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud_prot);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
 static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-		pmd_t *pmd, int flags)
+		pmd_t *pmd, bool write)
 {
 	pmd_t _pmd;
 
 	_pmd = pmd_mkyoung(*pmd);
-	if (flags & FOLL_WRITE)
+	if (write)
 		_pmd = pmd_mkdirty(_pmd);
 	if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
-				pmd, _pmd, flags & FOLL_WRITE))
+				pmd, _pmd, write))
 		update_mmu_cache_pmd(vma, addr, pmd);
 }
 
@@ -1162,7 +1162,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		return NULL;
 
 	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags);
+		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
 
 	/*
 	 * device mapped pages can only be returned if the
@@ -1399,21 +1399,13 @@ void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 
 void huge_pmd_set_accessed(struct vm_fault *vmf)
 {
-	pmd_t entry;
-	unsigned long haddr;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
-	pmd_t orig_pmd = vmf->orig_pmd;
 
 	vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
-	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
+	if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd)))
 		goto unlock;
 
-	entry = pmd_mkyoung(orig_pmd);
-	if (write)
-		entry = pmd_mkdirty(entry);
-	haddr = vmf->address & HPAGE_PMD_MASK;
-	if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry, write))
-		update_mmu_cache_pmd(vmf->vma, vmf->address, vmf->pmd);
+	touch_pmd(vmf->vma, vmf->address, vmf->pmd, write);
 
 unlock:
 	spin_unlock(vmf->ptl);
@@ -1549,7 +1541,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 		return ERR_PTR(-ENOMEM);
 
 	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags);
+		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
 
 	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
 	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 06/16] mm/huge_memory: rename mmun_start to haddr in remove_migration_pmd
Date: Mon, 4 Jul 2022 21:21:51 +0800
Message-ID: <20220704132201.14611-7-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>
mmun_start indicates an mmu_notifier start address, but there is no
mmu_notifier involvement in remove_migration_pmd, which makes the name
hard to understand. Rename it to haddr to avoid confusing readers and to
improve readability.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f9b6eb3f2215..f2856cfac900 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3284,7 +3284,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	struct vm_area_struct *vma = pvmw->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address = pvmw->address;
-	unsigned long mmun_start = address & HPAGE_PMD_MASK;
+	unsigned long haddr = address & HPAGE_PMD_MASK;
 	pmd_t pmde;
 	swp_entry_t entry;
 
@@ -3307,12 +3307,12 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 		if (!is_readable_migration_entry(entry))
 			rmap_flags |= RMAP_EXCLUSIVE;
 
-		page_add_anon_rmap(new, vma, mmun_start, rmap_flags);
+		page_add_anon_rmap(new, vma, haddr, rmap_flags);
 	} else {
 		page_add_file_rmap(new, vma, true);
 	}
 	VM_BUG_ON(pmd_write(pmde) && PageAnon(new) && !PageAnonExclusive(new));
-	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
+	set_pmd_at(mm, haddr, pvmw->pmd, pmde);
 
 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
-- 
2.23.0
From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 07/16] mm/huge_memory: use helper function vma_lookup in split_huge_pages_pid
Date: Mon, 4 Jul 2022 21:21:52 +0800
Message-ID: <20220704132201.14611-8-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>

Use the helper function vma_lookup to look up the needed vma and simplify
the code. Minor readability improvement.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f2856cfac900..5f5123130b28 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3045,10 +3045,10 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 	 * table filled with PTE-mapped THPs, each of which is distinct.
 	 */
 	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
-		struct vm_area_struct *vma = find_vma(mm, addr);
+		struct vm_area_struct *vma = vma_lookup(mm, addr);
 		struct page *page;
 
-		if (!vma || addr < vma->vm_start)
+		if (!vma)
 			break;
 
 		/* skip special VMA and hugetlb VMA */
-- 
2.23.0
From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 08/16] mm/huge_memory: use helper macro __ATTR_RW
Date: Mon, 4 Jul 2022 21:21:53 +0800
Message-ID: <20220704132201.14611-9-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>

Use the helper macro __ATTR_RW to define use_zero_page_attr, defrag_attr
and enabled_attr to make the code clearer. Minor readability improvement.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5f5123130b28..32a45a1e98b7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -277,8 +277,8 @@ static ssize_t enabled_store(struct kobject *kobj,
 	}
 	return ret;
 }
-static struct kobj_attribute enabled_attr =
-	__ATTR(enabled, 0644, enabled_show, enabled_store);
+
+static struct kobj_attribute enabled_attr = __ATTR_RW(enabled);
 
 ssize_t single_hugepage_flag_show(struct kobject *kobj,
 				  struct kobj_attribute *attr, char *buf,
@@ -367,8 +367,7 @@ static ssize_t defrag_store(struct kobject *kobj,
 
 	return count;
 }
-static struct kobj_attribute defrag_attr =
-	__ATTR(defrag, 0644, defrag_show, defrag_store);
+static struct kobj_attribute defrag_attr = __ATTR_RW(defrag);
 
 static ssize_t use_zero_page_show(struct kobject *kobj,
 				  struct kobj_attribute *attr, char *buf)
@@ -382,8 +381,7 @@ static ssize_t use_zero_page_store(struct kobject *kobj,
 	return single_hugepage_flag_store(kobj, attr, buf, count,
 				 TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
 }
-static struct kobj_attribute use_zero_page_attr =
-	__ATTR(use_zero_page, 0644, use_zero_page_show, use_zero_page_store);
+static struct kobj_attribute use_zero_page_attr = __ATTR_RW(use_zero_page);
 
 static ssize_t hpage_pmd_size_show(struct kobject *kobj,
 				   struct kobj_attribute *attr, char *buf)
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 09/16] mm/huge_memory: fix comment in zap_huge_pud
Date: Mon, 4 Jul 2022 21:21:54 +0800
Message-ID: <20220704132201.14611-10-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>
The comment about the deposited pgtable was borrowed from zap_huge_pmd,
but there is no deposited pgtable for a huge pud in zap_huge_pud. Remove
it to avoid confusion.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 32a45a1e98b7..8a40dc8edb7a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2014,12 +2014,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	ptl = __pud_trans_huge_lock(pud, vma);
 	if (!ptl)
 		return 0;
-	/*
-	 * For architectures like ppc64 we look at deposited pgtable
-	 * when calling pudp_huge_get_and_clear. So do the
-	 * pgtable_trans_huge_withdraw after finishing pudp related
-	 * operations.
-	 */
+
 	pudp_huge_get_and_clear_full(tlb->mm, addr, pud, tlb->fullmm);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
 	if (vma_is_special_huge(vma)) {
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 10/16] mm/huge_memory: check pmd_present first in is_huge_zero_pmd
Date: Mon, 4 Jul 2022 21:21:55 +0800
Message-ID: <20220704132201.14611-11-linmiaohe@huawei.com>
In-Reply-To: <20220704132201.14611-1-linmiaohe@huawei.com>
When the pmd is non-present, pmd_pfn returns a meaningless value. Check
pmd_present first to avoid computing such a value, and to avoid touching
the possibly cold huge_zero_pfn cache line when the pmd isn't present.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 include/linux/huge_mm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index ae3d8e2fd9e2..12b297f9951d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -273,7 +273,7 @@ static inline bool is_huge_zero_page(struct page *page)
 
 static inline bool is_huge_zero_pmd(pmd_t pmd)
 {
-	return READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd) && pmd_present(pmd);
+	return pmd_present(pmd) && READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd);
}
 
 static inline bool is_huge_zero_pud(pud_t pud)
-- 
2.23.0
From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 11/16] mm/huge_memory: try to free subpage in swapcache when possible
Date: Mon, 4 Jul 2022 21:21:56 +0800
Message-ID: <20220704132201.14611-12-linmiaohe@huawei.com>

Subpages in the swap cache won't be freed, even when their last user is
gone, until the next reclaim pass. That shouldn't hurt, but we can try
to free these pages now to save memory for the system.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8a40dc8edb7a..6d95751ebfc9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2643,7 +2643,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		 * requires taking the lru_lock so we do the put_page
 		 * of the tail pages after the split is complete.
 		 */
-		put_page(subpage);
+		free_page_and_swap_cache(subpage);
 	}
 }
 
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 12/16] mm/huge_memory: minor cleanup for split_huge_pages_all
Date: Mon, 4 Jul 2022 21:21:57 +0800
Message-ID: <20220704132201.14611-13-linmiaohe@huawei.com>
There is nothing to do if a zone doesn't have any pages managed by the
buddy allocator, so we should check managed_zone instead. Also, once a
THP is found, there's no need to traverse its subpages again.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6d95751ebfc9..77be7dec1420 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2961,9 +2961,12 @@ static void split_huge_pages_all(void)
 	unsigned long total = 0, split = 0;
 
 	pr_debug("Split all THPs\n");
-	for_each_populated_zone(zone) {
+	for_each_zone(zone) {
+		if (!managed_zone(zone))
+			continue;
 		max_zone_pfn = zone_end_pfn(zone);
 		for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
+			int nr_pages;
 			if (!pfn_valid(pfn))
 				continue;
 
@@ -2979,8 +2982,10 @@ static void split_huge_pages_all(void)
 
 			total++;
 			lock_page(page);
+			nr_pages = thp_nr_pages(page);
 			if (!split_huge_page(page))
 				split++;
+			pfn += nr_pages - 1;
 			unlock_page(page);
 next:
 			put_page(page);
-- 
2.23.0
From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 13/16] mm/huge_memory: fix comment of page_deferred_list
Date: Mon, 4 Jul 2022 21:21:58 +0800
Message-ID: <20220704132201.14611-14-linmiaohe@huawei.com>

The current comment is confusing: if the global or memcg deferred list
in the second tail page were occupied by compound_head, why would we
still use page[2].deferred_list here? What it means to say is that the
deferred list in the first tail page is occupied by compound_mapcount
and compound_pincount, so we use the second tail page's deferred_list
instead.

Signed-off-by: Miaohe Lin
---
 include/linux/huge_mm.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 12b297f9951d..37f2f11a6d7e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -294,8 +294,8 @@ static inline bool thp_migration_supported(void)
 static inline struct list_head *page_deferred_list(struct page *page)
 {
 	/*
-	 * Global or memcg deferred list in the second tail pages is
-	 * occupied by compound_head.
+	 * See organization of tail pages of compound page in
+	 * "struct page" definition.
 	 */
 	return &page[2].deferred_list;
 }
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 14/16] mm/huge_memory: correct comment of prep_transhuge_page
Date: Mon, 4 Jul 2022 21:21:59 +0800
Message-ID: <20220704132201.14611-15-linmiaohe@huawei.com>
We use page->mapping and page->index, not page->indexlru, in the second
tail page as the list_head. Correct the comment.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 77be7dec1420..36f3fc2e7306 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -682,7 +682,7 @@ static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
 void prep_transhuge_page(struct page *page)
 {
 	/*
-	 * we use page->mapping and page->indexlru in second tail page
+	 * we use page->mapping and page->index in second tail page
 	 * as list_head: assuming THP order >= 2
 	 */
 
-- 
2.23.0
From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 15/16] mm/huge_memory: comment the subtle logic in __split_huge_pmd
Date: Mon, 4 Jul 2022 21:22:00 +0800
Message-ID: <20220704132201.14611-16-linmiaohe@huawei.com>

It's dangerous and wrong to call page_folio(pmd_page(*pmd)) when the
pmd isn't present. But the caller guarantees that the pmd is present
whenever folio is set, so we're safe here. Add a comment to make this
clear.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 36f3fc2e7306..8380912b39fd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2336,6 +2336,10 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 
 	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
 	    is_pmd_migration_entry(*pmd)) {
+		/*
+		 * It's safe to call pmd_page when folio is set because it's
+		 * guaranteed that pmd is present.
+		 */
 		if (folio && folio != page_folio(pmd_page(*pmd)))
 			goto out;
 		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
-- 
2.23.0

From nobody Mon Apr 20 02:46:08 2026
From: Miaohe Lin
Subject: [PATCH v3 16/16] mm/huge_memory: use helper macro IS_ERR_OR_NULL in split_huge_pages_pid
Date: Mon, 4 Jul 2022 21:22:01 +0800
Message-ID: <20220704132201.14611-17-linmiaohe@huawei.com>
Use the helper macro IS_ERR_OR_NULL to check the validity of the page,
simplifying the code. Minor readability improvement.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8380912b39fd..fd9d502aadc4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3062,9 +3062,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		/* FOLL_DUMP to ignore special (like zero) pages */
 		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
 
-		if (IS_ERR(page))
-			continue;
-		if (!page || is_zone_device_page(page))
+		if (IS_ERR_OR_NULL(page) || is_zone_device_page(page))
			continue;
 
 		if (!is_transparent_hugepage(page))
-- 
2.23.0