From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 01/16] mm/huge_memory: use flush_pmd_tlb_range in move_huge_pmd
Date: Thu, 23 Jun 2022 01:06:12 +0800
Message-ID: <20220622170627.19786-2-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
References: <20220622170627.19786-1-linmiaohe@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

Arches with special requirements for evicting THP-backing TLB entries can
implement flush_pmd_tlb_range. Even otherwise, it can help optimize TLB
flushing in the THP regime. Use flush_pmd_tlb_range to take advantage of
this in move_huge_pmd.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
Reviewed-by: Zach O'Keefe
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index af0751a79c19..fd6da053a13e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1746,7 +1746,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		pmd = move_soft_dirty_pmd(pmd);
 		set_pmd_at(mm, new_addr, new_pmd, pmd);
 		if (force_flush)
-			flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+			flush_pmd_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
 		if (new_ptl != old_ptl)
 			spin_unlock(new_ptl);
 		spin_unlock(old_ptl);
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 02/16] mm/huge_memory: access vm_page_prot with READ_ONCE in remove_migration_pmd
Date: Thu, 23 Jun 2022 01:06:13 +0800
Message-ID: <20220622170627.19786-3-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

vma->vm_page_prot is read locklessly from the rmap_walk, so it may be
updated concurrently. Use READ_ONCE to avoid the risk of reading
intermediate values.
Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fd6da053a13e..83fb6c3442ff 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3202,7 +3202,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	get_page(new);
-	pmde = pmd_mkold(mk_huge_pmd(new, vma->vm_page_prot));
+	pmde = pmd_mkold(mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot)));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 03/16] mm/huge_memory: fix comment of __pud_trans_huge_lock
Date: Thu, 23 Jun 2022 01:06:14 +0800
Message-ID: <20220622170627.19786-4-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

__pud_trans_huge_lock has returned the page table lock pointer when a
given pud maps a thp, rather than 'true', ever since it was introduced.
Fix the corresponding comments.

Signed-off-by: Miaohe Lin
Acked-by: Muchun Song
---
 mm/huge_memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 83fb6c3442ff..a26580da8011 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1903,10 +1903,10 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
 }
 
 /*
- * Returns true if a given pud maps a thp, false otherwise.
+ * Returns page table lock pointer if a given pud maps a thp, NULL otherwise.
  *
- * Note that if it returns true, this routine returns without unlocking page
- * table lock. So callers must unlock it.
+ * Note that if it returns page table lock pointer, this routine returns without
+ * unlocking page table lock. So callers must unlock it.
  */
 spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 {
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 04/16] mm/huge_memory: use helper touch_pud in huge_pud_set_accessed
Date: Thu, 23 Jun 2022 01:06:15 +0800
Message-ID: <20220622170627.19786-5-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use the helper touch_pud to set the pud accessed, to simplify the code and
improve readability. No functional change intended.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a26580da8011..a0c0e4bf9c1e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1281,21 +1281,15 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 {
-	pud_t entry;
-	unsigned long haddr;
-	bool write = vmf->flags & FAULT_FLAG_WRITE;
+	int flags = 0;
 
 	vmf->ptl = pud_lock(vmf->vma->vm_mm, vmf->pud);
 	if (unlikely(!pud_same(*vmf->pud, orig_pud)))
 		goto unlock;
 
-	entry = pud_mkyoung(orig_pud);
-	if (write)
-		entry = pud_mkdirty(entry);
-	haddr = vmf->address & HPAGE_PUD_MASK;
-	if (pudp_set_access_flags(vmf->vma, haddr, vmf->pud, entry, write))
-		update_mmu_cache_pud(vmf->vma, vmf->address, vmf->pud);
-
+	if (vmf->flags & FAULT_FLAG_WRITE)
+		flags = FOLL_WRITE;
+	touch_pud(vmf->vma, vmf->address, vmf->pud, flags);
 unlock:
 	spin_unlock(vmf->ptl);
 }
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 05/16] mm/huge_memory: use helper touch_pmd in huge_pmd_set_accessed
Date: Thu, 23 Jun 2022 01:06:16 +0800
Message-ID: <20220622170627.19786-6-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use the helper touch_pmd to set the pmd accessed, to simplify the code and
improve readability. No functional change intended.
Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a0c0e4bf9c1e..c6302fe6704b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1297,21 +1297,15 @@ void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 
 void huge_pmd_set_accessed(struct vm_fault *vmf)
 {
-	pmd_t entry;
-	unsigned long haddr;
-	bool write = vmf->flags & FAULT_FLAG_WRITE;
-	pmd_t orig_pmd = vmf->orig_pmd;
+	int flags = 0;
 
 	vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
-	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
+	if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd)))
 		goto unlock;
 
-	entry = pmd_mkyoung(orig_pmd);
-	if (write)
-		entry = pmd_mkdirty(entry);
-	haddr = vmf->address & HPAGE_PMD_MASK;
-	if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry, write))
-		update_mmu_cache_pmd(vmf->vma, vmf->address, vmf->pmd);
+	if (vmf->flags & FAULT_FLAG_WRITE)
+		flags = FOLL_WRITE;
+	touch_pmd(vmf->vma, vmf->address, vmf->pmd, flags);
 
 unlock:
 	spin_unlock(vmf->ptl);
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 06/16] mm/huge_memory: rename mmun_start to haddr in remove_migration_pmd
Date: Thu, 23 Jun 2022 01:06:17 +0800
Message-ID: <20220622170627.19786-7-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

mmun_start indicates an mmu_notifier start address, but there is no
mmu_notifier involvement in remove_migration_pmd, which makes the meaning
of mmun_start hard to grasp. Rename it to haddr to avoid confusing readers
and to improve readability.
Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c6302fe6704b..fb5c484dfa39 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3181,7 +3181,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	struct vm_area_struct *vma = pvmw->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address = pvmw->address;
-	unsigned long mmun_start = address & HPAGE_PMD_MASK;
+	unsigned long haddr = address & HPAGE_PMD_MASK;
 	pmd_t pmde;
 	swp_entry_t entry;
 
@@ -3204,12 +3204,12 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 		if (!is_readable_migration_entry(entry))
 			rmap_flags |= RMAP_EXCLUSIVE;
 
-		page_add_anon_rmap(new, vma, mmun_start, rmap_flags);
+		page_add_anon_rmap(new, vma, haddr, rmap_flags);
 	} else {
 		page_add_file_rmap(new, vma, true);
 	}
 	VM_BUG_ON(pmd_write(pmde) && PageAnon(new) && !PageAnonExclusive(new));
-	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
+	set_pmd_at(mm, haddr, pvmw->pmd, pmde);
 
 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 07/16] mm/huge_memory: minor cleanup for split_huge_pages_pid
Date: Thu, 23 Jun 2022 01:06:18 +0800
Message-ID: <20220622170627.19786-8-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use the helper function vma_lookup to look up the needed vma, and use the
helper macro IS_ERR_OR_NULL to check the validity of the page, to simplify
the code. Minor readability improvement.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fb5c484dfa39..7cfa003b1789 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2942,10 +2942,10 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 	 * table filled with PTE-mapped THPs, each of which is distinct.
	 */
	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
-		struct vm_area_struct *vma = find_vma(mm, addr);
+		struct vm_area_struct *vma = vma_lookup(mm, addr);
 		struct page *page;
 
-		if (!vma || addr < vma->vm_start)
+		if (!vma)
 			break;
 
 		/* skip special VMA and hugetlb VMA */
@@ -2957,9 +2957,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		/* FOLL_DUMP to ignore special (like zero) pages */
 		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP | FOLL_LRU);
 
-		if (IS_ERR(page))
-			continue;
-		if (!page)
+		if (IS_ERR_OR_NULL(page))
 			continue;
 
 		if (!is_transparent_hugepage(page))
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 08/16] mm/huge_memory: use helper macro __ATTR_RW
Date: Thu, 23 Jun 2022 01:06:19 +0800
Message-ID: <20220622170627.19786-9-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use the helper macro __ATTR_RW to define use_zero_page_attr, defrag_attr
and enabled_attr, to make the code clearer. Minor readability improvement.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7cfa003b1789..b42c8fa51e46 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -273,8 +273,8 @@ static ssize_t enabled_store(struct kobject *kobj,
 	}
 	return ret;
 }
-static struct kobj_attribute enabled_attr =
-	__ATTR(enabled, 0644, enabled_show, enabled_store);
+
+static struct kobj_attribute enabled_attr = __ATTR_RW(enabled);
 
 ssize_t single_hugepage_flag_show(struct kobject *kobj,
 				  struct kobj_attribute *attr, char *buf,
@@ -363,8 +363,7 @@ static ssize_t defrag_store(struct kobject *kobj,
 
 	return count;
 }
-static struct kobj_attribute defrag_attr =
-	__ATTR(defrag, 0644, defrag_show, defrag_store);
+static struct kobj_attribute defrag_attr = __ATTR_RW(defrag);
 
 static ssize_t use_zero_page_show(struct kobject *kobj,
 				  struct kobj_attribute *attr, char *buf)
@@ -378,8 +377,7 @@ static ssize_t use_zero_page_store(struct kobject *kobj,
 	return single_hugepage_flag_store(kobj, attr, buf, count,
 					  TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
 }
-static struct kobj_attribute use_zero_page_attr =
-	__ATTR(use_zero_page, 0644, use_zero_page_show, use_zero_page_store);
+static struct kobj_attribute use_zero_page_attr = __ATTR_RW(use_zero_page);
 
 static ssize_t hpage_pmd_size_show(struct kobject *kobj,
 				   struct kobj_attribute *attr, char *buf)
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 09/16] mm/huge_memory: fix comment in zap_huge_pud
Date: Thu, 23 Jun 2022 01:06:20 +0800
Message-ID: <20220622170627.19786-10-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The comment about the deposited pgtable was borrowed from zap_huge_pmd,
but there is no deposited pgtable for a huge pud in zap_huge_pud. Remove
it to avoid confusion.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b42c8fa51e46..fd12fa930937 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1914,12 +1914,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	ptl = __pud_trans_huge_lock(pud, vma);
 	if (!ptl)
 		return 0;
-	/*
-	 * For architectures like ppc64 we look at deposited pgtable
-	 * when calling pudp_huge_get_and_clear. So do the
-	 * pgtable_trans_huge_withdraw after finishing pudp related
-	 * operations.
-	 */
+
 	pudp_huge_get_and_clear_full(tlb->mm, addr, pud, tlb->fullmm);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
 	if (vma_is_special_huge(vma)) {
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 10/16] mm/huge_memory: check pmd_present first in is_huge_zero_pmd
Date: Thu, 23 Jun 2022 01:06:21 +0800
Message-ID: <20220622170627.19786-11-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

When a pmd is non-present, pmd_pfn returns an insane value. Check
pmd_present first to avoid acquiring such an insane value, and also to
avoid touching the possibly cold huge_zero_pfn cache line when the pmd
isn't present.
Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 include/linux/huge_mm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index ae3d8e2fd9e2..12b297f9951d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -273,7 +273,7 @@ static inline bool is_huge_zero_page(struct page *page)
 
 static inline bool is_huge_zero_pmd(pmd_t pmd)
 {
-	return READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd) && pmd_present(pmd);
+	return pmd_present(pmd) && READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd);
 }
 
 static inline bool is_huge_zero_pud(pud_t pud)
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 11/16] mm/huge_memory: try to free subpage in swapcache when possible
swapcache when possible
Date: Thu, 23 Jun 2022 01:06:22 +0800
Message-ID: <20220622170627.19786-12-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
References: <20220622170627.19786-1-linmiaohe@huawei.com>

A subpage left in the swapcache won't be freed until the next reclaim
pass, even when its last user is gone. That shouldn't hurt, but trying
to free such pages here saves more memory for the system.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fd12fa930937..506e7a682780 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2539,7 +2539,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		 * requires taking the lru_lock so we do the put_page
 		 * of the tail pages after the split is complete.
 		 */
-		put_page(subpage);
+		free_page_and_swap_cache(subpage);
 	}
 }
 
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 12/16] mm/huge_memory: minor cleanup for split_huge_pages_all
Date: Thu, 23 Jun 2022 01:06:23 +0800
Message-ID: <20220622170627.19786-13-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
References: <20220622170627.19786-1-linmiaohe@huawei.com>
X-Mailing-List:
linux-kernel@vger.kernel.org

There is nothing to do if a zone doesn't have any pages managed by the
buddy allocator, so check managed_zone instead of populated_zone. Also,
once a THP is found, there is no need to traverse its subpages again.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 506e7a682780..0030b4f67cd9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2858,9 +2858,12 @@ static void split_huge_pages_all(void)
 	unsigned long total = 0, split = 0;
 
 	pr_debug("Split all THPs\n");
-	for_each_populated_zone(zone) {
+	for_each_zone(zone) {
+		if (!managed_zone(zone))
+			continue;
 		max_zone_pfn = zone_end_pfn(zone);
 		for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
+			int nr_pages;
 			if (!pfn_valid(pfn))
 				continue;
 
@@ -2876,8 +2879,10 @@ static void split_huge_pages_all(void)
 
 			total++;
 			lock_page(page);
+			nr_pages = thp_nr_pages(page);
 			if (!split_huge_page(page))
 				split++;
+			pfn += nr_pages - 1;
 			unlock_page(page);
 next:
 			put_page(page);
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 13/16] mm/huge_memory: add helper __get_deferred_split_queue
Date: Thu, 23 Jun 2022 01:06:24 +0800
Message-ID: <20220622170627.19786-14-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
References: <20220622170627.19786-1-linmiaohe@huawei.com>

Add a helper, __get_deferred_split_queue, to remove the duplicated code
for looking up ds_queue. No functional change intended.
Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 35 ++++++++++++-----------------------
 1 file changed, 12 insertions(+), 23 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0030b4f67cd9..de8155ff584c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -555,25 +555,23 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 	return pmd;
 }
 
-#ifdef CONFIG_MEMCG
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct deferred_split *__get_deferred_split_queue(struct pglist_data *pgdat,
+								struct mem_cgroup *memcg)
 {
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
-
+#ifdef CONFIG_MEMCG
 	if (memcg)
 		return &memcg->deferred_split_queue;
-	else
-		return &pgdat->deferred_split_queue;
+#endif
+	return &pgdat->deferred_split_queue;
 }
-#else
+
 static inline struct deferred_split *get_deferred_split_queue(struct page *page)
 {
+	struct mem_cgroup *memcg = page_memcg(compound_head(page));
 	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
 
-	return &pgdat->deferred_split_queue;
+	return __get_deferred_split_queue(pgdat, memcg);
 }
-#endif
 
 void prep_transhuge_page(struct page *page)
 {
@@ -2774,31 +2772,22 @@ void deferred_split_huge_page(struct page *page)
 static unsigned long deferred_split_count(struct shrinker *shrink,
 		struct shrink_control *sc)
 {
-	struct pglist_data *pgdata = NODE_DATA(sc->nid);
-	struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
+	struct deferred_split *ds_queue;
 
-#ifdef CONFIG_MEMCG
-	if (sc->memcg)
-		ds_queue = &sc->memcg->deferred_split_queue;
-#endif
+	ds_queue = __get_deferred_split_queue(NODE_DATA(sc->nid), sc->memcg);
 	return READ_ONCE(ds_queue->split_queue_len);
 }
 
 static unsigned long deferred_split_scan(struct shrinker *shrink,
 		struct shrink_control *sc)
 {
-	struct pglist_data *pgdata = NODE_DATA(sc->nid);
-	struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
+	struct deferred_split *ds_queue;
 	unsigned long flags;
 	LIST_HEAD(list), *pos, *next;
 	struct page *page;
 	int split = 0;
 
-#ifdef CONFIG_MEMCG
-	if (sc->memcg)
-		ds_queue = &sc->memcg->deferred_split_queue;
-#endif
-
+	ds_queue = __get_deferred_split_queue(NODE_DATA(sc->nid), sc->memcg);
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	/* Take pin on all head pages to avoid freeing them under us */
 	list_for_each_safe(pos, next, &ds_queue->split_queue) {
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 14/16] mm/huge_memory: fix comment of page_deferred_list
Date: Thu, 23 Jun 2022 01:06:25 +0800
Message-ID: <20220622170627.19786-15-linmiaohe@huawei.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
References: <20220622170627.19786-1-linmiaohe@huawei.com>

The current comment is confusing: if the global or memcg deferred list
in the second tail page were occupied by compound_head, why would we
still use page[2].deferred_list here? What the comment means to say is
that the first tail page is occupied by compound_mapcount and
compound_pincount, so the deferred_list is placed in the second tail
page instead.

Signed-off-by: Miaohe Lin
---
 include/linux/huge_mm.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 12b297f9951d..2e8062b3417a 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -294,8 +294,8 @@ static inline bool thp_migration_supported(void)
 static inline struct list_head *page_deferred_list(struct page *page)
 {
 	/*
-	 * Global or memcg deferred list in the second tail pages is
-	 * occupied by compound_head.
+	 * Global or memcg deferred list in the first tail page is
+	 * occupied by compound_mapcount and compound_pincount.
 	 */
 	return &page[2].deferred_list;
 }
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 15/16] mm/huge_memory: correct comment of prep_transhuge_page
Date: Thu, 23 Jun 2022 01:06:26 +0800
Message-ID: <20220622170627.19786-16-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
References: <20220622170627.19786-1-linmiaohe@huawei.com>
Content-Type:
text/plain; charset="utf-8"

We use page->mapping and page->index in the second tail page as a
list_head, not page->indexlru. Correct the comment.

Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index de8155ff584c..8bd937cc1f74 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -576,7 +576,7 @@ static inline struct deferred_split *get_deferred_split_queue(struct page *page)
 void prep_transhuge_page(struct page *page)
 {
 	/*
-	 * we use page->mapping and page->indexlru in second tail page
+	 * we use page->mapping and page->index in second tail page
 	 * as list_head: assuming THP order >= 2
 	 */
 
-- 
2.23.0

From nobody Mon Apr 20 02:46:12 2026
From: Miaohe Lin
Subject: [PATCH 16/16] mm/huge_memory: comment the subtle logic in __split_huge_pmd
Date: Thu, 23 Jun 2022 01:06:27 +0800
Message-ID: <20220622170627.19786-17-linmiaohe@huawei.com>
In-Reply-To: <20220622170627.19786-1-linmiaohe@huawei.com>
References: <20220622170627.19786-1-linmiaohe@huawei.com>

Calling page_folio(pmd_page(*pmd)) when the pmd isn't present would be
dangerous and wrong, but the caller guarantees that the pmd is present
whenever folio is set, so we are safe here. Add a comment to make the
subtlety clear.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8bd937cc1f74..b98b97592bd3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2234,6 +2234,10 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 
 	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
 	    is_pmd_migration_entry(*pmd)) {
+		/*
+		 * It's safe to call pmd_page() when folio is set because
+		 * it's guaranteed that pmd is present.
+		 */
 		if (folio && folio != page_folio(pmd_page(*pmd)))
 			goto out;
 		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
-- 
2.23.0