From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/5] mm/hugetlb: fix races when looking up a CONT-PTE size hugetlb page
Date: Tue, 23 Aug 2022 15:50:01 +0800
Message-Id: <0e5d92da043d147a867f634b17acbcc97a7f0e64.1661240170.git.baolin.wang@linux.alibaba.com>

Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb: with a
4K base page size they provide not only PMD/PUD size hugetlb pages (2M and
1G) but also CONT-PTE/PMD sizes (64K and 32M).

When looking up a CONT-PTE size hugetlb page via follow_page(),
follow_page_pte() uses pte_offset_map_lock() to take the pte entry lock.
However, that lock is incorrect for CONT-PTE size hugetlb; the correct
lock, obtained via huge_pte_lock(), is mm->page_table_lock. As a result,
the pte entry of a CONT-PTE size hugetlb is unstable under the lock
follow_page_pte() actually holds: the entry can still be migrated or
poisoned concurrently, which opens race windows, and the subsequent
pte_xxx() checks in follow_page_pte() are likewise unreliable even though
they run under the 'pte lock'.

Moreover, the pte entry value of a CONT-PTE size hugetlb should be read
with huge_ptep_get(), which folds in the dirty and young bits of the
subpages so the dirty or young state of the CONT-PTE size hugetlb is not
missed.

To fix these issues, introduce a new helper, follow_huge_pte(), to look up
a CONT-PTE size hugetlb page. It uses huge_pte_lock() to take the correct
pte entry lock so the entry stays stable, and it also handles non-present
pte entries.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/hugetlb.h |  8 ++++++++
 mm/gup.c                | 11 +++++++++++
 mm/hugetlb.c            | 53 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 72 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3ec981a..d491138 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -207,6 +207,8 @@ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
 struct page *follow_huge_pd(struct vm_area_struct *vma,
 			    unsigned long address, hugepd_t hpd,
 			    int flags, int pdshift);
+struct page *follow_huge_pte(struct vm_area_struct *vma, unsigned long address,
+			     pmd_t *pmd, int flags);
 struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 				pmd_t *pmd, int flags);
 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
@@ -312,6 +314,12 @@ static inline struct page *follow_huge_pd(struct vm_area_struct *vma,
 	return NULL;
 }
 
+static inline struct page *follow_huge_pte(struct vm_area_struct *vma,
+			unsigned long address, pmd_t *pmd, int flags)
+{
+	return NULL;
+}
+
 static inline struct page *follow_huge_pmd(struct mm_struct *mm,
 				unsigned long address, pmd_t *pmd, int flags)
 {
diff --git a/mm/gup.c b/mm/gup.c
index 3b656b7..87a94f5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -534,6 +534,17 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	if (unlikely(pmd_bad(*pmd)))
 		return no_page_table(vma, flags);
 
+	/*
+	 * Considering PTE level hugetlb, like continuous-PTE hugetlb on
+	 * ARM64 architecture.
+	 */
+	if (is_vm_hugetlb_page(vma)) {
+		page = follow_huge_pte(vma, address, pmd, flags);
+		if (page)
+			return page;
+		return no_page_table(vma, flags);
+	}
+
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	pte = *ptep;
 	if (!pte_present(pte)) {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6c00ba1..cf742d1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6981,6 +6981,59 @@ struct page * __weak
 	return NULL;
 }
 
+/* Support looking up a CONT-PTE size hugetlb page. */
+struct page * __weak
+follow_huge_pte(struct vm_area_struct *vma, unsigned long address,
+		pmd_t *pmd, int flags)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct hstate *hstate = hstate_vma(vma);
+	unsigned long size = huge_page_size(hstate);
+	struct page *page = NULL;
+	spinlock_t *ptl;
+	pte_t *ptep, pte;
+
+	/*
+	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
+	 * follow_hugetlb_page().
+	 */
+	if (WARN_ON_ONCE(flags & FOLL_PIN))
+		return NULL;
+
+	ptep = huge_pte_offset(mm, address, size);
+	if (!ptep)
+		return NULL;
+
+retry:
+	ptl = huge_pte_lock(hstate, mm, ptep);
+	pte = huge_ptep_get(ptep);
+	if (pte_present(pte)) {
+		page = pte_page(pte);
+		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
+			page = NULL;
+			goto out;
+		}
+	} else {
+		if (!(flags & FOLL_MIGRATION)) {
+			page = NULL;
+			goto out;
+		}
+
+		if (is_hugetlb_entry_migration(pte)) {
+			spin_unlock(ptl);
+			__migration_entry_wait_huge(ptep, ptl);
+			goto retry;
+		}
+		/*
+		 * hwpoisoned entry is treated as no_page_table in
+		 * follow_page_mask().
+		 */
+	}
+out:
+	spin_unlock(ptl);
+	return page;
+}
+
 struct page * __weak
 follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 		pmd_t *pmd, int flags)
-- 
1.8.3.1

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/5] mm/hugetlb: use PTE page lock to protect CONT-PTE entries
Date: Tue, 23 Aug 2022 15:50:02 +0800
Message-Id: <064489292e6e224ef4406af990c7cdc3c054ca77.1661240170.git.baolin.wang@linux.alibaba.com>

Since the pte entries of a CONT-PTE hugetlb cannot span multiple PTE page
tables, we can switch to the PTE page lock, which is much finer grained
than mm->page_table_lock.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/hugetlb.h | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d491138..4b172a7 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -892,9 +892,23 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
 static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
 					struct mm_struct *mm, pte_t *pte)
 {
-	if (huge_page_size(h) == PMD_SIZE)
-		return pmd_lockptr(mm, (pmd_t *) pte);
 	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
+
+	if (huge_page_size(h) == PMD_SIZE) {
+		return pmd_lockptr(mm, (pmd_t *) pte);
+	} else if (huge_page_size(h) < PMD_SIZE) {
+		unsigned long mask = ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
+		struct page *page =
+			virt_to_page((void *)((unsigned long)pte & mask));
+
+		/*
+		 * Considering CONT-PTE size hugetlb, since the CONT-PTE
+		 * entry can not span multiple PTEs, we can use the PTE
+		 * page lock to get a fine grained lock.
+		 */
+		return ptlock_ptr(page);
+	}
+
 	return &mm->page_table_lock;
 }
 
-- 
1.8.3.1

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/5] mm/hugetlb: fix races when looking up a CONT-PMD size hugetlb page
Date: Tue, 23 Aug 2022 15:50:03 +0800

Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb: with a
4K base page size they provide not only PMD/PUD size hugetlb pages (2M and
1G) but also CONT-PTE/PMD sizes (64K and 32M).

When looking up a CONT-PMD size hugetlb page via follow_page(),
follow_huge_pmd() always uses the PMD page lock to protect the pmd entry.
However, this is not the correct lock for CONT-PMD size hugetlb, so the
pmd entry is unstable under it: the entry can still be migrated or
poisoned concurrently, and the correct CONT-PMD size page cannot be
obtained. Switch to huge_pte_lock() to take the correct pmd entry lock for
CONT-PMD size hugetlb and fix the potential race.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/hugetlb.h | 4 ++--
 mm/gup.c                | 2 +-
 mm/hugetlb.c            | 7 ++++---
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 4b172a7..3a96f67 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -209,7 +209,7 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
 			    int flags, int pdshift);
 struct page *follow_huge_pte(struct vm_area_struct *vma, unsigned long address,
 			     pmd_t *pmd, int flags);
-struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+struct page *follow_huge_pmd(struct vm_area_struct *vma, unsigned long address,
 				pmd_t *pmd, int flags);
 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
 				pud_t *pud, int flags);
@@ -320,7 +320,7 @@ static inline struct page *follow_huge_pte(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *follow_huge_pmd(struct mm_struct *mm,
+static inline struct page *follow_huge_pmd(struct vm_area_struct *vma,
 				unsigned long address, pmd_t *pmd, int flags)
 {
 	return NULL;
diff --git a/mm/gup.c b/mm/gup.c
index 87a94f5..014accd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -673,7 +673,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	if (pmd_none(pmdval))
 		return no_page_table(vma, flags);
 	if (pmd_huge(pmdval) && is_vm_hugetlb_page(vma)) {
-		page = follow_huge_pmd(mm, address, pmd, flags);
+		page = follow_huge_pmd(vma, address, pmd, flags);
 		if (page)
 			return page;
 		return no_page_table(vma, flags);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cf742d1..2c4048a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7035,9 +7035,11 @@ struct page * __weak
 }
 
 struct page * __weak
-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+follow_huge_pmd(struct vm_area_struct *vma, unsigned long address,
 		pmd_t *pmd, int flags)
 {
+	struct mm_struct *mm = vma->vm_mm;
+	struct hstate *hstate = hstate_vma(vma);
 	struct page *page = NULL;
 	spinlock_t *ptl;
 	pte_t pte;
@@ -7050,8 +7052,7 @@ struct page * __weak
 		return NULL;
 
 retry:
-	ptl = pmd_lockptr(mm, pmd);
-	spin_lock(ptl);
+	ptl = huge_pte_lock(hstate, mm, (pte_t *)pmd);
 	/*
 	 * make sure that the address range covered by this pmd is not
 	 * unmapped from other threads.
-- 
1.8.3.1

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/5] mm/hugetlb: use PMD page lock to protect CONT-PMD entries
Date: Tue, 23 Aug 2022 15:50:04 +0800
Message-Id: <88c8a8c68d87429f0fc48e81100f19b71f6e664f.1661240170.git.baolin.wang@linux.alibaba.com>

Since the pmd entries of a CONT-PMD hugetlb cannot span multiple PMD page
tables, we can switch to the PMD page lock, which is much finer grained
than mm->page_table_lock.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/hugetlb.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3a96f67..d4803a89 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -892,9 +892,17 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
 static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
 					struct mm_struct *mm, pte_t *pte)
 {
-	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
+	unsigned long hp_size = huge_page_size(h);
 
-	if (huge_page_size(h) == PMD_SIZE) {
+	VM_BUG_ON(hp_size == PAGE_SIZE);
+
+	/*
+	 * Considering CONT-PMD size hugetlb, since the CONT-PMD entry
+	 * can not span multiple PMDs, then we can use the fine grained
+	 * PMD page lock.
+	 */
+	if (hp_size == PMD_SIZE ||
+	    (hp_size > PMD_SIZE && hp_size < PUD_SIZE)) {
 		return pmd_lockptr(mm, (pmd_t *) pte);
 	} else if (huge_page_size(h) < PMD_SIZE) {
 		unsigned long mask = ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
-- 
1.8.3.1

From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/5] mm/hugetlb: add FOLL_MIGRATION validation before waiting for a migration entry
Date: Tue, 23 Aug 2022 15:50:05 +0800
Message-Id: <2aa2856012baa9f7251c993ee0f1406a51185a83.1661240170.git.baolin.wang@linux.alibaba.com>

Hugetlb should follow the same logic as normal pages when waiting for a
migration pte entry: validate that the FOLL_MIGRATION flag is set before
waiting for a migration pte entry of a hugetlb page.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/hugetlb.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2c4048a..6430b74 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7075,6 +7075,11 @@ struct page * __weak
 			goto out;
 		}
 	} else {
+		if (!(flags & FOLL_MIGRATION)) {
+			page = NULL;
+			goto out;
+		}
+
 		if (is_hugetlb_entry_migration(pte)) {
 			spin_unlock(ptl);
 			__migration_entry_wait_huge((pte_t *)pmd, ptl);
@@ -7113,6 +7118,11 @@ struct page * __weak
 			goto out;
 		}
 	} else {
+		if (!(flags & FOLL_MIGRATION)) {
+			page = NULL;
+			goto out;
+		}
+
 		if (is_hugetlb_entry_migration(pte)) {
 			spin_unlock(ptl);
 			__migration_entry_wait(mm, (pte_t *)pud, ptl);
-- 
1.8.3.1
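
Putting the locking changes of patches 2/5 and 4/5 side by side, the lock
selection in huge_pte_lockptr() can be read in one place. The sketch below
is illustrative only: it is reconstructed from the hunks above, is not an
additional patch, and assumes the surrounding kernel definitions
(pmd_lockptr(), ptlock_ptr(), virt_to_page(), PTRS_PER_PTE and
huge_page_size()) as they exist in the tree this series is based on.

static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
					struct mm_struct *mm, pte_t *pte)
{
	unsigned long hp_size = huge_page_size(h);

	/* PAGE_SIZE hugetlb does not exist; catch misuse early. */
	VM_BUG_ON(hp_size == PAGE_SIZE);

	/*
	 * PMD and CONT-PMD sizes: all entries sit in a single PMD page
	 * table, so the split PMD page lock is sufficient.
	 */
	if (hp_size == PMD_SIZE ||
	    (hp_size > PMD_SIZE && hp_size < PUD_SIZE))
		return pmd_lockptr(mm, (pmd_t *) pte);

	/*
	 * CONT-PTE sizes: all entries sit in a single PTE page table,
	 * so the split PTE page lock is sufficient.
	 */
	if (hp_size < PMD_SIZE) {
		unsigned long mask = ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
		struct page *page =
			virt_to_page((void *)((unsigned long)pte & mask));

		return ptlock_ptr(page);
	}

	/* PUD size and larger keep using the mm-wide page_table_lock. */
	return &mm->page_table_lock;
}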