From nobody Fri Apr 10 12:34:57 2026
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/5] mm/hugetlb: fix races when looking up a CONT-PMD size hugetlb page
Date: Tue, 23 Aug 2022 15:50:03 +0800

On some architectures (like ARM64), CONT-PTE/PMD size hugetlb pages are
supported, which means that besides PMD/PUD size hugetlb pages (2M and
1G), CONT-PTE/PMD sizes (64K and 32M) are also available when a 4K base
page size is specified.

When looking up a CONT-PMD size hugetlb page with follow_page(),
follow_huge_pmd() always uses the PMD split page-table lock to protect
the pmd entry. However, that is not the correct lock for a CONT-PMD
size hugetlb page, so the pmd entry is unstable under the incorrect
lock, which means the page can still be migrated or poisoned
concurrently, and the lookup can return the wrong page. Thus, change
follow_huge_pmd() to take the lock with huge_pte_lock(), which selects
the correct pmd entry lock for a CONT-PMD size hugetlb page, to fix the
potential race.
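
For reference, huge_pte_lock() derives the lock from the hstate instead
of unconditionally using the split PMD lock. A simplified sketch of the
selection logic from include/linux/hugetlb.h (as of this series'
baseline; the exact helpers may differ between kernel versions):

	static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
					struct mm_struct *mm, pte_t *pte)
	{
		/* Plain PMD size hugetlb uses the split PMD lock. */
		if (huge_page_size(h) == PMD_SIZE)
			return pmd_lockptr(mm, (pmd_t *) pte);
		/*
		 * Any other size (e.g. CONT-PMD, PUD) falls back to
		 * the per-mm page_table_lock.
		 */
		VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
		return &mm->page_table_lock;
	}

	static inline spinlock_t *huge_pte_lock(struct hstate *h,
					struct mm_struct *mm, pte_t *pte)
	{
		spinlock_t *ptl = huge_pte_lockptr(h, mm, pte);

		spin_lock(ptl);
		return ptl;
	}

With a 4K base page size a CONT-PMD hugetlb page is 32M, so
huge_page_size(h) != PMD_SIZE and the writers (fault, migration and
poisoning paths) serialize on mm->page_table_lock, while pmd_lockptr()
returns the split PMD lock that those paths never take for this size.
Taking the lock through huge_pte_lock() makes the lookup use the same
lock as the writers.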
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/hugetlb.h | 4 ++--
 mm/gup.c                | 2 +-
 mm/hugetlb.c            | 7 ++++---
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 4b172a7..3a96f67 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -209,7 +209,7 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
 			    int flags, int pdshift);
 struct page *follow_huge_pte(struct vm_area_struct *vma, unsigned long address,
 			     pmd_t *pmd, int flags);
-struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+struct page *follow_huge_pmd(struct vm_area_struct *vma, unsigned long address,
 			     pmd_t *pmd, int flags);
 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
 			     pud_t *pud, int flags);
@@ -320,7 +320,7 @@ static inline struct page *follow_huge_pte(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *follow_huge_pmd(struct mm_struct *mm,
+static inline struct page *follow_huge_pmd(struct vm_area_struct *vma,
 				unsigned long address, pmd_t *pmd, int flags)
 {
 	return NULL;
diff --git a/mm/gup.c b/mm/gup.c
index 87a94f5..014accd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -673,7 +673,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	if (pmd_none(pmdval))
 		return no_page_table(vma, flags);
 	if (pmd_huge(pmdval) && is_vm_hugetlb_page(vma)) {
-		page = follow_huge_pmd(mm, address, pmd, flags);
+		page = follow_huge_pmd(vma, address, pmd, flags);
 		if (page)
 			return page;
 		return no_page_table(vma, flags);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cf742d1..2c4048a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7035,9 +7035,11 @@ struct page * __weak
 }
 
 struct page * __weak
-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+follow_huge_pmd(struct vm_area_struct *vma, unsigned long address,
 		pmd_t *pmd, int flags)
 {
+	struct mm_struct *mm = vma->vm_mm;
+	struct hstate *hstate = hstate_vma(vma);
 	struct page *page = NULL;
 	spinlock_t *ptl;
 	pte_t pte;
@@ -7050,8 +7052,7 @@ struct page * __weak
 		return NULL;
 
 retry:
-	ptl = pmd_lockptr(mm, pmd);
-	spin_lock(ptl);
+	ptl = huge_pte_lock(hstate, mm, (pte_t *)pmd);
 	/*
 	 * make sure that the address range covered by this pmd is not
 	 * unmapped from other threads.
-- 
1.8.3.1