From: James Houghton <jthoughton@google.com>
Date: Thu, 5 Jan 2023 10:18:25 +0000
Subject: [PATCH 27/46] hugetlb: add HGM support for move_hugetlb_page_tables
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
    Zach O'Keefe, Manish Mishra, Naoya Horiguchi, Dr. David Alan Gilbert,
    Matthew Wilcox (Oracle), Vlastimil Babka, Baolin Wang, Miaohe Lin,
    Yang Shi, Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    James Houghton
Message-ID: <20230105101844.1893104-28-jthoughton@google.com>
In-Reply-To: <20230105101844.1893104-1-jthoughton@google.com>
References: <20230105101844.1893104-1-jthoughton@google.com>
This is very similar to the support that was added to
copy_hugetlb_page_range. We simply do a high-granularity walk now, and
most of the rest of the code stays the same.

Signed-off-by: James Houghton <jthoughton@google.com>
---
 mm/hugetlb.c | 47 +++++++++++++++++++++++++++--------------------
 1 file changed, 27 insertions(+), 20 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 21a5116f509b..582d14a206b5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5313,16 +5313,16 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	return ret;
 }
 
-static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
-			  unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte)
+static void move_hugetlb_pte(struct vm_area_struct *vma, unsigned long old_addr,
+			     unsigned long new_addr, struct hugetlb_pte *src_hpte,
+			     struct hugetlb_pte *dst_hpte)
 {
-	struct hstate *h = hstate_vma(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	spinlock_t *src_ptl, *dst_ptl;
 	pte_t pte;
 
-	dst_ptl = huge_pte_lock(h, mm, dst_pte);
-	src_ptl = huge_pte_lockptr(huge_page_shift(h), mm, src_pte);
+	dst_ptl = hugetlb_pte_lock(dst_hpte);
+	src_ptl = hugetlb_pte_lockptr(src_hpte);
 
 	/*
 	 * We don't have to worry about the ordering of src and dst ptlocks
@@ -5331,8 +5331,8 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
 	if (src_ptl != dst_ptl)
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 
-	pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
-	set_huge_pte_at(mm, new_addr, dst_pte, pte);
+	pte = huge_ptep_get_and_clear(mm, old_addr, src_hpte->ptep);
+	set_huge_pte_at(mm, new_addr, dst_hpte->ptep, pte);
 
 	if (src_ptl != dst_ptl)
 		spin_unlock(src_ptl);
@@ -5350,9 +5350,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long old_end = old_addr + len;
 	unsigned long last_addr_mask;
-	pte_t *src_pte, *dst_pte;
 	struct mmu_notifier_range range;
 	bool shared_pmd = false;
+	struct hugetlb_pte src_hpte, dst_hpte;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, old_addr,
 				old_end);
@@ -5368,28 +5368,35 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	/* Prevent race with file truncation */
 	hugetlb_vma_lock_write(vma);
 	i_mmap_lock_write(mapping);
-	for (; old_addr < old_end; old_addr += sz, new_addr += sz) {
-		src_pte = hugetlb_walk(vma, old_addr, sz);
-		if (!src_pte) {
-			old_addr |= last_addr_mask;
-			new_addr |= last_addr_mask;
+	while (old_addr < old_end) {
+		if (hugetlb_full_walk(&src_hpte, vma, old_addr)) {
+			/* The hstate-level PTE wasn't allocated. */
+			old_addr = (old_addr | last_addr_mask) + sz;
+			new_addr = (new_addr | last_addr_mask) + sz;
 			continue;
 		}
-		if (huge_pte_none(huge_ptep_get(src_pte)))
+
+		if (huge_pte_none(huge_ptep_get(src_hpte.ptep))) {
+			old_addr += hugetlb_pte_size(&src_hpte);
+			new_addr += hugetlb_pte_size(&src_hpte);
 			continue;
+		}
 
-		if (huge_pmd_unshare(mm, vma, old_addr, src_pte)) {
+		if (hugetlb_pte_size(&src_hpte) == sz &&
+		    huge_pmd_unshare(mm, vma, old_addr, src_hpte.ptep)) {
 			shared_pmd = true;
-			old_addr |= last_addr_mask;
-			new_addr |= last_addr_mask;
+			old_addr = (old_addr | last_addr_mask) + sz;
+			new_addr = (new_addr | last_addr_mask) + sz;
 			continue;
 		}
 
-		dst_pte = huge_pte_alloc(mm, new_vma, new_addr, sz);
-		if (!dst_pte)
+		if (hugetlb_full_walk_alloc(&dst_hpte, new_vma, new_addr,
+					    hugetlb_pte_size(&src_hpte)))
 			break;
 
-		move_huge_pte(vma, old_addr, new_addr, src_pte, dst_pte);
+		move_hugetlb_pte(vma, old_addr, new_addr, &src_hpte, &dst_hpte);
+		old_addr += hugetlb_pte_size(&src_hpte);
+		new_addr += hugetlb_pte_size(&src_hpte);
 	}
 
 	if (shared_pmd)
-- 
2.39.0.314.g84b9a713c41-goog
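
P.S. For readers following the HGM series: the heart of this change is that
the loop no longer advances by the hstate's fixed huge-page size (sz), but by
the size of whatever mapping level the walk stopped at, so one range can mix
a PMD-sized step with many PTE-sized steps. The standalone userspace program
below illustrates only that stepping pattern; it is a toy sketch, not kernel
code: toy_hpte, fake_full_walk(), and toy_hpte_size() are invented stand-ins
for struct hugetlb_pte, hugetlb_full_walk(), and hugetlb_pte_size().

/*
 * Toy model of the stepping logic in the new move_hugetlb_page_tables()
 * loop. Not kernel code; it only demonstrates advancing by the size of
 * the level the walk found, rather than by a fixed huge-page stride.
 */
#include <stdio.h>

struct toy_hpte {
	unsigned int shift;	/* page-table level the walk stopped at */
};

/* Bytes covered by the PTE the walk found (cf. hugetlb_pte_size()). */
static unsigned long toy_hpte_size(const struct toy_hpte *hpte)
{
	return 1UL << hpte->shift;
}

/*
 * Pretend walk: the first half of the range is mapped at 2M (PMD)
 * granularity, the rest at 4K (PTE) granularity, as HGM might leave a
 * VMA after a partial UFFDIO_CONTINUE. Returns 0 on success, mirroring
 * hugetlb_full_walk().
 */
static int fake_full_walk(struct toy_hpte *hpte, unsigned long addr,
			  unsigned long half)
{
	hpte->shift = (addr < half) ? 21 : 12;
	return 0;
}

int main(void)
{
	unsigned long old_addr = 0;
	const unsigned long old_end = 4UL << 20;	/* a 4M range */
	struct toy_hpte src_hpte;
	int steps = 0;

	while (old_addr < old_end) {
		if (fake_full_walk(&src_hpte, old_addr, old_end / 2))
			break;	/* real code skips ahead instead */
		/* ...move one PTE's worth of mappings here... */
		old_addr += toy_hpte_size(&src_hpte);
		steps++;
	}
	printf("covered 4M in %d steps (1 PMD-sized + %d PTE-sized)\n",
	       steps, steps - 1);
	return 0;
}

This mixed-granularity stepping is also why the patch replaces the
fixed-stride for loop with a while loop that advances explicitly at each
continue point.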