From: Peng Zhang <zhangpeng362@huawei.com>
Subject: [PATCH v6 3/6] userfaultfd: convert copy_huge_page_from_user() to copy_folio_from_user()
Date: Mon, 10 Apr 2023 21:39:29 +0800
Message-ID: <20230410133932.32288-4-zhangpeng362@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230410133932.32288-1-zhangpeng362@huawei.com>
References: <20230410133932.32288-1-zhangpeng362@huawei.com>

From: ZhangPeng <zhangpeng362@huawei.com>

Replace copy_huge_page_from_user() with copy_folio_from_user().
copy_folio_from_user() does the same as copy_huge_page_from_user(), but
takes in a folio instead of a page. Rename page_kaddr to kaddr in
copy_folio_from_user() as an indentation cleanup.

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Sidhartha Kumar
Reviewed-by: Mike Kravetz
---
 include/linux/mm.h |  7 +++----
 mm/hugetlb.c       |  5 ++---
 mm/memory.c        | 23 +++++++++++------------
 mm/userfaultfd.c   |  6 ++----
 4 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 243bfba378c5..a978413b40a4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3698,10 +3698,9 @@ extern void copy_user_huge_page(struct page *dst, struct page *src,
 				unsigned long addr_hint,
 				struct vm_area_struct *vma,
 				unsigned int pages_per_huge_page);
-extern long copy_huge_page_from_user(struct page *dst_page,
-				const void __user *usr_src,
-				unsigned int pages_per_huge_page,
-				bool allow_pagefault);
+long copy_folio_from_user(struct folio *dst_folio,
+			   const void __user *usr_src,
+			   bool allow_pagefault);
 
 /**
  * vma_is_special_huge - Are transhuge page-table entries considered special?
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7e4a80769c9e..aade1b513474 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6217,9 +6217,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			goto out;
 		}
 
-		ret = copy_huge_page_from_user(&folio->page,
-						(const void __user *) src_addr,
-						pages_per_huge_page(h), false);
+		ret = copy_folio_from_user(folio, (const void __user *) src_addr,
+					   false);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
diff --git a/mm/memory.c b/mm/memory.c
index 808f354bce65..021cab989703 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5868,26 +5868,25 @@ void copy_user_huge_page(struct page *dst, struct page *src,
 	process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
 }
 
-long copy_huge_page_from_user(struct page *dst_page,
-				const void __user *usr_src,
-				unsigned int pages_per_huge_page,
-				bool allow_pagefault)
+long copy_folio_from_user(struct folio *dst_folio,
+			   const void __user *usr_src,
+			   bool allow_pagefault)
 {
-	void *page_kaddr;
+	void *kaddr;
 	unsigned long i, rc = 0;
-	unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
+	unsigned int nr_pages = folio_nr_pages(dst_folio);
+	unsigned long ret_val = nr_pages * PAGE_SIZE;
 	struct page *subpage;
 
-	for (i = 0; i < pages_per_huge_page; i++) {
-		subpage = nth_page(dst_page, i);
-		page_kaddr = kmap_local_page(subpage);
+	for (i = 0; i < nr_pages; i++) {
+		subpage = folio_page(dst_folio, i);
+		kaddr = kmap_local_page(subpage);
 		if (!allow_pagefault)
 			pagefault_disable();
-		rc = copy_from_user(page_kaddr,
-				usr_src + i * PAGE_SIZE, PAGE_SIZE);
+		rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
 		if (!allow_pagefault)
 			pagefault_enable();
-		kunmap_local(page_kaddr);
+		kunmap_local(kaddr);
 
 		ret_val -= (PAGE_SIZE - rc);
 		if (rc)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 313bc683c2b6..1e7dba6c4c5f 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -421,10 +421,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!page);
 
-			err = copy_huge_page_from_user(page,
-						(const void __user *)src_addr,
-						vma_hpagesize / PAGE_SIZE,
-						true);
+			err = copy_folio_from_user(page_folio(page),
+						   (const void __user *)src_addr, true);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;
-- 
2.25.1
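
For anyone converting further call sites: the pattern is that callers stop
passing an explicit page count, since copy_folio_from_user() derives it from
the folio itself via folio_nr_pages(). A minimal sketch of the before/after
shape (fill_dst_old()/fill_dst_new() are hypothetical callers, not part of
this patch):

/* Before: page-based API, the caller supplies the page count. */
static long fill_dst_old(struct page *dst_page, const void __user *src,
			 unsigned int nr_pages)
{
	return copy_huge_page_from_user(dst_page, src, nr_pages, false);
}

/*
 * After: folio-based API; the count argument is gone because the folio
 * knows its own size (folio_nr_pages()). A caller that only holds a
 * struct page can wrap it with page_folio(), as mm/userfaultfd.c does.
 */
static long fill_dst_new(struct folio *dst_folio, const void __user *src)
{
	return copy_folio_from_user(dst_folio, src, false);
}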