From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, SeongJae Park, peterx@redhat.com
Subject: [PATCH 1/2] fixup! mm: make HPAGE_PXD_* macros even if !THP
Date: Fri, 22 Mar 2024 23:33:09 -0400
Message-ID: <20240323033310.971447-2-peterx@redhat.com>
In-Reply-To: <20240323033310.971447-1-peterx@redhat.com>

From: Peter Xu <peterx@redhat.com>

[To be squashed into the corresponding patch]

I initially wanted to simply put all the macros under
PGTABLE_HAS_HUGE_LEAVES, but that cannot work: HPAGE_PMD_SHIFT must stay
defined even when PMD_SHIFT is not (the !MMU case).  The only solution
is to use the old trick of "({ BUILD_BUG(); 0; })", which keeps the
macro defined everywhere while turning any actual use of it into a
build failure.
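For reference, the trick relies on a GNU C statement expression: the
macro stays defined, so dependent macros such as HPAGE_PMD_ORDER remain
definable, but any code path that actually expands it fails to compile.
A minimal standalone sketch (not from the patch; BUILD_BUG() is emulated
here with an always-false _Static_assert rather than the kernel's
<linux/build_bug.h> machinery, and the names are made up):

/* Build with gcc; statement expressions are a GNU extension. */
#include <stdio.h>

/* Stand-in for the kernel's BUILD_BUG(): fails whenever expanded. */
#define MY_BUILD_BUG()	_Static_assert(0, "invalid use of stub macro")

#ifdef HAVE_REAL_SHIFT
#define MY_HPAGE_SHIFT	21				/* "real" definition */
#else
#define MY_HPAGE_SHIFT	({ MY_BUILD_BUG(); 0; })	/* stub definition */
#endif

int main(void)
{
	/* This file compiles fine as long as MY_HPAGE_SHIFT is never
	 * used.  Uncomment the line below without -DHAVE_REAL_SHIFT and
	 * the build breaks, which is exactly the point of the trick:
	 */
	/* printf("%d\n", MY_HPAGE_SHIFT); */
	return 0;
}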
Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: SeongJae Park
---
 include/linux/huge_mm.h | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f451f0bdab97..d210c849ab7a 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -84,17 +84,23 @@ extern struct kobj_attribute shmem_enabled_attr;
 #define thp_vma_allowable_order(vma, vm_flags, smaps, in_pf, enforce_sysfs, order) \
 	(!!thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, enforce_sysfs, BIT(order)))
 
+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
 #define HPAGE_PMD_SHIFT PMD_SHIFT
-#define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
-#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
+#define HPAGE_PUD_SHIFT PUD_SHIFT
+#else
+#define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
+#define HPAGE_PUD_SHIFT ({ BUILD_BUG(); 0; })
+#endif
+
 #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
 #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
+#define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
+#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
 
-#define HPAGE_PUD_SHIFT PUD_SHIFT
-#define HPAGE_PUD_SIZE	((1UL) << HPAGE_PUD_SHIFT)
-#define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))
+#define HPAGE_PUD_SIZE	((1UL) << HPAGE_PUD_SHIFT)
+#define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-- 
2.44.0
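To see what the derived definitions in the patch above expand to in
practice, here is a quick standalone check with x86-64-style constants
(PMD_SHIFT = 21, PAGE_SHIFT = 12) plugged in; the concrete values are
illustrative and not taken from the patch:

/* Sanity-check the derived HPAGE_PMD_* values; illustrative only. */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21

#define HPAGE_PMD_SHIFT	PMD_SHIFT
#define HPAGE_PMD_ORDER	(HPAGE_PMD_SHIFT-PAGE_SHIFT)
#define HPAGE_PMD_NR	(1<<HPAGE_PMD_ORDER)
#define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	/* Prints: order=9 nr=512 size=2097152 mask=0xffffffffffe00000 */
	printf("order=%d nr=%d size=%lu mask=%#lx\n",
	       HPAGE_PMD_ORDER, HPAGE_PMD_NR, HPAGE_PMD_SIZE,
	       HPAGE_PMD_MASK);
	return 0;
}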
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, SeongJae Park, peterx@redhat.com
Subject: [PATCH 2/2] fixup! mm/gup: handle hugepd for follow_page()
Date: Fri, 22 Mar 2024 23:33:10 -0400
Message-ID: <20240323033310.971447-3-peterx@redhat.com>
In-Reply-To: <20240323033310.971447-1-peterx@redhat.com>

From: Peter Xu <peterx@redhat.com>

The major issue is that slow gup now reuses some fast gup functions to
parse hugepd entries.  So we need to move the hugepd handling and its
helpers out of the HAVE_FAST_GUP section, while still keeping them
under CONFIG_MMU.

Meanwhile, the helper record_subpages() can be used by either the
hugepd section or the fast-gup section.  To avoid "unused function"
warnings, it unfortunately has to be guarded by an #if covering both
configs.
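The warning-avoidance pattern, reduced to its essentials (all names
below are made up for illustration; this is not kernel code): a static
helper shared by two conditionally compiled sections must itself be
compiled only when at least one of its users is, otherwise gcc's
-Wunused-function fires in configs where both users are disabled.

#if defined(CONFIG_FEATURE_A) || defined(CONFIG_FEATURE_B)
/* Compiled in exactly when someone below can call it. */
static int shared_helper(int x)
{
	return x * 2;
}
#endif /* CONFIG_FEATURE_A || CONFIG_FEATURE_B */

#ifdef CONFIG_FEATURE_A
int feature_a(int x)
{
	return shared_helper(x) + 1;
}
#endif

#ifdef CONFIG_FEATURE_B
int feature_b(int x)
{
	return shared_helper(x) - 1;
}
#endif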
Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: SeongJae Park
---
 mm/gup.c | 287 +++++++++++++++++++++++++++----------------------------
 1 file changed, 143 insertions(+), 144 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 4cd349390477..fe9df268bef2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,11 +30,6 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
-static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
-				  unsigned long addr, unsigned int pdshift,
-				  unsigned int flags,
-				  struct follow_page_context *ctx);
-
 static inline void sanity_check_pinned_pages(struct page **pages,
 					     unsigned long npages)
 {
@@ -505,6 +500,149 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 }
 
 #ifdef CONFIG_MMU
+
+#if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_FAST_GUP)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
+{
+	struct page *start_page;
+	int nr;
+
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
+	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
+		pages[nr] = nth_page(start_page, nr);
+
+	return nr;
+}
+#endif	/* CONFIG_ARCH_HAS_HUGEPD || CONFIG_HAVE_FAST_GUP */
+
+#ifdef CONFIG_ARCH_HAS_HUGEPD
+static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
+				      unsigned long sz)
+{
+	unsigned long __boundary = (addr + sz) & ~(sz-1);
+	return (__boundary - 1 < end - 1) ? __boundary : end;
+}
+
+static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
+		       unsigned long end, unsigned int flags,
+		       struct page **pages, int *nr)
+{
+	unsigned long pte_end;
+	struct page *page;
+	struct folio *folio;
+	pte_t pte;
+	int refs;
+
+	pte_end = (addr + sz) & ~(sz-1);
+	if (pte_end < end)
+		end = pte_end;
+
+	pte = huge_ptep_get(ptep);
+
+	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
+		return 0;
+
+	/* hugepages are never "special" */
+	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
+
+	folio = try_grab_folio(page, refs, flags);
+	if (!folio)
+		return 0;
+
+	if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
+		gup_put_folio(folio, refs, flags);
+		return 0;
+	}
+
+	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
+		gup_put_folio(folio, refs, flags);
+		return 0;
+	}
+
+	*nr += refs;
+	folio_set_referenced(folio);
+	return 1;
+}
+
+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates. When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios. See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
+static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+		       unsigned int pdshift, unsigned long end, unsigned int flags,
+		       struct page **pages, int *nr)
+{
+	pte_t *ptep;
+	unsigned long sz = 1UL << hugepd_shift(hugepd);
+	unsigned long next;
+
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	do {
+		next = hugepte_addr_end(addr, end, sz);
+		if (!gup_hugepte(ptep, sz, addr, end, flags, pages, nr))
+			return 0;
+	} while (ptep++, addr = next, addr != end);
+
+	return 1;
+}
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	struct page *page;
+	struct hstate *h;
+	spinlock_t *ptl;
+	int nr = 0, ret;
+	pte_t *ptep;
+
+	/* Only hugetlb supports hugepd */
+	if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
+		return ERR_PTR(-EFAULT);
+
+	h = hstate_vma(vma);
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+			  flags, &page, &nr);
+	spin_unlock(ptl);
+
+	if (ret) {
+		WARN_ON_ONCE(nr != 1);
+		ctx->page_mask = (1U << huge_page_order(h)) - 1;
+		return page;
+	}
+
+	return NULL;
+}
+#else /* CONFIG_ARCH_HAS_HUGEPD */
+static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+		unsigned int pdshift, unsigned long end, unsigned int flags,
+		struct page **pages, int *nr)
+{
+	return 0;
+}
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	return NULL;
+}
+#endif /* CONFIG_ARCH_HAS_HUGEPD */
+
+
 static struct page *no_page_table(struct vm_area_struct *vma,
 		unsigned int flags, unsigned long address)
 {
@@ -2962,145 +3100,6 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long sz,
-			   unsigned long addr, unsigned long end,
-			   struct page **pages)
-{
-	struct page *start_page;
-	int nr;
-
-	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
-	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(start_page, nr);
-
-	return nr;
-}
-
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
-				      unsigned long sz)
-{
-	unsigned long __boundary = (addr + sz) & ~(sz-1);
-	return (__boundary - 1 < end - 1) ? __boundary : end;
-}
-
-static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
-		       unsigned long end, unsigned int flags,
-		       struct page **pages, int *nr)
-{
-	unsigned long pte_end;
-	struct page *page;
-	struct folio *folio;
-	pte_t pte;
-	int refs;
-
-	pte_end = (addr + sz) & ~(sz-1);
-	if (pte_end < end)
-		end = pte_end;
-
-	pte = huge_ptep_get(ptep);
-
-	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
-		return 0;
-
-	/* hugepages are never "special" */
-	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
-
-	page = pte_page(pte);
-	refs = record_subpages(page, sz, addr, end, pages + *nr);
-
-	folio = try_grab_folio(page, refs, flags);
-	if (!folio)
-		return 0;
-
-	if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
-	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
-	*nr += refs;
-	folio_set_referenced(folio);
-	return 1;
-}
-
-/*
- * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
- * systems on Power, which does not have issue with folio writeback against
- * GUP updates. When hugepd will be extended to support non-hugetlbfs or
- * even anonymous memory, we need to do extra check as what we do with most
- * of the other folios. See writable_file_mapping_allowed() and
- * folio_fast_pin_allowed() for more information.
- */
-static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-		       unsigned int pdshift, unsigned long end, unsigned int flags,
-		       struct page **pages, int *nr)
-{
-	pte_t *ptep;
-	unsigned long sz = 1UL << hugepd_shift(hugepd);
-	unsigned long next;
-
-	ptep = hugepte_offset(hugepd, addr, pdshift);
-	do {
-		next = hugepte_addr_end(addr, end, sz);
-		if (!gup_hugepte(ptep, sz, addr, end, flags, pages, nr))
-			return 0;
-	} while (ptep++, addr = next, addr != end);
-
-	return 1;
-}
-
-static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
-				  unsigned long addr, unsigned int pdshift,
-				  unsigned int flags,
-				  struct follow_page_context *ctx)
-{
-	struct page *page;
-	struct hstate *h;
-	spinlock_t *ptl;
-	int nr = 0, ret;
-	pte_t *ptep;
-
-	/* Only hugetlb supports hugepd */
-	if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
-		return ERR_PTR(-EFAULT);
-
-	h = hstate_vma(vma);
-	ptep = hugepte_offset(hugepd, addr, pdshift);
-	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
-	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
-			  flags, &page, &nr);
-	spin_unlock(ptl);
-
-	if (ret) {
-		WARN_ON_ONCE(nr != 1);
-		ctx->page_mask = (1U << huge_page_order(h)) - 1;
-		return page;
-	}
-
-	return NULL;
-}
-#else
-static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-		unsigned int pdshift, unsigned long end, unsigned int flags,
-		struct page **pages, int *nr)
-{
-	return 0;
-}
-
-static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
-				  unsigned long addr, unsigned int pdshift,
-				  unsigned int flags,
-				  struct follow_page_context *ctx)
-{
-	return NULL;
-}
-#endif /* CONFIG_ARCH_HAS_HUGEPD */
-
 static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 			unsigned long end, unsigned int flags,
 			struct page **pages, int *nr)
-- 
2.44.0
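As a closing illustration of the boundary arithmetic the moved helpers
rely on: "(addr + sz) & ~(sz - 1)" rounds addr up to the next sz-aligned
boundary (clamped to end), while the offset of addr within the huge
mapping, in units of base pages, picks the first subpage to record.
A standalone sketch with made-up values (2 MiB huge pte, 4 KiB base
pages; illustrative, not kernel code):

/* Demonstrates the rounding/clamping used by hugepte_addr_end() and
 * the first-subpage computation used by record_subpages().
 */
#include <stdio.h>

#define PAGE_SHIFT	12UL
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Same expression as hugepte_addr_end(): round addr up to the next
 * sz-aligned boundary, never running past end. */
static unsigned long addr_end(unsigned long addr, unsigned long end,
			      unsigned long sz)
{
	unsigned long boundary = (addr + sz) & ~(sz - 1);

	return (boundary - 1 < end - 1) ? boundary : end;
}

int main(void)
{
	unsigned long sz = 2UL << 20;			/* 2 MiB leaf */
	unsigned long base = 0x200000UL;		/* leaf-aligned start */
	unsigned long addr = base + 5 * PAGE_SIZE;	/* somewhere inside */
	unsigned long end = base + sz;

	/* Offset of addr inside the leaf, in base pages: subpage 5. */
	unsigned long first = (addr & (sz - 1)) >> PAGE_SHIFT;

	/* Prints: iterate until 0x400000, starting at subpage 5 */
	printf("iterate until %#lx, starting at subpage %lu\n",
	       addr_end(addr, end, sz), first);
	return 0;
}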