From nobody Sat Feb 7 12:19:43 2026
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Stoakes, John Hubbard, Andrew Morton, Mike Rapoport,
    David Hildenbrand, peterx@redhat.com, Yang Shi, Andrea Arcangeli,
    Vlastimil Babka, "Kirill A . Shutemov", James Houghton,
    Matthew Wilcox, Mike Kravetz, Hugh Dickins, Jason Gunthorpe
Subject: [PATCH v3 1/8] mm/hugetlb: Handle FOLL_DUMP well in follow_page_mask()
Date: Fri, 23 Jun 2023 10:29:29 -0400
Message-Id: <20230623142936.268456-2-peterx@redhat.com>
In-Reply-To: <20230623142936.268456-1-peterx@redhat.com>
References: <20230623142936.268456-1-peterx@redhat.com>

Firstly, the no_page_table() call is meaningless for hugetlb: it is a
no-op there, because a hugetlb page always satisfies:

  - vma_is_anonymous() == false
  - vma->vm_ops->fault != NULL

So we can safely remove it in hugetlb_follow_page_mask(), along with
the page* variable.

Meanwhile, what follow_hugetlb_page() does actually makes sense for a
dump: we try to fault in the page only if the page cache is already
allocated.  Let's do the same here for follow_page_mask() on hugetlb.

So far this should have zero effect on real dumps, because those still
go through follow_hugetlb_page().  But it may start to influence
follow_page() users who mimic a "dump page" scenario, hopefully in a
good way.

This also paves the way for unifying the hugetlb gup-slow path.

Reviewed-by: Mike Kravetz
Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu
---
 mm/gup.c     | 9 ++-------
 mm/hugetlb.c | 9 +++++++++
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ce14d4d28503..abcd841d94b7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -767,7 +767,6 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
                             struct follow_page_context *ctx)
 {
         pgd_t *pgd;
-        struct page *page;
         struct mm_struct *mm = vma->vm_mm;
 
         ctx->page_mask = 0;
@@ -780,12 +779,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
          * hugetlb_follow_page_mask is only for follow_page() handling here.
          * Ordinary GUP uses follow_hugetlb_page for hugetlb processing.
          */
-        if (is_vm_hugetlb_page(vma)) {
-                page = hugetlb_follow_page_mask(vma, address, flags);
-                if (!page)
-                        page = no_page_table(vma, flags);
-                return page;
-        }
+        if (is_vm_hugetlb_page(vma))
+                return hugetlb_follow_page_mask(vma, address, flags);
 
         pgd = pgd_offset(mm, address);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d76574425da3..f75f5e78ff0b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6498,6 +6498,15 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
                 spin_unlock(ptl);
 out_unlock:
         hugetlb_vma_unlock_read(vma);
+
+        /*
+         * Fixup retval for dump requests: if pagecache doesn't exist,
+         * don't try to allocate a new page but just skip it.
+         */
+        if (!page && (flags & FOLL_DUMP) &&
+            !hugetlbfs_pagecache_present(h, vma, address))
+                page = ERR_PTR(-EFAULT);
+
         return page;
 }
 
-- 
2.40.1
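
For illustration only (not part of the patch), a minimal sketch of what
the new convention gives a follow_page()-style dump caller; the two dump
helpers are hypothetical stand-ins:

        /*
         * Hypothetical dump-side caller.  After the patch, a missing
         * hugetlb page with no pagecache behind it comes back as
         * ERR_PTR(-EFAULT) instead of being faulted in, so the dumper
         * can emit a zero-filled hole rather than allocate a hugepage.
         */
        struct page *page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);

        if (IS_ERR_OR_NULL(page)) {
                dump_zero_hole(addr, PAGE_SIZE);        /* hypothetical */
        } else {
                dump_one_page(page, addr);              /* hypothetical */
                put_page(page);
        }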
From nobody Sat Feb 7 12:19:43 2026
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Stoakes, John Hubbard, Andrew Morton, Mike Rapoport,
    David Hildenbrand, peterx@redhat.com, Yang Shi, Andrea Arcangeli,
    Vlastimil Babka, "Kirill A . Shutemov", James Houghton,
    Matthew Wilcox, Mike Kravetz, Hugh Dickins, Jason Gunthorpe
Subject: [PATCH v3 2/8] mm/hugetlb: Prepare hugetlb_follow_page_mask() for FOLL_PIN
Date: Fri, 23 Jun 2023 10:29:30 -0400
Message-Id: <20230623142936.268456-3-peterx@redhat.com>
In-Reply-To: <20230623142936.268456-1-peterx@redhat.com>
References: <20230623142936.268456-1-peterx@redhat.com>

follow_page() doesn't use FOLL_PIN, and hugetlb doesn't seem to be a
target of FOLL_WRITE either.  Still, add the checks: namely, the need
to CoW due to a missing write bit, and proper unsharing of
!AnonExclusive pages over R/O pins, rejecting the followed page in both
cases.  That brings this function closer to follow_hugetlb_page().

We didn't care before, and we don't strictly need to for now.  But we
will care once slow-gup switches over to use
hugetlb_follow_page_mask().  We will also care about returning -EMLINK
properly then, as that is the gup-internal API meaning "we should
unshare" (not really needed on the follow_page() path, though).

While at it, switch the try_grab_page() failure to WARN_ON_ONCE(), to
make clear that it should never fail.  When an error happens, capture
the errno instead of setting page == NULL.

Reviewed-by: Mike Kravetz
Signed-off-by: Peter Xu
Reviewed-by: David Hildenbrand
---
 mm/hugetlb.c | 31 ++++++++++++++++++-----------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f75f5e78ff0b..27367edf5c72 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6462,13 +6462,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
         struct page *page = NULL;
         spinlock_t *ptl;
         pte_t *pte, entry;
-
-        /*
-         * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
-         * follow_hugetlb_page().
-         */
-        if (WARN_ON_ONCE(flags & FOLL_PIN))
-                return NULL;
+        int ret;
 
         hugetlb_vma_lock_read(vma);
         pte = hugetlb_walk(vma, haddr, huge_page_size(h));
@@ -6478,8 +6472,21 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
         ptl = huge_pte_lock(h, mm, pte);
         entry = huge_ptep_get(pte);
         if (pte_present(entry)) {
-                page = pte_page(entry) +
-                        ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+                page = pte_page(entry);
+
+                if ((flags & FOLL_WRITE) && !huge_pte_write(entry)) {
+                        page = NULL;
+                        goto out;
+                }
+
+                if (gup_must_unshare(vma, flags, page)) {
+                        /* Tell the caller to do unsharing */
+                        page = ERR_PTR(-EMLINK);
+                        goto out;
+                }
+
+                page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+
                 /*
                  * Note that page may be a sub-page, and with vmemmap
                  * optimizations the page struct may be read only.
@@ -6489,8 +6496,10 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
                  * try_grab_page() should always be able to get the page here,
                  * because we hold the ptl lock and have verified pte_present().
                  */
-                if (try_grab_page(page, flags)) {
-                        page = NULL;
+                ret = try_grab_page(page, flags);
+
+                if (WARN_ON_ONCE(ret)) {
+                        page = ERR_PTR(ret);
                         goto out;
                 }
         }
-- 
2.40.1
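
For context, a hedged sketch of the gup-internal -EMLINK convention
adopted above; the retry scaffolding and helper are hypothetical, only
the error code's meaning is taken from the patch:

        /*
         * Hypothetical caller-side handling.  ERR_PTR(-EMLINK) from the
         * follow-page helper means: unshare first (fault with
         * FAULT_FLAG_UNSHARE) before a R/O pin may be taken on a
         * !AnonExclusive page.
         */
retry:
        page = hugetlb_follow_page_mask(vma, address, flags);
        if (IS_ERR(page) && PTR_ERR(page) == -EMLINK) {
                if (!handle_unshare_fault(vma, address))  /* hypothetical */
                        goto retry;
                page = NULL;
        }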
From nobody Sat Feb 7 12:19:43 2026
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Stoakes, John Hubbard, Andrew Morton, Mike Rapoport,
    David Hildenbrand, peterx@redhat.com, Yang Shi, Andrea Arcangeli,
    Vlastimil Babka, "Kirill A . Shutemov", James Houghton,
    Matthew Wilcox, Mike Kravetz, Hugh Dickins, Jason Gunthorpe
Subject: [PATCH v3 3/8] mm/hugetlb: Add page_mask for hugetlb_follow_page_mask()
Date: Fri, 23 Jun 2023 10:29:31 -0400
Message-Id: <20230623142936.268456-4-peterx@redhat.com>
In-Reply-To: <20230623142936.268456-1-peterx@redhat.com>
References: <20230623142936.268456-1-peterx@redhat.com>

follow_page() doesn't need it, but we'll start to need it once gup is
unified for hugetlb.

Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu
---
 include/linux/hugetlb.h | 8 +++++---
 mm/gup.c                | 3 ++-
 mm/hugetlb.c            | 5 ++++-
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index beb7c63d2871..2e2d89e79d6c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -131,7 +131,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
                             struct vm_area_struct *, struct vm_area_struct *);
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-                                      unsigned long address, unsigned int flags);
+                                      unsigned long address, unsigned int flags,
+                                      unsigned int *page_mask);
 long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
                          struct page **, unsigned long *, unsigned long *,
                          long, unsigned int, int *);
@@ -297,8 +298,9 @@ static inline void adjust_range_if_pmd_sharing_possible(
 {
 }
 
-static inline struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-                                unsigned long address, unsigned int flags)
+static inline struct page *hugetlb_follow_page_mask(
+        struct vm_area_struct *vma, unsigned long address, unsigned int flags,
+        unsigned int *page_mask)
 {
         BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
 }
diff --git a/mm/gup.c b/mm/gup.c
index abcd841d94b7..9fc9271cba8d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -780,7 +780,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
          * Ordinary GUP uses follow_hugetlb_page for hugetlb processing.
          */
         if (is_vm_hugetlb_page(vma))
-                return hugetlb_follow_page_mask(vma, address, flags);
+                return hugetlb_follow_page_mask(vma, address, flags,
+                                                &ctx->page_mask);
 
         pgd = pgd_offset(mm, address);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 27367edf5c72..b4973edef9f2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6454,7 +6454,8 @@ static inline bool __follow_hugetlb_must_fault(struct vm_area_struct *vma,
 }
 
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-                                      unsigned long address, unsigned int flags)
+                                      unsigned long address, unsigned int flags,
+                                      unsigned int *page_mask)
 {
         struct hstate *h = hstate_vma(vma);
         struct mm_struct *mm = vma->vm_mm;
@@ -6502,6 +6503,8 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
                         page = ERR_PTR(ret);
                         goto out;
                 }
+
+                *page_mask = (1U << huge_page_order(h)) - 1;
         }
 out:
         spin_unlock(ptl);
-- 
2.40.1
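
For illustration, the consumer side of the new output parameter; the
page_increm line is quoted from __get_user_pages(), the rest is a
simplified sketch:

        unsigned int page_mask = 0;
        unsigned long page_increm;
        struct page *page;

        /*
         * For a present hugetlb mapping, *page_mask now comes back as
         * pages_per_huge_page - 1, so the gup loop can advance over the
         * whole huge page in one step instead of PAGE_SIZE at a time.
         */
        page = hugetlb_follow_page_mask(vma, start, flags, &page_mask);
        page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);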
From nobody Sat Feb 7 12:19:43 2026
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Stoakes, John Hubbard, Andrew Morton, Mike Rapoport,
    David Hildenbrand, peterx@redhat.com, Yang Shi, Andrea Arcangeli,
    Vlastimil Babka, "Kirill A . Shutemov", James Houghton,
    Matthew Wilcox, Mike Kravetz, Hugh Dickins, Jason Gunthorpe
Subject: [PATCH v3 4/8] mm/gup: Cleanup next_page handling
Date: Fri, 23 Jun 2023 10:29:32 -0400
Message-Id: <20230623142936.268456-5-peterx@redhat.com>
In-Reply-To: <20230623142936.268456-1-peterx@redhat.com>
References: <20230623142936.268456-1-peterx@redhat.com>

The only path that doesn't use the generic "**pages" handling is the
gate vma.  Make it use the same path, and move the next_page label up
so that it also covers the "**pages" handling.

This prepares for THP handling of "**pages".

Reviewed-by: Lorenzo Stoakes
Acked-by: David Hildenbrand
Signed-off-by: Peter Xu
---
 mm/gup.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 9fc9271cba8d..4a00d609033e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1124,7 +1124,7 @@ static long __get_user_pages(struct mm_struct *mm,
                 if (!vma && in_gate_area(mm, start)) {
                         ret = get_gate_page(mm, start & PAGE_MASK,
                                         gup_flags, &vma,
-                                        pages ? &pages[i] : NULL);
+                                        pages ? &page : NULL);
                         if (ret)
                                 goto out;
                         ctx.page_mask = 0;
@@ -1194,19 +1194,18 @@ static long __get_user_pages(struct mm_struct *mm,
                                 ret = PTR_ERR(page);
                                 goto out;
                         }
-
-                        goto next_page;
                 } else if (IS_ERR(page)) {
                         ret = PTR_ERR(page);
                         goto out;
                 }
+next_page:
                 if (pages) {
                         pages[i] = page;
                         flush_anon_page(vma, page, start);
                         flush_dcache_page(page);
                         ctx.page_mask = 0;
                 }
-next_page:
+
                 page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
                 if (page_increm > nr_pages)
                         page_increm = nr_pages;
-- 
2.40.1
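
For illustration, the loop shape after this cleanup, condensed from the
hunks above: the gate-vma path and the normal path now fall through a
single bookkeeping block, so the "**pages" handling lives in exactly
one place:

        next_page:
                if (pages) {
                        pages[i] = page;
                        flush_anon_page(vma, page, start);
                        flush_dcache_page(page);
                        ctx.page_mask = 0;
                }

                page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);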
From nobody Sat Feb 7 12:19:43 2026
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Stoakes, John Hubbard, Andrew Morton, Mike Rapoport,
    David Hildenbrand, peterx@redhat.com, Yang Shi, Andrea Arcangeli,
    Vlastimil Babka, "Kirill A . Shutemov", James Houghton,
    Matthew Wilcox, Mike Kravetz, Hugh Dickins, Jason Gunthorpe
Subject: [PATCH v3 5/8] mm/gup: Accelerate thp gup even for "pages != NULL"
Date: Fri, 23 Jun 2023 10:29:33 -0400
Message-Id: <20230623142936.268456-6-peterx@redhat.com>
In-Reply-To: <20230623142936.268456-1-peterx@redhat.com>
References: <20230623142936.268456-1-peterx@redhat.com>

The THP acceleration was done with ctx.page_mask; however, it is
ignored when **pages is non-NULL.

The optimization was introduced in 2013 by commit 240aadeedc4a ("mm:
accelerate mm_populate() treatment of THP pages"), which didn't explain
why the **pages non-NULL case can't be optimized too.  Possibly the
major goal at the time was mm_populate(), for which this was enough.

Optimize thp for all cases by properly looping over each subpage, doing
cache flushes, and boosting refcounts / pincounts where needed in one
go.

This can be verified using gup_test below:

  # chrt -f 1 ./gup_test -m 512 -t -L -n 1024 -r 10

  Before:  13992.50 ( +-8.75%)
  After:     378.50 (+-69.62%)

Reviewed-by: Lorenzo Stoakes
Signed-off-by: Peter Xu
---
 mm/gup.c | 51 ++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 44 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 4a00d609033e..22e32cff9ac7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1199,16 +1199,53 @@ static long __get_user_pages(struct mm_struct *mm,
                         goto out;
                 }
 next_page:
-                if (pages) {
-                        pages[i] = page;
-                        flush_anon_page(vma, page, start);
-                        flush_dcache_page(page);
-                        ctx.page_mask = 0;
-                }
-
                 page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
                 if (page_increm > nr_pages)
                         page_increm = nr_pages;
+
+                if (pages) {
+                        struct page *subpage;
+                        unsigned int j;
+
+                        /*
+                         * This must be a large folio (and doesn't need to
+                         * be the whole folio; it can be part of it), do
+                         * the refcount work for all the subpages too.
+                         *
+                         * NOTE: here the page may not be the head page
+                         * e.g. when start addr is not thp-size aligned.
+                         * try_grab_folio() should have taken care of tail
+                         * pages.
+                         */
+                        if (page_increm > 1) {
+                                struct folio *folio;
+
+                                /*
+                                 * Since we already hold refcount on the
+                                 * large folio, this should never fail.
+                                 */
+                                folio = try_grab_folio(page, page_increm - 1,
+                                                       foll_flags);
+                                if (WARN_ON_ONCE(!folio)) {
+                                        /*
+                                         * Release the 1st page ref if the
+                                         * folio is problematic, fail hard.
+                                         */
+                                        gup_put_folio(page_folio(page), 1,
+                                                      foll_flags);
+                                        ret = -EFAULT;
+                                        goto out;
+                                }
+                        }
+
+                        for (j = 0; j < page_increm; j++) {
+                                subpage = nth_page(page, j);
+                                pages[i + j] = subpage;
+                                flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
+                                flush_dcache_page(subpage);
+                        }
+                }
+
                 i += page_increm;
                 start += page_increm * PAGE_SIZE;
                 nr_pages -= page_increm;
-- 
2.40.1
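
For illustration, the core of the batching condensed from the hunk
above (error handling elided): one try_grab_folio() call boosts the
refcount or pincount for all remaining subpages, then each subpage is
recorded and flushed:

        if (page_increm > 1)
                /* we already hold one ref; take the remaining in one go */
                folio = try_grab_folio(page, page_increm - 1, foll_flags);

        for (j = 0; j < page_increm; j++) {
                subpage = nth_page(page, j);
                pages[i + j] = subpage;
                flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
                flush_dcache_page(subpage);
        }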
From nobody Sat Feb 7 12:19:43 2026
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Stoakes, John Hubbard, Andrew Morton, Mike Rapoport,
    David Hildenbrand, peterx@redhat.com, Yang Shi, Andrea Arcangeli,
    Vlastimil Babka, "Kirill A . Shutemov", James Houghton,
    Matthew Wilcox, Mike Kravetz, Hugh Dickins, Jason Gunthorpe
Subject: [PATCH v3 6/8] mm/gup: Retire follow_hugetlb_page()
Date: Fri, 23 Jun 2023 10:29:34 -0400
Message-Id: <20230623142936.268456-7-peterx@redhat.com>
In-Reply-To: <20230623142936.268456-1-peterx@redhat.com>
References: <20230623142936.268456-1-peterx@redhat.com>

Now that __get_user_pages() is well prepared to handle thp completely,
as well as hugetlb gup requests, even without hugetlb's special path,
it is time to retire follow_hugetlb_page().

Tweak a few comments to reflect follow_hugetlb_page()'s removal.

Signed-off-by: Peter Xu
Acked-by: David Hildenbrand
---
 fs/userfaultfd.c        |   2 +-
 include/linux/hugetlb.h |  12 ---
 mm/gup.c                |  19 ----
 mm/hugetlb.c            | 224 ----------------------------------------
 4 files changed, 1 insertion(+), 256 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 7cecd49e078b..ae711f1d7a83 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -427,7 +427,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
          *
          * We also don't do userfault handling during
          * coredumping. hugetlbfs has the special
-         * follow_hugetlb_page() to skip missing pages in the
+         * hugetlb_follow_page_mask() to skip missing pages in the
          * FOLL_DUMP case, anon memory also checks for FOLL_DUMP with
          * the no_page_table() helper in follow_page_mask(), but the
          * shmem_vm_ops->fault method is invoked even during
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2e2d89e79d6c..bb5024718fc1 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -133,9 +133,6 @@ int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
                                       unsigned long address, unsigned int flags,
                                       unsigned int *page_mask);
-long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
-                         struct page **, unsigned long *, unsigned long *,
-                         long, unsigned int, int *);
 void unmap_hugepage_range(struct vm_area_struct *,
                           unsigned long, unsigned long, struct page *,
                           zap_flags_t);
@@ -305,15 +302,6 @@ static inline struct page *hugetlb_follow_page_mask(
         BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
 }
 
-static inline long follow_hugetlb_page(struct mm_struct *mm,
-                        struct vm_area_struct *vma, struct page **pages,
-                        unsigned long *position, unsigned long *nr_pages,
-                        long i, unsigned int flags, int *nonblocking)
-{
-        BUG();
-        return 0;
-}
-
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
                                           struct mm_struct *src,
                                           struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index 22e32cff9ac7..ac61244e6fca 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -775,9 +775,6 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
          * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
          * special hugetlb page table walking code.  This eliminates the
          * need to check for hugetlb entries in the general walking code.
-         *
-         * hugetlb_follow_page_mask is only for follow_page() handling here.
-         * Ordinary GUP uses follow_hugetlb_page for hugetlb processing.
          */
         if (is_vm_hugetlb_page(vma))
                 return hugetlb_follow_page_mask(vma, address, flags,
@@ -1138,22 +1135,6 @@ static long __get_user_pages(struct mm_struct *mm,
                         ret = check_vma_flags(vma, gup_flags);
                         if (ret)
                                 goto out;
-
-                        if (is_vm_hugetlb_page(vma)) {
-                                i = follow_hugetlb_page(mm, vma, pages,
-                                                        &start, &nr_pages, i,
-                                                        gup_flags, locked);
-                                if (!*locked) {
-                                        /*
-                                         * We've got a VM_FAULT_RETRY
-                                         * and we've lost mmap_lock.
-                                         * We must stop here.
-                                         */
-                                        BUG_ON(gup_flags & FOLL_NOWAIT);
-                                        goto out;
-                                }
-                                continue;
-                        }
                 }
 retry:
                 /*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b4973edef9f2..50a3579782a5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5721,7 +5721,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 
 /*
  * Return whether there is a pagecache page to back given address within VMA.
- * Caller follow_hugetlb_page() holds page_table_lock so we cannot lock_page.
  */
 static bool hugetlbfs_pagecache_present(struct hstate *h,
                         struct vm_area_struct *vma, unsigned long address)
@@ -6422,37 +6421,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-static void record_subpages(struct page *page, struct vm_area_struct *vma,
-                            int refs, struct page **pages)
-{
-        int nr;
-
-        for (nr = 0; nr < refs; nr++) {
-                if (likely(pages))
-                        pages[nr] = nth_page(page, nr);
-        }
-}
-
-static inline bool __follow_hugetlb_must_fault(struct vm_area_struct *vma,
-                                               unsigned int flags, pte_t *pte,
-                                               bool *unshare)
-{
-        pte_t pteval = huge_ptep_get(pte);
-
-        *unshare = false;
-        if (is_swap_pte(pteval))
-                return true;
-        if (huge_pte_write(pteval))
-                return false;
-        if (flags & FOLL_WRITE)
-                return true;
-        if (gup_must_unshare(vma, flags, pte_page(pteval))) {
-                *unshare = true;
-                return true;
-        }
-        return false;
-}
-
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
                                       unsigned long address, unsigned int flags,
                                       unsigned int *page_mask)
@@ -6522,198 +6490,6 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
         return page;
 }
 
-long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
-                         struct page **pages, unsigned long *position,
-                         unsigned long *nr_pages, long i, unsigned int flags,
-                         int *locked)
-{
-        unsigned long pfn_offset;
-        unsigned long vaddr = *position;
-        unsigned long remainder = *nr_pages;
-        struct hstate *h = hstate_vma(vma);
-        int err = -EFAULT, refs;
-
-        while (vaddr < vma->vm_end && remainder) {
-                pte_t *pte;
-                spinlock_t *ptl = NULL;
-                bool unshare = false;
-                int absent;
-                struct page *page;
-
-                /*
-                 * If we have a pending SIGKILL, don't keep faulting pages and
-                 * potentially allocating memory.
-                 */
-                if (fatal_signal_pending(current)) {
-                        remainder = 0;
-                        break;
-                }
-
-                hugetlb_vma_lock_read(vma);
-                /*
-                 * Some archs (sparc64, sh*) have multiple pte_ts to
-                 * each hugepage.  We have to make sure we get the
-                 * first, for the page indexing below to work.
-                 *
-                 * Note that page table lock is not held when pte is null.
-                 */
-                pte = hugetlb_walk(vma, vaddr & huge_page_mask(h),
-                                   huge_page_size(h));
-                if (pte)
-                        ptl = huge_pte_lock(h, mm, pte);
-                absent = !pte || huge_pte_none(huge_ptep_get(pte));
-
-                /*
-                 * When coredumping, it suits get_dump_page if we just return
-                 * an error where there's an empty slot with no huge pagecache
-                 * to back it.  This way, we avoid allocating a hugepage, and
-                 * the sparse dumpfile avoids allocating disk blocks, but its
-                 * huge holes still show up with zeroes where they need to be.
-                 */
-                if (absent && (flags & FOLL_DUMP) &&
-                    !hugetlbfs_pagecache_present(h, vma, vaddr)) {
-                        if (pte)
-                                spin_unlock(ptl);
-                        hugetlb_vma_unlock_read(vma);
-                        remainder = 0;
-                        break;
-                }
-
-                /*
-                 * We need call hugetlb_fault for both hugepages under migration
-                 * (in which case hugetlb_fault waits for the migration,) and
-                 * hwpoisoned hugepages (in which case we need to prevent the
-                 * caller from accessing to them.) In order to do this, we use
-                 * here is_swap_pte instead of is_hugetlb_entry_migration and
-                 * is_hugetlb_entry_hwpoisoned. This is because it simply covers
-                 * both cases, and because we can't follow correct pages
-                 * directly from any kind of swap entries.
-                 */
-                if (absent ||
-                    __follow_hugetlb_must_fault(vma, flags, pte, &unshare)) {
-                        vm_fault_t ret;
-                        unsigned int fault_flags = 0;
-
-                        if (pte)
-                                spin_unlock(ptl);
-                        hugetlb_vma_unlock_read(vma);
-
-                        if (flags & FOLL_WRITE)
-                                fault_flags |= FAULT_FLAG_WRITE;
-                        else if (unshare)
-                                fault_flags |= FAULT_FLAG_UNSHARE;
-                        if (locked) {
-                                fault_flags |= FAULT_FLAG_ALLOW_RETRY |
-                                        FAULT_FLAG_KILLABLE;
-                                if (flags & FOLL_INTERRUPTIBLE)
-                                        fault_flags |= FAULT_FLAG_INTERRUPTIBLE;
-                        }
-                        if (flags & FOLL_NOWAIT)
-                                fault_flags |= FAULT_FLAG_ALLOW_RETRY |
-                                        FAULT_FLAG_RETRY_NOWAIT;
-                        if (flags & FOLL_TRIED) {
-                                /*
-                                 * Note: FAULT_FLAG_ALLOW_RETRY and
-                                 * FAULT_FLAG_TRIED can co-exist
-                                 */
-                                fault_flags |= FAULT_FLAG_TRIED;
-                        }
-                        ret = hugetlb_fault(mm, vma, vaddr, fault_flags);
-                        if (ret & VM_FAULT_ERROR) {
-                                err = vm_fault_to_errno(ret, flags);
-                                remainder = 0;
-                                break;
-                        }
-                        if (ret & VM_FAULT_RETRY) {
-                                if (locked &&
-                                    !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
-                                        *locked = 0;
-                                *nr_pages = 0;
-                                /*
-                                 * VM_FAULT_RETRY must not return an
-                                 * error, it will return zero
-                                 * instead.
-                                 *
-                                 * No need to update "position" as the
-                                 * caller will not check it after
-                                 * *nr_pages is set to 0.
-                                 */
-                                return i;
-                        }
-                        continue;
-                }
-
-                pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
-                page = pte_page(huge_ptep_get(pte));
-
-                VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
-                               !PageAnonExclusive(page), page);
-
-                /*
-                 * If subpage information not requested, update counters
-                 * and skip the same_page loop below.
-                 */
-                if (!pages && !pfn_offset &&
-                    (vaddr + huge_page_size(h) < vma->vm_end) &&
-                    (remainder >= pages_per_huge_page(h))) {
-                        vaddr += huge_page_size(h);
-                        remainder -= pages_per_huge_page(h);
-                        i += pages_per_huge_page(h);
-                        spin_unlock(ptl);
-                        hugetlb_vma_unlock_read(vma);
-                        continue;
-                }
-
-                /* vaddr may not be aligned to PAGE_SIZE */
-                refs = min3(pages_per_huge_page(h) - pfn_offset, remainder,
-                    (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT);
-
-                if (pages)
-                        record_subpages(nth_page(page, pfn_offset),
-                                        vma, refs,
-                                        likely(pages) ? pages + i : NULL);
-
-                if (pages) {
-                        /*
-                         * try_grab_folio() should always succeed here,
-                         * because: a) we hold the ptl lock, and b) we've just
-                         * checked that the huge page is present in the page
-                         * tables. If the huge page is present, then the tail
-                         * pages must also be present. The ptl prevents the
-                         * head page and tail pages from being rearranged in
-                         * any way. As this is hugetlb, the pages will never
-                         * be p2pdma or not longterm pinable. So this page
-                         * must be available at this point, unless the page
-                         * refcount overflowed:
-                         */
-                        if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
-                                                         flags))) {
-                                spin_unlock(ptl);
-                                hugetlb_vma_unlock_read(vma);
-                                remainder = 0;
-                                err = -ENOMEM;
-                                break;
-                        }
-                }
-
-                vaddr += (refs << PAGE_SHIFT);
-                remainder -= refs;
-                i += refs;
-
-                spin_unlock(ptl);
-                hugetlb_vma_unlock_read(vma);
-        }
-        *nr_pages = remainder;
-        /*
-         * setting position is actually required only if remainder is
-         * not zero but it's faster not to add a "if (remainder)"
-         * branch.
-         */
-        *position = vaddr;
-
-        return i ? i : err;
-}
-
 long hugetlb_change_protection(struct vm_area_struct *vma,
                 unsigned long address, unsigned long end,
                 pgprot_t newprot, unsigned long cp_flags)
-- 
2.40.1
From nobody Sat Feb 7 12:19:43 2026
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Stoakes, John Hubbard, Andrew Morton, Mike Rapoport,
    David Hildenbrand, peterx@redhat.com, Yang Shi, Andrea Arcangeli,
    Vlastimil Babka, "Kirill A . Shutemov", James Houghton,
    Matthew Wilcox, Mike Kravetz, Hugh Dickins, Jason Gunthorpe
Subject: [PATCH v3 7/8] selftests/mm: Add -a to run_vmtests.sh
Date: Fri, 23 Jun 2023 10:29:35 -0400
Message-Id: <20230623142936.268456-8-peterx@redhat.com>
In-Reply-To: <20230623142936.268456-1-peterx@redhat.com>
References: <20230623142936.268456-1-peterx@redhat.com>

Allow specifying optional tests in run_vmtests.sh, so that the
time-consuming test matrix is only run when the user specifies "-a".

Signed-off-by: Peter Xu
Acked-by: David Hildenbrand
---
 tools/testing/selftests/mm/run_vmtests.sh | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 3f26f6e15b2a..824e651f62f4 100644
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -12,11 +12,14 @@ exitcode=0
 
 usage() {
         cat <<EOF
-usage: ${BASH_SOURCE[0]:-$0} [ -h | -t "<categories>"]
+usage: ${BASH_SOURCE[0]:-$0} [ options ]
+
+  -a: run all tests, including extra ones
   -t: specify specific categories to tests to run
   -h: display this message
 
-The default behavior is to run all tests.
+The default behavior is to run required tests only.  If -a is specified,
+will run all tests.
 
 Alternatively, specific groups tests can be run by passing a string
 to the -t argument containing one or more of the following categories
@@ -60,9 +63,11 @@ EOF
         exit 0
 }
 
+RUN_ALL=false
 
-while getopts "ht:" OPT; do
+while getopts "aht:" OPT; do
         case ${OPT} in
+                "a") RUN_ALL=true ;;
                 "h") usage ;;
                 "t") VM_SELFTEST_ITEMS=${OPTARG} ;;
         esac
-- 
2.40.1
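
For illustration, example invocations of the updated script (run from
tools/testing/selftests/mm):

  # default: required tests only
  ./run_vmtests.sh
  # opt in to the extra, time-consuming tests as well
  ./run_vmtests.sh -a
  # restrict the run to one category, as before
  ./run_vmtests.sh -t mmap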
From nobody Sat Feb 7 12:19:43 2026
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Stoakes, John Hubbard, Andrew Morton, Mike Rapoport,
    David Hildenbrand, peterx@redhat.com, Yang Shi, Andrea Arcangeli,
    Vlastimil Babka, "Kirill A . Shutemov", James Houghton,
    Matthew Wilcox, Mike Kravetz, Hugh Dickins, Jason Gunthorpe
Subject: [PATCH v3 8/8] selftests/mm: Add gup test matrix in run_vmtests.sh
Date: Fri, 23 Jun 2023 10:29:36 -0400
Message-Id: <20230623142936.268456-9-peterx@redhat.com>
In-Reply-To: <20230623142936.268456-1-peterx@redhat.com>
References: <20230623142936.268456-1-peterx@redhat.com>

Add a matrix for testing gup based on the current gup_test.  Only run
the matrix when -a is specified, because it is a bit slow.

It covers:

- Different types of huge pages: thp, hugetlb, or no huge page
- Permissions: write / read-only
- Fast-gup, with / without
- Types of the GUP: pin / gup / longterm pins
- Shared / private memories
- GUP size: 1 / 512 / random page sizes

Signed-off-by: Peter Xu
Acked-by: David Hildenbrand
---
 tools/testing/selftests/mm/run_vmtests.sh | 37 ++++++++++++++++++++---
 1 file changed, 32 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 824e651f62f4..9666c0c171ab 100644
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -90,6 +90,30 @@ test_selected() {
         fi
 }
 
+run_gup_matrix() {
+    # -t: thp=on, -T: thp=off, -H: hugetlb=on
+    local hugetlb_mb=$(( needmem_KB / 1024 ))
+
+    for huge in -t -T "-H -m $hugetlb_mb"; do
+        # -u: gup-fast, -U: gup-basic, -a: pin-fast, -b: pin-basic, -L: pin-longterm
+        for test_cmd in -u -U -a -b -L; do
+            # -w: write=1, -W: write=0
+            for write in -w -W; do
+                # -S: shared
+                for share in -S " "; do
+                    # -n: How many pages to fetch together?  512 is special
+                    # because it's default thp size (or 2M on x86), 123 to
+                    # just test partial gup when hit a huge in whatever form
+                    for num in "-n 1" "-n 512" "-n 123"; do
+                        CATEGORY="gup_test" run_test ./gup_test \
+                                $huge $test_cmd $write $share $num
+                    done
+                done
+            done
+        done
+    done
+}
+
 # get huge pagesize and freepages from /proc/meminfo
 while read -r name size unit; do
         if [ "$name" = "HugePages_Free:" ]; then
@@ -194,13 +218,16 @@ fi
 
 CATEGORY="mmap" run_test ./map_fixed_noreplace
 
-# get_user_pages_fast() benchmark
-CATEGORY="gup_test" run_test ./gup_test -u
-# pin_user_pages_fast() benchmark
-CATEGORY="gup_test" run_test ./gup_test -a
+if $RUN_ALL; then
+    run_gup_matrix
+else
+    # get_user_pages_fast() benchmark
+    CATEGORY="gup_test" run_test ./gup_test -u
+    # pin_user_pages_fast() benchmark
+    CATEGORY="gup_test" run_test ./gup_test -a
+fi
 # Dump pages 0, 19, and 4096, using pin_user_pages:
 CATEGORY="gup_test" run_test ./gup_test -ct -F 0x1 0 19 0x1000
-
 CATEGORY="gup_test" run_test ./gup_longterm
 
 CATEGORY="userfaultfd" run_test ./uffd-unit-tests
-- 
2.40.1
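
For illustration, one cell of the matrix expands to an ordinary gup_test
invocation; per the comments in the hunk above, this cell means thp on,
gup-fast, write, shared, 512 pages:

  CATEGORY="gup_test" run_test ./gup_test -t -u -w -S -n 512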