From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Cc: linux-fbdev@vger.kernel.org, Jan Kara, kvm@vger.kernel.org,
	Dave Hansen, Dave Chinner, dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org, Matthew Wilcox, sparclinux@vger.kernel.org,
	Ira Weiny, ceph-devel@vger.kernel.org, devel@driverdev.osuosl.org,
	rds-devel@oss.oracle.com, linux-rdma@vger.kernel.org, x86@kernel.org,
	amd-gfx@lists.freedesktop.org, Christoph Hellwig, Jason Gunthorpe,
	xen-devel@lists.xenproject.org, devel@lists.orangefs.org,
	linux-media@vger.kernel.org, John Hubbard,
	intel-gfx@lists.freedesktop.org, linux-block@vger.kernel.org,
	Jérôme Glisse, linux-rpi-kernel@lists.infradead.org, Dan Williams,
	linux-arm-kernel@lists.infradead.org, linux-nfs@vger.kernel.org,
	netdev@vger.kernel.org, LKML, linux-xfs@vger.kernel.org,
	linux-crypto@vger.kernel.org, linux-fsdevel@vger.kernel.org
Date: Sun, 4 Aug 2019 15:48:42 -0700
Message-Id: <20190804224915.28669-2-jhubbard@nvidia.com>
In-Reply-To: <20190804224915.28669-1-jhubbard@nvidia.com>
References: <20190804224915.28669-1-jhubbard@nvidia.com>
Subject: [Xen-devel] [PATCH v2 01/34] mm/gup: add make_dirty arg to
 put_user_pages_dirty_lock()
X-Mailer: git-send-email 2.22.0

From: John Hubbard

Provide a more capable variation of put_user_pages_dirty_lock(), and
delete put_user_pages_dirty(). This is based on the following:

1. Lots of call sites become simpler if a bool is passed into
   put_user_page*(), instead of making the call site choose which
   put_user_page*() variant to call.

2. Christoph Hellwig's observation that set_page_dirty_lock() is
   usually correct, and set_page_dirty() is usually a bug, or at least
   questionable, within a put_user_page*() calling chain.

This leads to the following API choices:

    * put_user_pages_dirty_lock(page, npages, make_dirty)

    * There is no put_user_pages_dirty(). You have to hand code that,
      in the rare case that it's required.
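To make point 1 concrete, here is a minimal before/after sketch of a
typical call site (release_user_buffer_old/_new are invented names for
illustration only; the real conversions are in the diffs below):

#include <linux/mm.h>

/* Before this patch: each call site picks a put_user_page*() variant. */
static void release_user_buffer_old(struct page **pages,
				    unsigned long npages, bool dirty)
{
	if (dirty)
		put_user_pages_dirty_lock(pages, npages); /* old 2-arg form */
	else
		put_user_pages(pages, npages);
}

/* After this patch: pass the bool through; gup.c makes the choice. */
static void release_user_buffer_new(struct page **pages,
				    unsigned long npages, bool dirty)
{
	put_user_pages_dirty_lock(pages, npages, dirty);
}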
Reviewed-by: Christoph Hellwig
Cc: Matthew Wilcox
Cc: Jan Kara
Cc: Ira Weiny
Cc: Jason Gunthorpe
Signed-off-by: John Hubbard
Reviewed-by: Ira Weiny
---
 drivers/infiniband/core/umem.c             |   5 +-
 drivers/infiniband/hw/hfi1/user_pages.c    |   5 +-
 drivers/infiniband/hw/qib/qib_user_pages.c |  13 +--
 drivers/infiniband/hw/usnic/usnic_uiom.c   |   5 +-
 drivers/infiniband/sw/siw/siw_mem.c        |  19 +---
 include/linux/mm.h                         |   5 +-
 mm/gup.c                                   | 115 +++++++++------------
 7 files changed, 61 insertions(+), 106 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 08da840ed7ee..965cf9dea71a 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -54,10 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 
 	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
 		page = sg_page_iter_page(&sg_iter);
-		if (umem->writable && dirty)
-			put_user_pages_dirty_lock(&page, 1);
-		else
-			put_user_page(page);
+		put_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
 	}
 
 	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index b89a9b9aef7a..469acb961fbd 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -118,10 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	if (dirty)
-		put_user_pages_dirty_lock(p, npages);
-	else
-		put_user_pages(p, npages);
+	put_user_pages_dirty_lock(p, npages, dirty);
 
 	if (mm) { /* during close after signal, mm can be NULL */
 		atomic64_sub(npages, &mm->pinned_vm);
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index bfbfbb7e0ff4..26c1fb8d45cc 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -37,15 +37,6 @@
 
 #include "qib.h"
 
-static void __qib_release_user_pages(struct page **p, size_t num_pages,
-				     int dirty)
-{
-	if (dirty)
-		put_user_pages_dirty_lock(p, num_pages);
-	else
-		put_user_pages(p, num_pages);
-}
-
 /**
  * qib_map_page - a safety wrapper around pci_map_page()
  *
@@ -124,7 +115,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 
 	return 0;
 bail_release:
-	__qib_release_user_pages(p, got, 0);
+	put_user_pages_dirty_lock(p, got, false);
 bail:
 	atomic64_sub(num_pages, &current->mm->pinned_vm);
 	return ret;
@@ -132,7 +123,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 
 void qib_release_user_pages(struct page **p, size_t num_pages)
 {
-	__qib_release_user_pages(p, num_pages, 1);
+	put_user_pages_dirty_lock(p, num_pages, true);
 
 	/* during close after signal, mm can be NULL */
 	if (current->mm)
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 0b0237d41613..62e6ffa9ad78 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -75,10 +75,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			if (dirty)
-				put_user_pages_dirty_lock(&page, 1);
-			else
-				put_user_page(page);
+			put_user_pages_dirty_lock(&page, 1, dirty);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index 67171c82b0c4..1e197753bf2f 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -60,20 +60,6 @@ struct siw_mem *siw_mem_id2obj(struct siw_device *sdev, int stag_index)
 	return NULL;
 }
 
-static void siw_free_plist(struct siw_page_chunk *chunk, int num_pages,
-			   bool dirty)
-{
-	struct page **p = chunk->plist;
-
-	while (num_pages--) {
-		if (!PageDirty(*p) && dirty)
-			put_user_pages_dirty_lock(p, 1);
-		else
-			put_user_page(*p);
-		p++;
-	}
-}
-
 void siw_umem_release(struct siw_umem *umem, bool dirty)
 {
 	struct mm_struct *mm_s = umem->owning_mm;
@@ -82,8 +68,9 @@ void siw_umem_release(struct siw_umem *umem, bool dirty)
 	for (i = 0; num_pages; i++) {
 		int to_free = min_t(int, PAGES_PER_CHUNK, num_pages);
 
-		siw_free_plist(&umem->page_chunk[i], to_free,
-			       umem->writable && dirty);
+		put_user_pages_dirty_lock(umem->page_chunk[i].plist,
+					  to_free,
+					  umem->writable && dirty);
 		kfree(umem->page_chunk[i].plist);
 		num_pages -= to_free;
 	}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..9759b6a24420 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1057,8 +1057,9 @@ static inline void put_user_page(struct page *page)
 	put_page(page);
 }
 
-void put_user_pages_dirty(struct page **pages, unsigned long npages);
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty);
+
 void put_user_pages(struct page **pages, unsigned long npages);
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..7fefd7ab02c4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,85 +29,70 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
-typedef int (*set_dirty_func_t)(struct page *page);
-
-static void __put_user_pages_dirty(struct page **pages,
-				   unsigned long npages,
-				   set_dirty_func_t sdf)
-{
-	unsigned long index;
-
-	for (index = 0; index < npages; index++) {
-		struct page *page = compound_head(pages[index]);
-
-		/*
-		 * Checking PageDirty at this point may race with
-		 * clear_page_dirty_for_io(), but that's OK. Two key cases:
-		 *
-		 * 1) This code sees the page as already dirty, so it skips
-		 * the call to sdf(). That could happen because
-		 * clear_page_dirty_for_io() called page_mkclean(),
-		 * followed by set_page_dirty(). However, now the page is
-		 * going to get written back, which meets the original
-		 * intention of setting it dirty, so all is well:
-		 * clear_page_dirty_for_io() goes on to call
-		 * TestClearPageDirty(), and write the page back.
-		 *
-		 * 2) This code sees the page as clean, so it calls sdf().
-		 * The page stays dirty, despite being written back, so it
-		 * gets written back again in the next writeback cycle.
-		 * This is harmless.
-		 */
-		if (!PageDirty(page))
-			sdf(page);
-
-		put_user_page(page);
-	}
-}
-
 /**
- * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
- * @pages: array of pages to be marked dirty and released.
+ * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
+ * @pages: array of pages to be maybe marked dirty, and definitely released.
  * @npages: number of pages in the @pages array.
+ * @make_dirty: whether to mark the pages dirty
  *
  * "gup-pinned page" refers to a page that has had one of the get_user_pages()
  * variants called on that page.
  *
  * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
+ * compound page) dirty, if @make_dirty is true, and if the page was previously
+ * listed as clean. In any case, releases all pages using put_user_page(),
+ * possibly via put_user_pages(), for the non-dirty case.
  *
  * Please see the put_user_page() documentation for details.
  *
- * set_page_dirty(), which does not lock the page, is used here.
- * Therefore, it is the caller's responsibility to ensure that this is
- * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
+ * required, then the caller should a) verify that this is really correct,
+ * because _lock() is usually required, and b) hand code it:
+ * set_page_dirty(), put_user_page().
  *
  */
-void put_user_pages_dirty(struct page **pages, unsigned long npages)
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty)
 {
-	__put_user_pages_dirty(pages, npages, set_page_dirty);
-}
-EXPORT_SYMBOL(put_user_pages_dirty);
+	unsigned long index;
 
-/**
- * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
- * @pages: array of pages to be marked dirty and released.
- * @npages: number of pages in the @pages array.
- *
- * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
- *
- * Please see the put_user_page() documentation for details.
- *
- * This is just like put_user_pages_dirty(), except that it invokes
- * set_page_dirty_lock(), instead of set_page_dirty().
- *
- */
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
-{
-	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
+	/*
+	 * TODO: this can be optimized for huge pages: if a series of pages is
+	 * physically contiguous and part of the same compound page, then a
+	 * single operation to the head page should suffice.
+	 */
+
+	if (!make_dirty) {
+		put_user_pages(pages, npages);
+		return;
+	}
+
+	for (index = 0; index < npages; index++) {
+		struct page *page = compound_head(pages[index]);
+		/*
+		 * Checking PageDirty at this point may race with
+		 * clear_page_dirty_for_io(), but that's OK. Two key
+		 * cases:
+		 *
+		 * 1) This code sees the page as already dirty, so it
+		 * skips the call to set_page_dirty(). That could happen
+		 * because clear_page_dirty_for_io() called
+		 * page_mkclean(), followed by set_page_dirty().
+		 * However, now the page is going to get written back,
+		 * which meets the original intention of setting it
+		 * dirty, so all is well: clear_page_dirty_for_io() goes
+		 * on to call TestClearPageDirty(), and write the page
+		 * back.
+		 *
+		 * 2) This code sees the page as clean, so it calls
+		 * set_page_dirty(). The page stays dirty, despite being
+		 * written back, so it gets written back again in the
+		 * next writeback cycle. This is harmless.
+		 */
+		if (!PageDirty(page))
+			set_page_dirty_lock(page);
+		put_user_page(page);
+	}
 }
 EXPORT_SYMBOL(put_user_pages_dirty_lock);
 
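The kerneldoc above deliberately leaves the unlocked set_page_dirty()
case to the caller. A minimal sketch of that rare, hand-coded path,
mirroring the deleted __put_user_pages_dirty() (the caller is
hypothetical, and must have verified that the unlocked set_page_dirty()
is actually safe in its context):

#include <linux/mm.h>

static void example_put_pages_dirty_unlocked(struct page **pages,
					     unsigned long npages)
{
	unsigned long index;

	for (index = 0; index < npages; index++) {
		struct page *page = compound_head(pages[index]);

		/* Same PageDirty race reasoning as the comment above. */
		if (!PageDirty(page))
			set_page_dirty(page);	/* unlocked variant */
		put_user_page(page);
	}
}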
-- 
2.22.0
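A closing note on the TODO inside put_user_pages_dirty_lock(): one
possible shape for the huge-page optimization, purely as a sketch (the
helper name and the batching rule are assumptions, not part of this
series):

#include <linux/mm.h>

/*
 * Sketch only: dirty each compound head once, while still dropping one
 * gup reference per array entry.
 */
static void put_user_pages_dirty_lock_batched(struct page **pages,
					      unsigned long npages)
{
	unsigned long i = 0;

	while (i < npages) {
		struct page *head = compound_head(pages[i]);
		unsigned long j = i + 1;

		/* Extend the run over entries sharing this head page. */
		while (j < npages && compound_head(pages[j]) == head)
			j++;

		if (!PageDirty(head))
			set_page_dirty_lock(head);

		while (i < j)
			put_user_page(pages[i++]);
	}
}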