From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590246; cv=none; d=zoho.com; s=zohoarc; b=evB2dU8l/GiARMizvB/JbDmJp9CoUJnMkzIQ2v41d88oNzgY3DCOulJJdz/RvZHt1tUtvY3xikrz+psAoMhPs5ADKD+u4cvxFSJkiZuAdjZFsQEj2jJm8QKa169ciDS3RXJQhk2H+Z0+XBn8wEDMP6vJZLApI8QkiBBjGbVVqpc= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590246; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=Nkmv3ARzBbeGHsdlCL6CAYL2mFvvtiKYbPDXGCeoiWc=; b=e/mRKdvdMSkzwixKfDrA4WLBbv/PDr2SLGfDijkO63iQyMBqsdCrfXniA3lCzblDGBVhJhl2xDaZD9khbhwr8SOwT0sufAljJtX2mSBAi9CkJxOyzmfC7NuGlV0azE1b/knWu10sVTXq4p2HHhkNktReeZPAOWCvaQWdqVvPYpA= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590246549748.3887805855958; Tue, 12 Nov 2019 12:24:06 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUcgc-0002yq-N6; Tue, 12 Nov 2019 20:22:50 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUcgb-0002yk-GK for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:22:49 +0000 Received: from mail-qv1-xf43.google.com (unknown [2607:f8b0:4864:20::f43]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 31bed46a-058a-11ea-9631-bc764e2007e4; Tue, 12 Nov 2019 20:22:48 +0000 (UTC) Received: by mail-qv1-xf43.google.com with SMTP id d3so5366762qvs.11 for ; Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. 
[142.162.113.180]) by smtp.gmail.com with ESMTPSA id z17sm8848536qtq.69.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003ja-5T; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 31bed46a-058a-11ea-9631-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=vnS3z876J83ixoyjmv49DG9xbHQOY1MbChbRcHTvym4=; b=Dzk0IYPNoeuja6Z87jLZ16h5TFzbWhhaL2hE6e59eRJAr2o3x1k/qz1haRHmaKm2Oo pJdjmv04a60AokPpqfbEs8fcyiISwOqa6zHSTlPcqeOyWCS8kySDbonYwBGZimXEsiCr m+99Yx7WlBmR7rBCtzYugbBaq8RGC03XIHbn2PpO3V8jyiqo5Hk0w/HBZzsp2aMrUtNM 6LE16wMsLdbDUqIZRBITP0567Jo4IONU9HkH5/mvp6Sr+ybWOj9b1NnCn2oFxTupAE27 smAd6xHPhgykqaEmfbT/1zs/1tv2/o4ldY0ZS0sj1TBlKGbqpB11Kml+kU8JiiV0ASzB Lu1g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=vnS3z876J83ixoyjmv49DG9xbHQOY1MbChbRcHTvym4=; b=qy2svGRSyHQcfvWM9+hRNN4uMCFLxVcnzu3M1Bl2itBztrawsYDfweubGp0u/Zyli5 FKFes2mPoXbVr5W39D41z9u/w7vEXc8mfmWFqZThW7O+8Ymg/U82ytyj8slZMm6BQf2f 6C/Dy9YqDRJfLcKJxwQJ7L+NN8220iWWaI+gmowyqFRLhCI8zLU/Ts0NROAkLC03AxI2 HhMeUrO9c0POfRYj7u6VCwBpH6IYSCbKxd6T2d2lz9JxCgEtcBaGJVthdmi9tXsy4NzR nyB+Xd0ipLi3Z29aLOnEnqXJXmCTLQ4zkqVu4nwOH2vNga5JvdDAcH7BK8iBNT7OMLRZ mM0Q== X-Gm-Message-State: APjAAAVW7d/Y6Jqg6naNtTwU6400ufSdsLqxrgK4pZ9FgajnEtBFe005 t+vi706fhWOc9m1HlNZkyr7D+w== X-Google-Smtp-Source: APXvYqwt6TPTyhRT+Pf1utgCUQUY7yQF048IJVBt4Y/k4d6RD16npyce/49GYasfkdRZSZLp6DS/WQ== X-Received: by 2002:a05:6214:14ac:: with SMTP id bo12mr30993106qvb.67.1573590168353; Tue, 12 Nov 2019 12:22:48 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:18 -0400 Message-Id: <20191112202231.3856-2-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 01/14] mm/mmu_notifier: define the header pre-processor parts even if disabled X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe Now that we have KERNEL_HEADER_TEST all headers are generally compile tested, so relying on makefile tricks to avoid compiling code that depends on CONFIG_MMU_NOTIFIER is more annoying. Instead follow the usual pattern and provide most of the header with only the functions stubbed out when CONFIG_MMU_NOTIFIER is disabled. 
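As a rough sketch of that pattern (illustrative placeholder names only, not the real mmu_notifier declarations):

#include <linux/errno.h>

struct foo_notifier;	/* type stays visible in every configuration */

#ifdef CONFIG_FOO
int foo_notifier_register(struct foo_notifier *fn);	/* real version lives in foo.c */
#else
/* Feature compiled out: callers still build against an inline stub. */
static inline int foo_notifier_register(struct foo_notifier *fn)
{
	return -ENODEV;
}
#endif
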
This ensures code compiles no matter what the config setting is. While here, struct mmu_notifier_mm is private to mmu_notifier.c, move it. Reviewed-by: J=C3=A9r=C3=B4me Glisse Tested-by: Ralph Campbell Reviewed-by: John Hubbard Signed-off-by: Jason Gunthorpe Reviewed-by: Christoph Hellwig --- include/linux/mmu_notifier.h | 46 +++++++++++++----------------------- mm/mmu_notifier.c | 13 ++++++++++ 2 files changed, 30 insertions(+), 29 deletions(-) diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h index 1bd8e6a09a3c27..12bd603d318ce7 100644 --- a/include/linux/mmu_notifier.h +++ b/include/linux/mmu_notifier.h @@ -7,8 +7,9 @@ #include #include =20 +struct mmu_notifier_mm; struct mmu_notifier; -struct mmu_notifier_ops; +struct mmu_notifier_range; =20 /** * enum mmu_notifier_event - reason for the mmu notifier callback @@ -40,36 +41,8 @@ enum mmu_notifier_event { MMU_NOTIFY_SOFT_DIRTY, }; =20 -#ifdef CONFIG_MMU_NOTIFIER - -#ifdef CONFIG_LOCKDEP -extern struct lockdep_map __mmu_notifier_invalidate_range_start_map; -#endif - -/* - * The mmu notifier_mm structure is allocated and installed in - * mm->mmu_notifier_mm inside the mm_take_all_locks() protected - * critical section and it's released only when mm_count reaches zero - * in mmdrop(). - */ -struct mmu_notifier_mm { - /* all mmu notifiers registerd in this mm are queued in this list */ - struct hlist_head list; - /* to serialize the list modifications and hlist_unhashed */ - spinlock_t lock; -}; - #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0) =20 -struct mmu_notifier_range { - struct vm_area_struct *vma; - struct mm_struct *mm; - unsigned long start; - unsigned long end; - unsigned flags; - enum mmu_notifier_event event; -}; - struct mmu_notifier_ops { /* * Called either by mmu_notifier_unregister or when the mm is @@ -249,6 +222,21 @@ struct mmu_notifier { unsigned int users; }; =20 +#ifdef CONFIG_MMU_NOTIFIER + +#ifdef CONFIG_LOCKDEP +extern struct lockdep_map __mmu_notifier_invalidate_range_start_map; +#endif + +struct mmu_notifier_range { + struct vm_area_struct *vma; + struct mm_struct *mm; + unsigned long start; + unsigned long end; + unsigned flags; + enum mmu_notifier_event event; +}; + static inline int mm_has_notifiers(struct mm_struct *mm) { return unlikely(mm->mmu_notifier_mm); diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index 7fde88695f35d6..367670cfd02b7b 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -27,6 +27,19 @@ struct lockdep_map __mmu_notifier_invalidate_range_start= _map =3D { }; #endif =20 +/* + * The mmu notifier_mm structure is allocated and installed in + * mm->mmu_notifier_mm inside the mm_take_all_locks() protected + * critical section and it's released only when mm_count reaches zero + * in mmdrop(). 
+ */ +struct mmu_notifier_mm { + /* all mmu notifiers registered in this mm are queued in this list */ + struct hlist_head list; + /* to serialize the list modifications and hlist_unhashed */ + spinlock_t lock; +}; + /* * This function can't run concurrently against mmu_notifier_register * because mm->mm_users > 0 during mmu_notifier_register and exit_mmap --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590270; cv=none; d=zoho.com; s=zohoarc; b=R3JquvnKEcMKd4rQk/XHKidwib3t+idL8lIE6M9mUaH2aJkGR80te0fQpB2oPJPlhRVlXplQNYM6bP+ynOpyx3eI2UqEopLQZ3NnlSlRFouTwUqnpvh+zzpcXjd48+dOdGHVUIVVKNd+4zEHywflr0mZGNcAG9u+SyQoMysRnx0= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590270; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=PMsVSZoXzSM5id59Pqbu7FTgfJDcfh6Cv5AKFu4X8lQ=; b=A2AlA8eUFNtI9PbzPi8X55XA6AL8WMPubXRu6/DPcya35fLWO0UbyYNJ8JavbvSscbsrJ9932pzd3RRLMlvry/1lO3sbFF300s7TJcOY59zA7b9RmxfpXC/u5tC2a7rt7K6h0KJKPslmCmO+cFHq3A+Hybe2+5pGAU68HE6tzlk= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590270331625.0451162019339; Tue, 12 Nov 2019 12:24:30 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUch6-00038f-SH; Tue, 12 Nov 2019 20:23:20 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUch5-00037u-E0 for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:19 +0000 Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 3371312c-058a-11ea-984a-bc764e2007e4; Tue, 12 Nov 2019 20:22:51 +0000 (UTC) Received: by mail-qt1-x841.google.com with SMTP id y10so21247478qto.3 for ; Tue, 12 Nov 2019 12:22:51 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. 
[142.162.113.180]) by smtp.gmail.com with ESMTPSA id e17sm11976084qtk.65.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003jg-6u; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 3371312c-058a-11ea-984a-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=AohGCprkDkOOMUYunK/FtQ4kWdIHsBqP71f8TbLipXw=; b=CfIn8g3m3gJmBOSym4g6ANgmGqb0E39HTiSkdTTuMr/hLmR88KgXDzT9ShhwV/GFkj CV/NNpgC+CtKQxRHdfqyDYF9EQDvtqDwnkc7qcCqX66Zb0hrse8N8xqAvEh4Z7C8md3B LwrsmwpIS5J/B7Ww+SL2zRtWwJ/wj6JkNJvFQLTD3qunUa6EFXhWSiXH5uwus2vXqfjf 0rPU5ArUsuEnlGLBL/DgVmVHh7b62kLdHWKgVHAhrpbphs7Q+jzAUsNR9Q7kEOh+T0bC 0wj605m5TYYJqYFgcmtP+kDcCGCBcenuhv5EAYeDRnzSyXKokBxSXSSx/5UtdRMrof4p iAOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=AohGCprkDkOOMUYunK/FtQ4kWdIHsBqP71f8TbLipXw=; b=oqSRSSXWes09TpDhi5MpVuprkDFYqtHwEIj5XSwzR38EKfrw9tAiVMQNpoROP3Ck8C DVR0frGmEDknm3ytDkgE4axCXpvHg57Vqb7+rzA1056EcwyYtNHhMm5DNSNUQSVt7DWS HpB/W/OlS7gAfZQR3RiLvSTZKf5zKZ0WF7cTywqnxPiValeWAwvqOB3f9qeYMT0I9Veq 7O5Lnd01KjcmmblurtJjyPfnO2xBYFIXOUUyh8QvGfTMZbK2eEeElAJmJLNaswsVCpNU r7Vf8M8yhhizXuP05vUYPnU8dB1pie4asritITfr7ECRdLq0UfxEg8D//t+hXH6kDKPO acrg== X-Gm-Message-State: APjAAAUwefsCoJ9COQdTQ1SESUn8Kj2PsDtrfAyHDkTMXpQZo1Q7YYUC 3sVnrljP46J9pGVGKc4R6IRUKQ== X-Google-Smtp-Source: APXvYqyzRgnuxF6MTWdwAX3i+6gm7pQR3HJZHmzuGmXJFaM4kcxjZGjbdUgF3CJjBakscZI6XxWBuw== X-Received: by 2002:aed:26e2:: with SMTP id q89mr17779688qtd.391.1573590170602; Tue, 12 Nov 2019 12:22:50 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:19 -0400 Message-Id: <20191112202231.3856-3-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Philip Yang , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe Of the 13 users of mmu_notifiers, 8 of them use only invalidate_range_start/end() and immediately intersect the mmu_notifier_range with some kind of internal list of VAs. 4 use an interval tree (i915_gem, radeon_mn, umem_odp, hfi1). 
4 use a linked list of some kind (scif_dma, vhost, gntdev, hmm) And the remaining 5 either don't use invalidate_range_start() or do some special thing with it. It turns out that building a correct scheme with an interval tree is pretty complicated, particularly if the use case is synchronizing against another thread doing get_user_pages(). Many of these implementations have various subtle and difficult to fix races. This approach puts the interval tree as common code at the top of the mmu notifier call tree and implements a shareable locking scheme. It includes: - An interval tree tracking VA ranges, with per-range callbacks - A read/write locking scheme for the interval tree that avoids sleeping in the notifier path (for OOM killer) - A sequence counter based collision-retry locking scheme to tell device page fault that a VA range is being concurrently invalidated. This is based on various ideas: - hmm accumulates invalidated VA ranges and releases them when all invalidates are done, via active_invalidate_ranges count. This approach avoids having to intersect the interval tree twice (as umem_odp does) at the potential cost of a longer device page fault. - kvm/umem_odp use a sequence counter to drive the collision retry, via invalidate_seq - a deferred work todo list on unlock scheme like RTNL, via deferred_list. This makes adding/removing interval tree members more deterministic - seqlock, except this version makes the seqlock idea multi-holder on the write side by protecting it with active_invalidate_ranges and a spinlock To minimize MM overhead when only the interval tree is being used, the entire SRCU and hlist overheads are dropped using some simple branches. Similarly the interval tree overhead is dropped when in hlist mode. The overhead from the mandatory spinlock is broadly the same as most of existing users which already had a lock (or two) of some sort on the invalidation path. Acked-by: Christian K=C3=B6nig Tested-by: Philip Yang Tested-by: Ralph Campbell Reviewed-by: John Hubbard Signed-off-by: Jason Gunthorpe Reviewed-by: Christoph Hellwig --- include/linux/mmu_notifier.h | 101 +++++++ mm/Kconfig | 1 + mm/mmu_notifier.c | 552 +++++++++++++++++++++++++++++++++-- 3 files changed, 628 insertions(+), 26 deletions(-) diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h index 12bd603d318ce7..9e6caa8ecd1938 100644 --- a/include/linux/mmu_notifier.h +++ b/include/linux/mmu_notifier.h @@ -6,10 +6,12 @@ #include #include #include +#include =20 struct mmu_notifier_mm; struct mmu_notifier; struct mmu_notifier_range; +struct mmu_interval_notifier; =20 /** * enum mmu_notifier_event - reason for the mmu notifier callback @@ -32,6 +34,9 @@ struct mmu_notifier_range; * access flags). User should soft dirty the page in the end callback to m= ake * sure that anyone relying on soft dirtyness catch pages that might be wr= itten * through non CPU mappings. + * + * @MMU_NOTIFY_RELEASE: used during mmu_interval_notifier invalidate to si= gnal + * that the mm refcount is zero and the range is no longer accessible. */ enum mmu_notifier_event { MMU_NOTIFY_UNMAP =3D 0, @@ -39,6 +44,7 @@ enum mmu_notifier_event { MMU_NOTIFY_PROTECTION_VMA, MMU_NOTIFY_PROTECTION_PAGE, MMU_NOTIFY_SOFT_DIRTY, + MMU_NOTIFY_RELEASE, }; =20 #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0) @@ -222,6 +228,26 @@ struct mmu_notifier { unsigned int users; }; =20 +/** + * struct mmu_interval_notifier_ops + * @invalidate: Upon return the caller must stop using any SPTEs within th= is + * range. 
This function can sleep. Return false only if sleep= ing + * was required but mmu_notifier_range_blockable(range) is fa= lse. + */ +struct mmu_interval_notifier_ops { + bool (*invalidate)(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq); +}; + +struct mmu_interval_notifier { + struct interval_tree_node interval_tree; + const struct mmu_interval_notifier_ops *ops; + struct mm_struct *mm; + struct hlist_node deferred_item; + unsigned long invalidate_seq; +}; + #ifdef CONFIG_MMU_NOTIFIER =20 #ifdef CONFIG_LOCKDEP @@ -263,6 +289,81 @@ extern int __mmu_notifier_register(struct mmu_notifier= *mn, struct mm_struct *mm); extern void mmu_notifier_unregister(struct mmu_notifier *mn, struct mm_struct *mm); + +unsigned long mmu_interval_read_begin(struct mmu_interval_notifier *mni); +int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni, + struct mm_struct *mm, unsigned long start, + unsigned long length, + const struct mmu_interval_notifier_ops *ops); +int mmu_interval_notifier_insert_locked( + struct mmu_interval_notifier *mni, struct mm_struct *mm, + unsigned long start, unsigned long length, + const struct mmu_interval_notifier_ops *ops); +void mmu_interval_notifier_remove(struct mmu_interval_notifier *mni); + +/** + * mmu_interval_set_seq - Save the invalidation sequence + * @mni - The mni passed to invalidate + * @cur_seq - The cur_seq passed to the invalidate() callback + * + * This must be called unconditionally from the invalidate callback of a + * struct mmu_interval_notifier_ops under the same lock that is used to ca= ll + * mmu_interval_read_retry(). It updates the sequence number for later use= by + * mmu_interval_read_retry(). The provided cur_seq will always be odd. + * + * If the caller does not call mmu_interval_read_begin() or + * mmu_interval_read_retry() then this call is not required. + */ +static inline void mmu_interval_set_seq(struct mmu_interval_notifier *mni, + unsigned long cur_seq) +{ + WRITE_ONCE(mni->invalidate_seq, cur_seq); +} + +/** + * mmu_interval_read_retry - End a read side critical section against a VA= range + * mni: The range + * seq: The return of the paired mmu_interval_read_begin() + * + * This MUST be called under a user provided lock that is also held + * unconditionally by op->invalidate() when it calls mmu_interval_set_seq(= ). + * + * Each call should be paired with a single mmu_interval_read_begin() and + * should be used to conclude the read side. + * + * Returns true if an invalidation collided with this critical section, and + * the caller should retry. + */ +static inline bool mmu_interval_read_retry(struct mmu_interval_notifier *m= ni, + unsigned long seq) +{ + return mni->invalidate_seq !=3D seq; +} + +/** + * mmu_interval_check_retry - Test if a collision has occurred + * mni: The range + * seq: The return of the matching mmu_interval_read_begin() + * + * This can be used in the critical section between mmu_interval_read_begi= n() + * and mmu_interval_read_retry(). A return of true indicates an invalidat= ion + * has collided with this critical region and a future + * mmu_interval_read_retry() will return true. + * + * False is not reliable and only suggests a collision may not have + * occured. It can be called many times and does not have to hold the user + * provided lock. + * + * This call can be used as part of loops and other expensive operations to + * expedite a retry. 
+ */ +static inline bool mmu_interval_check_retry(struct mmu_interval_notifier *= mni, + unsigned long seq) +{ + /* Pairs with the WRITE_ONCE in mmu_interval_set_seq() */ + return READ_ONCE(mni->invalidate_seq) !=3D seq; +} + extern void __mmu_notifier_mm_destroy(struct mm_struct *mm); extern void __mmu_notifier_release(struct mm_struct *mm); extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm, diff --git a/mm/Kconfig b/mm/Kconfig index a5dae9a7eb510a..d0b5046d9aeffd 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -284,6 +284,7 @@ config VIRT_TO_BUS config MMU_NOTIFIER bool select SRCU + select INTERVAL_TREE =20 config KSM bool "Enable KSM for page merging" diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index 367670cfd02b7b..8ccafb12f56228 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -36,10 +37,253 @@ struct lockdep_map __mmu_notifier_invalidate_range_sta= rt_map =3D { struct mmu_notifier_mm { /* all mmu notifiers registered in this mm are queued in this list */ struct hlist_head list; + bool has_itree; /* to serialize the list modifications and hlist_unhashed */ spinlock_t lock; + unsigned long invalidate_seq; + unsigned long active_invalidate_ranges; + struct rb_root_cached itree; + wait_queue_head_t wq; + struct hlist_head deferred_list; }; =20 +/* + * This is a collision-retry read-side/write-side 'lock', a lot like a + * seqcount, however this allows multiple write-sides to hold it at + * once. Conceptually the write side is protecting the values of the PTEs = in + * this mm, such that PTES cannot be read into SPTEs (shadow PTEs) while a= ny + * writer exists. + * + * Note that the core mm creates nested invalidate_range_start()/end() reg= ions + * within the same thread, and runs invalidate_range_start()/end() in para= llel + * on multiple CPUs. This is designed to not reduce concurrency or block + * progress on the mm side. + * + * As a secondary function, holding the full write side also serves to pre= vent + * writers for the itree, this is an optimization to avoid extra locking + * during invalidate_range_start/end notifiers. + * + * The write side has two states, fully excluded: + * - mm->active_invalidate_ranges !=3D 0 + * - mnn->invalidate_seq & 1 =3D=3D True (odd) + * - some range on the mm_struct is being invalidated + * - the itree is not allowed to change + * + * And partially excluded: + * - mm->active_invalidate_ranges !=3D 0 + * - mnn->invalidate_seq & 1 =3D=3D False (even) + * - some range on the mm_struct is being invalidated + * - the itree is allowed to change + * + * Operations on mmu_notifier_mm->invalidate_seq (under spinlock): + * seq |=3D 1 # Begin writing + * seq++ # Release the writing state + * seq & 1 # True if a writer exists + * + * The later state avoids some expensive work on inv_end in the common cas= e of + * no mni monitoring the VA. 
+ */ +static bool mn_itree_is_invalidating(struct mmu_notifier_mm *mmn_mm) +{ + lockdep_assert_held(&mmn_mm->lock); + return mmn_mm->invalidate_seq & 1; +} + +static struct mmu_interval_notifier * +mn_itree_inv_start_range(struct mmu_notifier_mm *mmn_mm, + const struct mmu_notifier_range *range, + unsigned long *seq) +{ + struct interval_tree_node *node; + struct mmu_interval_notifier *res =3D NULL; + + spin_lock(&mmn_mm->lock); + mmn_mm->active_invalidate_ranges++; + node =3D interval_tree_iter_first(&mmn_mm->itree, range->start, + range->end - 1); + if (node) { + mmn_mm->invalidate_seq |=3D 1; + res =3D container_of(node, struct mmu_interval_notifier, + interval_tree); + } + + *seq =3D mmn_mm->invalidate_seq; + spin_unlock(&mmn_mm->lock); + return res; +} + +static struct mmu_interval_notifier * +mn_itree_inv_next(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range) +{ + struct interval_tree_node *node; + + node =3D interval_tree_iter_next(&mni->interval_tree, range->start, + range->end - 1); + if (!node) + return NULL; + return container_of(node, struct mmu_interval_notifier, interval_tree); +} + +static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm) +{ + struct mmu_interval_notifier *mni; + struct hlist_node *next; + bool need_wake =3D false; + + spin_lock(&mmn_mm->lock); + if (--mmn_mm->active_invalidate_ranges || + !mn_itree_is_invalidating(mmn_mm)) { + spin_unlock(&mmn_mm->lock); + return; + } + + /* Make invalidate_seq even */ + mmn_mm->invalidate_seq++; + need_wake =3D true; + + /* + * The inv_end incorporates a deferred mechanism like + * rtnl_unlock(). Adds and removes are queued until the final inv_end + * happens then they are progressed. This arrangement for tree updates + * is used to avoid using a blocking lock during + * invalidate_range_start. + */ + hlist_for_each_entry_safe(mni, next, &mmn_mm->deferred_list, + deferred_item) { + if (RB_EMPTY_NODE(&mni->interval_tree.rb)) + interval_tree_insert(&mni->interval_tree, + &mmn_mm->itree); + else + interval_tree_remove(&mni->interval_tree, + &mmn_mm->itree); + hlist_del(&mni->deferred_item); + } + spin_unlock(&mmn_mm->lock); + + /* + * TODO: Since we already have a spinlock above, this would be faster + * as wake_up_q + */ + if (need_wake) + wake_up_all(&mmn_mm->wq); +} + +/** + * mmu_interval_read_begin - Begin a read side critical section against a = VA + * range + * mni: The range to use + * + * mmu_iterval_read_begin()/mmu_iterval_read_retry() implement a + * collision-retry scheme similar to seqcount for the VA range under mni. = If + * the mm invokes invalidation during the critical section then + * mmu_interval_read_retry() will return true. + * + * This is useful to obtain shadow PTEs where teardown or setup of the SPT= Es + * require a blocking context. The critical region formed by this can sle= ep, + * and the required 'user_lock' can also be a sleeping lock. + * + * The caller is required to provide a 'user_lock' to serialize both teard= own + * and setup. + * + * The return value should be passed to mmu_interval_read_retry(). + */ +unsigned long mmu_interval_read_begin(struct mmu_interval_notifier *mni) +{ + struct mmu_notifier_mm *mmn_mm =3D mni->mm->mmu_notifier_mm; + unsigned long seq; + bool is_invalidating; + + /* + * If the mni has a different seq value under the user_lock than we + * started with then it has collided. + * + * If the mni currently has the same seq value as the mmn_mm seq, then + * it is currently between invalidate_start/end and is colliding. 
+ * + * The locking looks broadly like this: + * mn_tree_invalidate_start(): mmu_interval_read_begin(): + * spin_lock + * seq =3D READ_ONCE(mni->invali= date_seq); + * seq =3D=3D mmn_mm->invalidate= _seq + * spin_unlock + * spin_lock + * seq =3D ++mmn_mm->invalidate_seq + * spin_unlock + * op->invalidate_range(): + * user_lock + * mmu_interval_set_seq() + * mni->invalidate_seq =3D seq + * user_unlock + * + * [Required: mmu_interval_read_retry() =3D=3D t= rue] + * + * mn_itree_inv_end(): + * spin_lock + * seq =3D ++mmn_mm->invalidate_seq + * spin_unlock + * + * user_lock + * mmu_interval_read_retry(): + * mni->invalidate_seq !=3D seq + * user_unlock + * + * Barriers are not needed here as any races here are closed by an + * eventual mmu_interval_read_retry(), which provides a barrier via the + * user_lock. + */ + spin_lock(&mmn_mm->lock); + /* Pairs with the WRITE_ONCE in mmu_interval_set_seq() */ + seq =3D READ_ONCE(mni->invalidate_seq); + is_invalidating =3D seq =3D=3D mmn_mm->invalidate_seq; + spin_unlock(&mmn_mm->lock); + + /* + * mni->invalidate_seq must always be set to an odd value via + * mmu_interval_set_seq() using the provided cur_seq from + * mn_itree_inv_start_range(). This ensures that if seq does wrap we + * will always clear the below sleep in some reasonable time as + * mmn_mm->invalidate_seq is even in the idle state. + */ + lock_map_acquire(&__mmu_notifier_invalidate_range_start_map); + lock_map_release(&__mmu_notifier_invalidate_range_start_map); + if (is_invalidating) + wait_event(mmn_mm->wq, + READ_ONCE(mmn_mm->invalidate_seq) !=3D seq); + + /* + * Notice that mmu_interval_read_retry() can already be true at this + * point, avoiding loops here allows the caller to provide a global + * time bound. + */ + + return seq; +} +EXPORT_SYMBOL_GPL(mmu_interval_read_begin); + +static void mn_itree_release(struct mmu_notifier_mm *mmn_mm, + struct mm_struct *mm) +{ + struct mmu_notifier_range range =3D { + .flags =3D MMU_NOTIFIER_RANGE_BLOCKABLE, + .event =3D MMU_NOTIFY_RELEASE, + .mm =3D mm, + .start =3D 0, + .end =3D ULONG_MAX, + }; + struct mmu_interval_notifier *mni; + unsigned long cur_seq; + bool ret; + + for (mni =3D mn_itree_inv_start_range(mmn_mm, &range, &cur_seq); mni; + mni =3D mn_itree_inv_next(mni, &range)) { + ret =3D mni->ops->invalidate(mni, &range, cur_seq); + WARN_ON(!ret); + } + + mn_itree_inv_end(mmn_mm); +} + /* * This function can't run concurrently against mmu_notifier_register * because mm->mm_users > 0 during mmu_notifier_register and exit_mmap @@ -52,7 +296,8 @@ struct mmu_notifier_mm { * can't go away from under us as exit_mmap holds an mm_count pin * itself. */ -void __mmu_notifier_release(struct mm_struct *mm) +static void mn_hlist_release(struct mmu_notifier_mm *mmn_mm, + struct mm_struct *mm) { struct mmu_notifier *mn; int id; @@ -62,7 +307,7 @@ void __mmu_notifier_release(struct mm_struct *mm) * ->release returns. 
*/ id =3D srcu_read_lock(&srcu); - hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) + hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist) /* * If ->release runs before mmu_notifier_unregister it must be * handled, as it's the only way for the driver to flush all @@ -72,10 +317,9 @@ void __mmu_notifier_release(struct mm_struct *mm) if (mn->ops->release) mn->ops->release(mn, mm); =20 - spin_lock(&mm->mmu_notifier_mm->lock); - while (unlikely(!hlist_empty(&mm->mmu_notifier_mm->list))) { - mn =3D hlist_entry(mm->mmu_notifier_mm->list.first, - struct mmu_notifier, + spin_lock(&mmn_mm->lock); + while (unlikely(!hlist_empty(&mmn_mm->list))) { + mn =3D hlist_entry(mmn_mm->list.first, struct mmu_notifier, hlist); /* * We arrived before mmu_notifier_unregister so @@ -85,7 +329,7 @@ void __mmu_notifier_release(struct mm_struct *mm) */ hlist_del_init_rcu(&mn->hlist); } - spin_unlock(&mm->mmu_notifier_mm->lock); + spin_unlock(&mmn_mm->lock); srcu_read_unlock(&srcu, id); =20 /* @@ -100,6 +344,17 @@ void __mmu_notifier_release(struct mm_struct *mm) synchronize_srcu(&srcu); } =20 +void __mmu_notifier_release(struct mm_struct *mm) +{ + struct mmu_notifier_mm *mmn_mm =3D mm->mmu_notifier_mm; + + if (mmn_mm->has_itree) + mn_itree_release(mmn_mm, mm); + + if (!hlist_empty(&mmn_mm->list)) + mn_hlist_release(mmn_mm, mm); +} + /* * If no young bitflag is supported by the hardware, ->clear_flush_young c= an * unmap the address and return 1 or 0 depending if the mapping previously @@ -172,14 +427,43 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, = unsigned long address, srcu_read_unlock(&srcu, id); } =20 -int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) +static int mn_itree_invalidate(struct mmu_notifier_mm *mmn_mm, + const struct mmu_notifier_range *range) +{ + struct mmu_interval_notifier *mni; + unsigned long cur_seq; + + for (mni =3D mn_itree_inv_start_range(mmn_mm, range, &cur_seq); mni; + mni =3D mn_itree_inv_next(mni, range)) { + bool ret; + + ret =3D mni->ops->invalidate(mni, range, cur_seq); + if (!ret) { + if (WARN_ON(mmu_notifier_range_blockable(range))) + continue; + goto out_would_block; + } + } + return 0; + +out_would_block: + /* + * On -EAGAIN the non-blocking caller is not allowed to call + * invalidate_range_end() + */ + mn_itree_inv_end(mmn_mm); + return -EAGAIN; +} + +static int mn_hlist_invalidate_range_start(struct mmu_notifier_mm *mmn_mm, + struct mmu_notifier_range *range) { struct mmu_notifier *mn; int ret =3D 0; int id; =20 id =3D srcu_read_lock(&srcu); - hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) { + hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist) { if (mn->ops->invalidate_range_start) { int _ret; =20 @@ -203,15 +487,30 @@ int __mmu_notifier_invalidate_range_start(struct mmu_= notifier_range *range) return ret; } =20 -void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range, - bool only_end) +int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) +{ + struct mmu_notifier_mm *mmn_mm =3D range->mm->mmu_notifier_mm; + int ret; + + if (mmn_mm->has_itree) { + ret =3D mn_itree_invalidate(mmn_mm, range); + if (ret) + return ret; + } + if (!hlist_empty(&mmn_mm->list)) + return mn_hlist_invalidate_range_start(mmn_mm, range); + return 0; +} + +static void mn_hlist_invalidate_end(struct mmu_notifier_mm *mmn_mm, + struct mmu_notifier_range *range, + bool only_end) { struct mmu_notifier *mn; int id; =20 - lock_map_acquire(&__mmu_notifier_invalidate_range_start_map); id =3D 
srcu_read_lock(&srcu); - hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) { + hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist) { /* * Call invalidate_range here too to avoid the need for the * subsystem of having to register an invalidate_range_end @@ -238,6 +537,19 @@ void __mmu_notifier_invalidate_range_end(struct mmu_no= tifier_range *range, } } srcu_read_unlock(&srcu, id); +} + +void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range, + bool only_end) +{ + struct mmu_notifier_mm *mmn_mm =3D range->mm->mmu_notifier_mm; + + lock_map_acquire(&__mmu_notifier_invalidate_range_start_map); + if (mmn_mm->has_itree) + mn_itree_inv_end(mmn_mm); + + if (!hlist_empty(&mmn_mm->list)) + mn_hlist_invalidate_end(mmn_mm, range, only_end); lock_map_release(&__mmu_notifier_invalidate_range_start_map); } =20 @@ -256,8 +568,9 @@ void __mmu_notifier_invalidate_range(struct mm_struct *= mm, } =20 /* - * Same as mmu_notifier_register but here the caller must hold the - * mmap_sem in write mode. + * Same as mmu_notifier_register but here the caller must hold the mmap_se= m in + * write mode. A NULL mn signals the notifier is being registered for itree + * mode. */ int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm) { @@ -274,9 +587,6 @@ int __mmu_notifier_register(struct mmu_notifier *mn, st= ruct mm_struct *mm) fs_reclaim_release(GFP_KERNEL); } =20 - mn->mm =3D mm; - mn->users =3D 1; - if (!mm->mmu_notifier_mm) { /* * kmalloc cannot be called under mm_take_all_locks(), but we @@ -284,21 +594,22 @@ int __mmu_notifier_register(struct mmu_notifier *mn, = struct mm_struct *mm) * the write side of the mmap_sem. */ mmu_notifier_mm =3D - kmalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL); + kzalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL); if (!mmu_notifier_mm) return -ENOMEM; =20 INIT_HLIST_HEAD(&mmu_notifier_mm->list); spin_lock_init(&mmu_notifier_mm->lock); + mmu_notifier_mm->invalidate_seq =3D 2; + mmu_notifier_mm->itree =3D RB_ROOT_CACHED; + init_waitqueue_head(&mmu_notifier_mm->wq); + INIT_HLIST_HEAD(&mmu_notifier_mm->deferred_list); } =20 ret =3D mm_take_all_locks(mm); if (unlikely(ret)) goto out_clean; =20 - /* Pairs with the mmdrop in mmu_notifier_unregister_* */ - mmgrab(mm); - /* * Serialize the update against mmu_notifier_unregister. A * side note: mmu_notifier_release can't run concurrently with @@ -306,13 +617,28 @@ int __mmu_notifier_register(struct mmu_notifier *mn, = struct mm_struct *mm) * current->mm or explicitly with get_task_mm() or similar). * We can't race against any other mmu notifier method either * thanks to mm_take_all_locks(). + * + * release semantics on the initialization of the mmu_notifier_mm's + * contents are provided for unlocked readers. acquire can only be + * used while holding the mmgrab or mmget, and is safe because once + * created the mmu_notififer_mm is not freed until the mm is + * destroyed. As above, users holding the mmap_sem or one of the + * mm_take_all_locks() do not need to use acquire semantics. 
*/ if (mmu_notifier_mm) - mm->mmu_notifier_mm =3D mmu_notifier_mm; + smp_store_release(&mm->mmu_notifier_mm, mmu_notifier_mm); =20 - spin_lock(&mm->mmu_notifier_mm->lock); - hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list); - spin_unlock(&mm->mmu_notifier_mm->lock); + if (mn) { + /* Pairs with the mmdrop in mmu_notifier_unregister_* */ + mmgrab(mm); + mn->mm =3D mm; + mn->users =3D 1; + + spin_lock(&mm->mmu_notifier_mm->lock); + hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list); + spin_unlock(&mm->mmu_notifier_mm->lock); + } else + mm->mmu_notifier_mm->has_itree =3D true; =20 mm_drop_all_locks(mm); BUG_ON(atomic_read(&mm->mm_users) <=3D 0); @@ -529,6 +855,180 @@ void mmu_notifier_put(struct mmu_notifier *mn) } EXPORT_SYMBOL_GPL(mmu_notifier_put); =20 +static int __mmu_interval_notifier_insert( + struct mmu_interval_notifier *mni, struct mm_struct *mm, + struct mmu_notifier_mm *mmn_mm, unsigned long start, + unsigned long length, const struct mmu_interval_notifier_ops *ops) +{ + mni->mm =3D mm; + mni->ops =3D ops; + RB_CLEAR_NODE(&mni->interval_tree.rb); + mni->interval_tree.start =3D start; + /* + * Note that the representation of the intervals in the interval tree + * considers the ending point as contained in the interval. + */ + if (length =3D=3D 0 || + check_add_overflow(start, length - 1, &mni->interval_tree.last)) + return -EOVERFLOW; + + /* Must call with a mmget() held */ + if (WARN_ON(atomic_read(&mm->mm_count) <=3D 0)) + return -EINVAL; + + /* pairs with mmdrop in mmu_interval_notifier_remove() */ + mmgrab(mm); + + /* + * If some invalidate_range_start/end region is going on in parallel + * we don't know what VA ranges are affected, so we must assume this + * new range is included. + * + * If the itree is invalidating then we are not allowed to change + * it. Retrying until invalidation is done is tricky due to the + * possibility for live lock, instead defer the add to + * mn_itree_inv_end() so this algorithm is deterministic. + * + * In all cases the value for the mni->mr_invalidate_seq should be + * odd, see mmu_interval_read_begin() + */ + spin_lock(&mmn_mm->lock); + if (mmn_mm->active_invalidate_ranges) { + if (mn_itree_is_invalidating(mmn_mm)) + hlist_add_head(&mni->deferred_item, + &mmn_mm->deferred_list); + else { + mmn_mm->invalidate_seq |=3D 1; + interval_tree_insert(&mni->interval_tree, + &mmn_mm->itree); + } + mni->invalidate_seq =3D mmn_mm->invalidate_seq; + } else { + WARN_ON(mn_itree_is_invalidating(mmn_mm)); + /* + * The starting seq for a mni not under invalidation should be + * odd, not equal to the current invalidate_seq and + * invalidate_seq should not 'wrap' to the new seq any time + * soon. + */ + mni->invalidate_seq =3D mmn_mm->invalidate_seq - 1; + interval_tree_insert(&mni->interval_tree, &mmn_mm->itree); + } + spin_unlock(&mmn_mm->lock); + return 0; +} + +/** + * mmu_interval_notifier_insert - Insert an interval notifier + * @mni: Interval notifier to register + * @start: Starting virtual address to monitor + * @length: Length of the range to monitor + * @mm : mm_struct to attach to + * + * This function subscribes the interval notifier for notifications from t= he + * mm. Upon return the ops related to mmu_interval_notifier will be called + * whenever an event that intersects with the given range occurs. + * + * Upon return the range_notifier may not be present in the interval tree = yet. + * The caller must use the normal interval notifier read flow via + * mmu_interval_read_begin() to establish SPTEs for this range. 
+ */ +int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni, + struct mm_struct *mm, unsigned long start, + unsigned long length, + const struct mmu_interval_notifier_ops *ops) +{ + struct mmu_notifier_mm *mmn_mm; + int ret; + + might_lock(&mm->mmap_sem); + + mmn_mm =3D smp_load_acquire(&mm->mmu_notifier_mm); + if (!mmn_mm || !mmn_mm->has_itree) { + ret =3D mmu_notifier_register(NULL, mm); + if (ret) + return ret; + mmn_mm =3D mm->mmu_notifier_mm; + } + return __mmu_interval_notifier_insert(mni, mm, mmn_mm, start, length, + ops); +} +EXPORT_SYMBOL_GPL(mmu_interval_notifier_insert); + +int mmu_interval_notifier_insert_locked( + struct mmu_interval_notifier *mni, struct mm_struct *mm, + unsigned long start, unsigned long length, + const struct mmu_interval_notifier_ops *ops) +{ + struct mmu_notifier_mm *mmn_mm; + int ret; + + lockdep_assert_held_write(&mm->mmap_sem); + + mmn_mm =3D mm->mmu_notifier_mm; + if (!mmn_mm || !mmn_mm->has_itree) { + ret =3D __mmu_notifier_register(NULL, mm); + if (ret) + return ret; + mmn_mm =3D mm->mmu_notifier_mm; + } + return __mmu_interval_notifier_insert(mni, mm, mmn_mm, start, length, + ops); +} +EXPORT_SYMBOL_GPL(mmu_interval_notifier_insert_locked); + +/** + * mmu_interval_notifier_remove - Remove a interval notifier + * @mni: Interval notifier to unregister + * + * This function must be paired with mmu_interval_notifier_insert(). It ca= nnot be + * called from any ops callback. + * + * Once this returns ops callbacks are no longer running on other CPUs and + * will not be called in future. + */ +void mmu_interval_notifier_remove(struct mmu_interval_notifier *mni) +{ + struct mm_struct *mm =3D mni->mm; + struct mmu_notifier_mm *mmn_mm =3D mm->mmu_notifier_mm; + unsigned long seq =3D 0; + + might_sleep(); + + spin_lock(&mmn_mm->lock); + if (mn_itree_is_invalidating(mmn_mm)) { + /* + * remove is being called after insert put this on the + * deferred list, but before the deferred list was processed. + */ + if (RB_EMPTY_NODE(&mni->interval_tree.rb)) { + hlist_del(&mni->deferred_item); + } else { + hlist_add_head(&mni->deferred_item, + &mmn_mm->deferred_list); + seq =3D mmn_mm->invalidate_seq; + } + } else { + WARN_ON(RB_EMPTY_NODE(&mni->interval_tree.rb)); + interval_tree_remove(&mni->interval_tree, &mmn_mm->itree); + } + spin_unlock(&mmn_mm->lock); + + /* + * The possible sleep on progress in the invalidation requires the + * caller not hold any locks held by invalidation callbacks. 
+ */ + lock_map_acquire(&__mmu_notifier_invalidate_range_start_map); + lock_map_release(&__mmu_notifier_invalidate_range_start_map); + if (seq) + wait_event(mmn_mm->wq, + READ_ONCE(mmn_mm->invalidate_seq) !=3D seq); + + /* pairs with mmgrab in mmu_interval_notifier_insert() */ + mmdrop(mm); +} +EXPORT_SYMBOL_GPL(mmu_interval_notifier_remove); + /** * mmu_notifier_synchronize - Ensure all mmu_notifiers are freed * --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590248; cv=none; d=zoho.com; s=zohoarc; b=H/dWGCfXU19xBMW7LXVSm9Bf+yYCXU8avRYmDGWQPh5GahdENLCmpAYBubg9SKpZOr/5WXaAEF4B7i3AsKTL8Qh/SmrOSRlEuqBudVb5qtaxIO7xtWR8N9yD1AdIKbK9qy1BbhigD/Xf5apKAmMr+aurMj8pqP+O1diGiZOEUAE= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590248; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=fFWNhAL2qURMYvcIS0pRIpF8x25LPKcERgktdKi9WiM=; b=NmUXsRhku7iyxiNEAF7p3l68PU/onuWjsjB6fKofIKy+B5dBlR1VioPGbNp9qBxVEtkW9d8E8QISPtle2EsgjR/1aguXcFE/tY2JCDmyAZD98ws3tqnHOXm1nlhw7SP0B/sD5wM3GLSplf2CqZmmefR6u5914pvG1u9y9tWbYqU= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590248175404.3020822485935; Tue, 12 Nov 2019 12:24:08 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUcgr-00032O-Mn; Tue, 12 Nov 2019 20:23:05 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUcgq-000323-DF for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:04 +0000 Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 32a53978-058a-11ea-b678-bc764e2007e4; Tue, 12 Nov 2019 20:22:50 +0000 (UTC) Received: by mail-qk1-x744.google.com with SMTP id 15so15687705qkh.6 for ; Tue, 12 Nov 2019 12:22:50 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. 
[142.162.113.180]) by smtp.gmail.com with ESMTPSA id p33sm13643736qtf.80.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003jm-8M; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 32a53978-058a-11ea-b678-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=RW58zr8YTNVHuTBfIVxZyU6EJd/oHLVvWKcypECtMjs=; b=mSX+wJ+8VUX1382IHhLVx6fBdAWs6gXvU0I32j+wK2yrM6ZkKGtovH1/NPjehZ8ovE 1T0Z0w4CHSgxOjOWg78rUMLlihvA2VQuqm44HX3C/1ZLAMQjsARlaHNBCW9gz6N48wKJ y9PM2onSIWQgFDrhE8sOQPU0nJxnAlm4dGvf735qGtY5kABa7ifc2bFJa2gjCHgJHDWB H9uN853ltamgQii1TxpifGWrSt8UcwbEHZ/zxACuyEsslH7wGCXdsqUB6pj/D0T30D1q 5KfZ1V6veRSoxZqHsr/Z2Op+C435uIf2bvVnVSct0GKZjzoJD+J9DajjzRnO10FRijpY fRpQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=RW58zr8YTNVHuTBfIVxZyU6EJd/oHLVvWKcypECtMjs=; b=qbig3C0YzO9ZAy9n5zNE+G667tKyBzIF/OAjZW6wEYfxhuHD2pVJsJ4VbusZ9j3NwP no9FqQk2ISWu/EHVgOE2RBrrgyz4Srx1Lc7//wiLJG2AhYgUSrrwbF8EPNUtRH/xCp7g OBTYJOyWnmaFuxVAP+ru0a8G77UuFkWcAxcIrOOjXn1aAqu5b6H+5ZgYrFQqTHct962u nZ8u8zg1Nd9thzelamiXDUf23vz7yIuiyUN2qpZZyNbb8R8sK5G05xiChLLukF5mW6lF 3mmAdNOZwDpMA5VAxcDKNS5ZIdtiEO8tiFI/GbXuA7oHl/ETT5/Syr/ZAlvw5a3mEpB/ YcCA== X-Gm-Message-State: APjAAAUPQllgvZwVQffQ7NfIrRviiVwKBvSMbXN/1XMK4eSU+HjTfOUZ mE8/5OrXcYRcTZNjA64P9qdqdg== X-Google-Smtp-Source: APXvYqwQcC+LL9E9LKfiZNHrcVJ95fvA4RxWhxexVl340jx3LOfHzjk66AKsaYtFFr6K39s9sLHHzA== X-Received: by 2002:a05:620a:110f:: with SMTP id o15mr14029771qkk.127.1573590169690; Tue, 12 Nov 2019 12:22:49 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:20 -0400 Message-Id: <20191112202231.3856-4-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 03/14] mm/hmm: allow hmm_range to be used with a mmu_interval_notifier or hmm_mirror X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Philip Yang , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe hmm_mirror's handling of ranges does not use a sequence count which results in this bug: CPU0 CPU1 hmm_range_wait_until_valid(range) valid =3D=3D true hmm_range_fault(range) hmm_invalidate_range_start() range->valid =3D false hmm_invalidate_range_end() range->valid =3D true hmm_range_valid(range) valid =3D=3D true Where the hmm_range_valid() should 
not have succeeded. Adding the required sequence count would make it nearly identical to the new mmu_interval_notifier. Instead replace the hmm_mirror stuff with mmu_interval_notifier. Co-existence of the two APIs is the first step. Reviewed-by: J=C3=A9r=C3=B4me Glisse Tested-by: Philip Yang Tested-by: Ralph Campbell Signed-off-by: Jason Gunthorpe Reviewed-by: Christoph Hellwig --- include/linux/hmm.h | 5 +++++ mm/hmm.c | 25 +++++++++++++++++++------ 2 files changed, 24 insertions(+), 6 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 3fec513b9c00f1..fbb35c78637e57 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -145,6 +145,9 @@ enum hmm_pfn_value_e { /* * struct hmm_range - track invalidation lock on virtual address range * + * @notifier: an optional mmu_interval_notifier + * @notifier_seq: when notifier is used this is the result of + * mmu_interval_read_begin() * @hmm: the core HMM structure this range is active against * @vma: the vm area struct for the range * @list: all range lock are on a list @@ -159,6 +162,8 @@ enum hmm_pfn_value_e { * @valid: pfns array did not change since it has been fill by an HMM func= tion */ struct hmm_range { + struct mmu_interval_notifier *notifier; + unsigned long notifier_seq; struct hmm *hmm; struct list_head list; unsigned long start; diff --git a/mm/hmm.c b/mm/hmm.c index 6b0136665407a3..8d060c5dabe37b 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -858,6 +858,14 @@ void hmm_range_unregister(struct hmm_range *range) } EXPORT_SYMBOL(hmm_range_unregister); =20 +static bool needs_retry(struct hmm_range *range) +{ + if (range->notifier) + return mmu_interval_check_retry(range->notifier, + range->notifier_seq); + return !range->valid; +} + static const struct mm_walk_ops hmm_walk_ops =3D { .pud_entry =3D hmm_vma_walk_pud, .pmd_entry =3D hmm_vma_walk_pmd, @@ -898,18 +906,23 @@ long hmm_range_fault(struct hmm_range *range, unsigne= d int flags) const unsigned long device_vma =3D VM_IO | VM_PFNMAP | VM_MIXEDMAP; unsigned long start =3D range->start, end; struct hmm_vma_walk hmm_vma_walk; - struct hmm *hmm =3D range->hmm; + struct mm_struct *mm; struct vm_area_struct *vma; int ret; =20 - lockdep_assert_held(&hmm->mmu_notifier.mm->mmap_sem); + if (range->notifier) + mm =3D range->notifier->mm; + else + mm =3D range->hmm->mmu_notifier.mm; + + lockdep_assert_held(&mm->mmap_sem); =20 do { /* If range is no longer valid force retry. */ - if (!range->valid) + if (needs_retry(range)) return -EBUSY; =20 - vma =3D find_vma(hmm->mmu_notifier.mm, start); + vma =3D find_vma(mm, start); if (vma =3D=3D NULL || (vma->vm_flags & device_vma)) return -EFAULT; =20 @@ -939,7 +952,7 @@ long hmm_range_fault(struct hmm_range *range, unsigned = int flags) start =3D hmm_vma_walk.last; =20 /* Keep trying while the range is valid. 
*/ - } while (ret =3D=3D -EBUSY && range->valid); + } while (ret =3D=3D -EBUSY && !needs_retry(range)); =20 if (ret) { unsigned long i; @@ -997,7 +1010,7 @@ long hmm_range_dma_map(struct hmm_range *range, struct= device *device, continue; =20 /* Check if range is being invalidated */ - if (!range->valid) { + if (needs_retry(range)) { ret =3D -EBUSY; goto unmap; } --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590277; cv=none; d=zoho.com; s=zohoarc; b=GCHm/RHJxR7B/MOIjZdxWQbnX/9NkuSVUOeAQXnYY0uplk6qRJXIYrbeEwQGe0xWA608aL3b7eKjMyYvodOb2eb2flOzHIaFnNseh2z9nuu7DFaLqPu1IpmC4caohQ6cHWUgXs/N0mnrwqvGgINz9aw202tin/tzhpF2FOM9SE0= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590277; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=EGt0R/VkG2GQIGLEzrYeT9NsePNSnitUIknS0Ae8XEA=; b=lVqj/aBE8gJofQ9rIMVJTwdfdtflkjhygOaqL3CsASUGO3uRg5fciONw3Gee1mJBORTZK3jumD4tpsgtEhN5fSFgj1lMKnat353WjUtv6w9X+9QfILoM3VCzLcU06RVvfg3HMWvcJ5/6HszjooopwmXOVdrnNGQM8l0m3yFoI1E= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590277978494.3961814675895; Tue, 12 Nov 2019 12:24:37 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchL-0003I4-Vw; Tue, 12 Nov 2019 20:23:35 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchK-0003HH-Ek for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:34 +0000 Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 351d1d9c-058a-11ea-984a-bc764e2007e4; Tue, 12 Nov 2019 20:22:54 +0000 (UTC) Received: by mail-qk1-x741.google.com with SMTP id 15so15687892qkh.6 for ; Tue, 12 Nov 2019 12:22:54 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. 
[142.162.113.180]) by smtp.gmail.com with ESMTPSA id j89sm10542127qte.72.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003js-9n; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 351d1d9c-058a-11ea-984a-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=msUbALhd2xtO/lF1E2JMgXZu+n1aQVj3OW/hKQoForA=; b=Ud3HNeMvcBpWwpT3gZXaL68W5jc4GRJ5drmnCYR67cEqOCjBlZs5P2wDSQe6CODuSB J1EAZioZQ9rcwECNjBjxhvN/h7A8p86VQI+/cscWfNwd3d0hIbRq/S2rxZ3SgPq3SQsh IC5AA3IVMduEINV7agpnd2NheQMqXocW+A0dljsNBHcTFpF8kfqMqZZ9OeN7jjwlh7Vu +RMljsG8VsNCOP//jt8ErHFHC5yQyz4YCmqouEU/yr/aAtN1riu5Te8FNu87M6hJsHPz F2H3fOv/F0OG7KYT7tZbF/C2LTE0BLosUZkpzCl4MCjvJ2aFpMdhYX7vyVQP+AuIs2dr 1tcw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=msUbALhd2xtO/lF1E2JMgXZu+n1aQVj3OW/hKQoForA=; b=ZwPfoH3g+1mhY5H0t7VjTDBiuJv7faMKMMBkJJVDXld/I4yWgknFhipD5CnZMHNtw2 XPGPy9FnJ9kvhfkUp8Bvm2ZkHSY750DyUvTBP5CqxZXCo76xnBCxX8YBh43b1m/2BAM7 BfCkmzC8tKbwPMDIndjsS6aKcZ8Nc+RnDJBtBdoNOFf4oUjK6F5QoEhbCxYmJApmyL8N JlprEYR0C0k6sNmaxJ1KuxVQcjuKBHbfVqRRWwIRh2YQTUPXvyF1U9xNy87m7yNONhes Wk562F/GM0ugvjkTd6ojWhv1ao+CeHDJId7iPBoDEQ69LbtbmZI6g+Sb51jEw5QX4h1P wJhw== X-Gm-Message-State: APjAAAUK1+OBDE/h9UE/E34OM3i85XD6x+EtReoxVSqIS8/0kcnLmubj RZXlX6K83uUznkkZJq/LejSUVg== X-Google-Smtp-Source: APXvYqwTjQHMd6Wnulr6l6RAuSHWZi+1OAZIts410/rBP3+s9ZvtJbjVwjBC9m86HVn1dWXuguyRLg== X-Received: by 2002:a37:388:: with SMTP id 130mr17246331qkd.378.1573590173939; Tue, 12 Nov 2019 12:22:53 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:21 -0400 Message-Id: <20191112202231.3856-5-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 04/14] mm/hmm: define the pre-processor related parts of hmm.h even if disabled X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe Only the function calls are stubbed out with static inlines that always fail. This is the standard way to write a header for an optional component and makes it easier for drivers that only optionally need HMM_MIRROR. 
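As a hypothetical illustration of what that buys a caller (driver code invented for this example, not part of the series): a driver that only optionally needs HMM can include the header unconditionally and handle the stub's error return at runtime, rather than wrapping its own calls in #ifdef CONFIG_HMM_MIRROR.

#include <linux/hmm.h>

/* Hypothetical helper; builds with or without CONFIG_HMM_MIRROR. */
static int demo_bind_mirror(struct hmm_mirror *mirror, struct mm_struct *mm)
{
	int ret;

	/* With CONFIG_HMM_MIRROR=n the header provides a stub returning -EOPNOTSUPP. */
	ret = hmm_mirror_register(mirror, mm);
	if (ret)
		return ret;	/* fall back to a non-HMM path */

	return 0;
}
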
Reviewed-by: J=C3=A9r=C3=B4me Glisse Tested-by: Ralph Campbell Signed-off-by: Jason Gunthorpe Reviewed-by: Christoph Hellwig --- include/linux/hmm.h | 59 ++++++++++++++++++++++++++++++++++++--------- kernel/fork.c | 1 - 2 files changed, 47 insertions(+), 13 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index fbb35c78637e57..cb69bf10dc788c 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -62,8 +62,6 @@ #include #include =20 -#ifdef CONFIG_HMM_MIRROR - #include #include #include @@ -374,6 +372,15 @@ struct hmm_mirror { struct list_head list; }; =20 +/* + * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that c= ase. + */ +#define HMM_FAULT_ALLOW_RETRY (1 << 0) + +/* Don't fault in missing PTEs, just snapshot the current state. */ +#define HMM_FAULT_SNAPSHOT (1 << 1) + +#ifdef CONFIG_HMM_MIRROR int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm); void hmm_mirror_unregister(struct hmm_mirror *mirror); =20 @@ -383,14 +390,6 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror); int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror); void hmm_range_unregister(struct hmm_range *range); =20 -/* - * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that c= ase. - */ -#define HMM_FAULT_ALLOW_RETRY (1 << 0) - -/* Don't fault in missing PTEs, just snapshot the current state. */ -#define HMM_FAULT_SNAPSHOT (1 << 1) - long hmm_range_fault(struct hmm_range *range, unsigned int flags); =20 long hmm_range_dma_map(struct hmm_range *range, @@ -401,6 +400,44 @@ long hmm_range_dma_unmap(struct hmm_range *range, struct device *device, dma_addr_t *daddrs, bool dirty); +#else +int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm) +{ + return -EOPNOTSUPP; +} + +void hmm_mirror_unregister(struct hmm_mirror *mirror) +{ +} + +int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror) +{ + return -EOPNOTSUPP; +} + +void hmm_range_unregister(struct hmm_range *range) +{ +} + +static inline long hmm_range_fault(struct hmm_range *range, unsigned int f= lags) +{ + return -EOPNOTSUPP; +} + +static inline long hmm_range_dma_map(struct hmm_range *range, + struct device *device, dma_addr_t *daddrs, + unsigned int flags) +{ + return -EOPNOTSUPP; +} + +static inline long hmm_range_dma_unmap(struct hmm_range *range, + struct device *device, + dma_addr_t *daddrs, bool dirty) +{ + return -EOPNOTSUPP; +} +#endif =20 /* * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a ran= ge @@ -411,6 +448,4 @@ long hmm_range_dma_unmap(struct hmm_range *range, */ #define HMM_RANGE_DEFAULT_TIMEOUT 1000 =20 -#endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */ - #endif /* LINUX_HMM_H */ diff --git a/kernel/fork.c b/kernel/fork.c index bcdf5312521036..ca39cfc404e3db 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -40,7 +40,6 @@ #include #include #include -#include #include #include #include --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of 
[142.162.113.180]) by smtp.gmail.com with ESMTPSA id o1sm11425992qtb.82.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003jy-C1; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 359d69e8-058a-11ea-b678-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ZzBbOWCxKZh+cdtoh3q/T1dV0tEgKiGKeoYY4ENd0lI=; b=hxlcUA35dIJ7ZTxuJ6I2F+X4UIQXWwD4F9B/KOdHgDO0MDZk7hbxmngA8xWhHfva0C GTO8bcyvcemO0MLGmFzF/zXMzOkdbSZ5S4yyHO9fJwd2bcTqIM/mPrOmqWncKIk9U2Ax NMGGAUdYfsMwfC22ZtNajVcLj0d3uLYGr2CEPCP/nLBXpjQ7BYKFo/dWTJEhswSRuzL0 Zr5FNpZqzYtIVtWp27UHffE0IPoNr1OEttLstivBmu6q+JSsCzebt44ktVwWBjFZO+aS ki2L9iCRTAJJcxBI3OwTgSV0FcASoFmHfaUIrliz1rRm3vacNtEtpnrefMkCyM5WcCNG wKkQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ZzBbOWCxKZh+cdtoh3q/T1dV0tEgKiGKeoYY4ENd0lI=; b=hm6ro+cJLQHYqMMvDQBEhEpL68S3dPQ44h/7MYQVKW775zg3ihtOD8DHzI+7Gk4hjT FUgpd9OEdLGxnBX6MTK/9rmpem/zY9GBylDJKhmpD7EDC+uYTe90GnUzxgvlKjBlWe53 y5ddWdOx0xJbrQyvf5BoafftyEkviHPtacu/gAtXYV79EWIixIwWUBPoi/AJ7Ey4Yv7e DUhMIqAAuYUIyGDeNHepMaBrhYoLMottLg8Hm2jncLHhvw3Tz2UlnR1iAo9+0C20Jk1u BZ53kZsYSO9JlbvTpGjZI/mUMmQl07ecl1MDMzLLIT7faXqJVUfqN3C1ROor5KAZQhbP xJRg== X-Gm-Message-State: APjAAAV2qY/Xh/aoVlsZDnyHkubnuk6C8KgY/n/HXlQTtNwM3vtrgctW sbzx/24ssRnxcw9RsaEW7ygj+A== X-Google-Smtp-Source: APXvYqxu7SJ/mVMkw3vmBcyZaKcGGZcaJusbCaEAqmtA8bZg06/oxd/+7uSG8a9wMIOFTEKi4zbLEg== X-Received: by 2002:ac8:30cd:: with SMTP id w13mr33149349qta.201.1573590174357; Tue, 12 Nov 2019 12:22:54 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:22 -0400 Message-Id: <20191112202231.3856-6-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 05/14] RDMA/odp: Use mmu_interval_notifier_insert() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Artemy Kovalyov , Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe Replace the internal interval tree based mmu notifier with the new common mmu_interval_notifier_insert() API. This removes a lot of code and fixes a deadlock that can be triggered in ODP: zap_page_range() mmu_notifier_invalidate_range_start() [..] ib_umem_notifier_invalidate_range_start() down_read(&per_mm->umem_rwsem) unmap_single_vma() [..] 
__split_huge_page_pmd() mmu_notifier_invalidate_range_start() [..] ib_umem_notifier_invalidate_range_start() down_read(&per_mm->umem_rwsem) // DEADLOCK mmu_notifier_invalidate_range_end() up_read(&per_mm->umem_rwsem) mmu_notifier_invalidate_range_end() up_read(&per_mm->umem_rwsem) The umem_rwsem is held across the range_start/end as the ODP algorithm for invalidate_range_end cannot tolerate changes to the interval tree. However, due to the nested invalidation regions the second down_read() can deadlock if there are competing writers. The new core code provides an alternative scheme to solve this problem. Fixes: ca748c39ea3f ("RDMA/umem: Get rid of per_mm->notifier_count") Tested-by: Artemy Kovalyov Signed-off-by: Jason Gunthorpe --- drivers/infiniband/core/device.c | 1 - drivers/infiniband/core/umem_odp.c | 303 ++++----------------------- drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 +- drivers/infiniband/hw/mlx5/mr.c | 3 +- drivers/infiniband/hw/mlx5/odp.c | 50 ++--- include/rdma/ib_umem_odp.h | 68 ++---- include/rdma/ib_verbs.h | 2 - 7 files changed, 82 insertions(+), 352 deletions(-) diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/dev= ice.c index 2dd2cfe9b56136..ac7924b3c73abe 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -2617,7 +2617,6 @@ void ib_set_device_ops(struct ib_device *dev, const s= truct ib_device_ops *ops) SET_DEVICE_OP(dev_ops, get_vf_config); SET_DEVICE_OP(dev_ops, get_vf_stats); SET_DEVICE_OP(dev_ops, init_port); - SET_DEVICE_OP(dev_ops, invalidate_range); SET_DEVICE_OP(dev_ops, iw_accept); SET_DEVICE_OP(dev_ops, iw_add_ref); SET_DEVICE_OP(dev_ops, iw_connect); diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/u= mem_odp.c index d7d5fadf0899ad..e42d44e501fd54 100644 --- a/drivers/infiniband/core/umem_odp.c +++ b/drivers/infiniband/core/umem_odp.c @@ -48,197 +48,33 @@ =20 #include "uverbs.h" =20 -static void ib_umem_notifier_start_account(struct ib_umem_odp *umem_odp) +static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, + const struct mmu_interval_notifier_ops *ops) { - mutex_lock(&umem_odp->umem_mutex); - if (umem_odp->notifiers_count++ =3D=3D 0) - /* - * Initialize the completion object for waiting on - * notifiers. Since notifier_count is zero, no one should be - * waiting right now. - */ - reinit_completion(&umem_odp->notifier_completion); - mutex_unlock(&umem_odp->umem_mutex); -} - -static void ib_umem_notifier_end_account(struct ib_umem_odp *umem_odp) -{ - mutex_lock(&umem_odp->umem_mutex); - /* - * This sequence increase will notify the QP page fault that the page - * that is going to be mapped in the spte could have been freed. - */ - ++umem_odp->notifiers_seq; - if (--umem_odp->notifiers_count =3D=3D 0) - complete_all(&umem_odp->notifier_completion); - mutex_unlock(&umem_odp->umem_mutex); -} - -static void ib_umem_notifier_release(struct mmu_notifier *mn, - struct mm_struct *mm) -{ - struct ib_ucontext_per_mm *per_mm =3D - container_of(mn, struct ib_ucontext_per_mm, mn); - struct rb_node *node; - - down_read(&per_mm->umem_rwsem); - if (!per_mm->mn.users) - goto out; - - for (node =3D rb_first_cached(&per_mm->umem_tree); node; - node =3D rb_next(node)) { - struct ib_umem_odp *umem_odp =3D - rb_entry(node, struct ib_umem_odp, interval_tree.rb); - - /* - * Increase the number of notifiers running, to prevent any - * further fault handling on this MR. 
- */ - ib_umem_notifier_start_account(umem_odp); - complete_all(&umem_odp->notifier_completion); - umem_odp->umem.ibdev->ops.invalidate_range( - umem_odp, ib_umem_start(umem_odp), - ib_umem_end(umem_odp)); - } - -out: - up_read(&per_mm->umem_rwsem); -} - -static int invalidate_range_start_trampoline(struct ib_umem_odp *item, - u64 start, u64 end, void *cookie) -{ - ib_umem_notifier_start_account(item); - item->umem.ibdev->ops.invalidate_range(item, start, end); - return 0; -} - -static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn, - const struct mmu_notifier_range *range) -{ - struct ib_ucontext_per_mm *per_mm =3D - container_of(mn, struct ib_ucontext_per_mm, mn); - int rc; - - if (mmu_notifier_range_blockable(range)) - down_read(&per_mm->umem_rwsem); - else if (!down_read_trylock(&per_mm->umem_rwsem)) - return -EAGAIN; - - if (!per_mm->mn.users) { - up_read(&per_mm->umem_rwsem); - /* - * At this point users is permanently zero and visible to this - * CPU without a lock, that fact is relied on to skip the unlock - * in range_end. - */ - return 0; - } - - rc =3D rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start, - range->end, - invalidate_range_start_trampoline, - mmu_notifier_range_blockable(range), - NULL); - if (rc) - up_read(&per_mm->umem_rwsem); - return rc; -} - -static int invalidate_range_end_trampoline(struct ib_umem_odp *item, u64 s= tart, - u64 end, void *cookie) -{ - ib_umem_notifier_end_account(item); - return 0; -} - -static void ib_umem_notifier_invalidate_range_end(struct mmu_notifier *mn, - const struct mmu_notifier_range *range) -{ - struct ib_ucontext_per_mm *per_mm =3D - container_of(mn, struct ib_ucontext_per_mm, mn); - - if (unlikely(!per_mm->mn.users)) - return; - - rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start, - range->end, - invalidate_range_end_trampoline, true, NULL); - up_read(&per_mm->umem_rwsem); -} - -static struct mmu_notifier *ib_umem_alloc_notifier(struct mm_struct *mm) -{ - struct ib_ucontext_per_mm *per_mm; - - per_mm =3D kzalloc(sizeof(*per_mm), GFP_KERNEL); - if (!per_mm) - return ERR_PTR(-ENOMEM); - - per_mm->umem_tree =3D RB_ROOT_CACHED; - init_rwsem(&per_mm->umem_rwsem); - - WARN_ON(mm !=3D current->mm); - rcu_read_lock(); - per_mm->tgid =3D get_task_pid(current->group_leader, PIDTYPE_PID); - rcu_read_unlock(); - return &per_mm->mn; -} - -static void ib_umem_free_notifier(struct mmu_notifier *mn) -{ - struct ib_ucontext_per_mm *per_mm =3D - container_of(mn, struct ib_ucontext_per_mm, mn); - - WARN_ON(!RB_EMPTY_ROOT(&per_mm->umem_tree.rb_root)); - - put_pid(per_mm->tgid); - kfree(per_mm); -} - -static const struct mmu_notifier_ops ib_umem_notifiers =3D { - .release =3D ib_umem_notifier_release, - .invalidate_range_start =3D ib_umem_notifier_invalidate_range_start, - .invalidate_range_end =3D ib_umem_notifier_invalidate_range_end, - .alloc_notifier =3D ib_umem_alloc_notifier, - .free_notifier =3D ib_umem_free_notifier, -}; - -static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp) -{ - struct ib_ucontext_per_mm *per_mm; - struct mmu_notifier *mn; int ret; =20 umem_odp->umem.is_odp =3D 1; + mutex_init(&umem_odp->umem_mutex); + if (!umem_odp->is_implicit_odp) { size_t page_size =3D 1UL << umem_odp->page_shift; + unsigned long start; + unsigned long end; size_t pages; =20 - umem_odp->interval_tree.start =3D - ALIGN_DOWN(umem_odp->umem.address, page_size); + start =3D ALIGN_DOWN(umem_odp->umem.address, page_size); if (check_add_overflow(umem_odp->umem.address, (unsigned 
long)umem_odp->umem.length, - &umem_odp->interval_tree.last)) + &end)) return -EOVERFLOW; - umem_odp->interval_tree.last =3D - ALIGN(umem_odp->interval_tree.last, page_size); - if (unlikely(umem_odp->interval_tree.last < page_size)) + end =3D ALIGN(end, page_size); + if (unlikely(end < page_size)) return -EOVERFLOW; =20 - pages =3D (umem_odp->interval_tree.last - - umem_odp->interval_tree.start) >> - umem_odp->page_shift; + pages =3D (end - start) >> umem_odp->page_shift; if (!pages) return -EINVAL; =20 - /* - * Note that the representation of the intervals in the - * interval tree considers the ending point as contained in - * the interval. - */ - umem_odp->interval_tree.last--; - umem_odp->page_list =3D kvcalloc( pages, sizeof(*umem_odp->page_list), GFP_KERNEL); if (!umem_odp->page_list) @@ -250,26 +86,13 @@ static inline int ib_init_umem_odp(struct ib_umem_odp = *umem_odp) ret =3D -ENOMEM; goto out_page_list; } - } =20 - mn =3D mmu_notifier_get(&ib_umem_notifiers, umem_odp->umem.owning_mm); - if (IS_ERR(mn)) { - ret =3D PTR_ERR(mn); - goto out_dma_list; + ret =3D mmu_interval_notifier_insert(&umem_odp->notifier, + umem_odp->umem.owning_mm, + start, end - start, ops); + if (ret) + goto out_dma_list; } - umem_odp->per_mm =3D per_mm =3D - container_of(mn, struct ib_ucontext_per_mm, mn); - - mutex_init(&umem_odp->umem_mutex); - init_completion(&umem_odp->notifier_completion); - - if (!umem_odp->is_implicit_odp) { - down_write(&per_mm->umem_rwsem); - interval_tree_insert(&umem_odp->interval_tree, - &per_mm->umem_tree); - up_write(&per_mm->umem_rwsem); - } - mmgrab(umem_odp->umem.owning_mm); =20 return 0; =20 @@ -305,8 +128,6 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct i= b_udata *udata, =20 if (!context) return ERR_PTR(-EIO); - if (WARN_ON_ONCE(!context->device->ops.invalidate_range)) - return ERR_PTR(-EINVAL); =20 umem_odp =3D kzalloc(sizeof(*umem_odp), GFP_KERNEL); if (!umem_odp) @@ -318,8 +139,10 @@ struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct = ib_udata *udata, umem_odp->is_implicit_odp =3D 1; umem_odp->page_shift =3D PAGE_SHIFT; =20 - ret =3D ib_init_umem_odp(umem_odp); + umem_odp->tgid =3D get_task_pid(current->group_leader, PIDTYPE_PID); + ret =3D ib_init_umem_odp(umem_odp, NULL); if (ret) { + put_pid(umem_odp->tgid); kfree(umem_odp); return ERR_PTR(ret); } @@ -336,8 +159,10 @@ EXPORT_SYMBOL(ib_umem_odp_alloc_implicit); * @addr: The starting userspace VA * @size: The length of the userspace VA */ -struct ib_umem_odp *ib_umem_odp_alloc_child(struct ib_umem_odp *root, - unsigned long addr, size_t size) +struct ib_umem_odp * +ib_umem_odp_alloc_child(struct ib_umem_odp *root, unsigned long addr, + size_t size, + const struct mmu_interval_notifier_ops *ops) { /* * Caller must ensure that root cannot be freed during the call to @@ -360,9 +185,12 @@ struct ib_umem_odp *ib_umem_odp_alloc_child(struct ib_= umem_odp *root, umem->writable =3D root->umem.writable; umem->owning_mm =3D root->umem.owning_mm; odp_data->page_shift =3D PAGE_SHIFT; + odp_data->notifier.ops =3D ops; =20 - ret =3D ib_init_umem_odp(odp_data); + odp_data->tgid =3D get_pid(root->tgid); + ret =3D ib_init_umem_odp(odp_data, ops); if (ret) { + put_pid(odp_data->tgid); kfree(odp_data); return ERR_PTR(ret); } @@ -383,7 +211,8 @@ EXPORT_SYMBOL(ib_umem_odp_alloc_child); * conjunction with MMU notifiers. 
*/ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long = addr, - size_t size, int access) + size_t size, int access, + const struct mmu_interval_notifier_ops *ops) { struct ib_umem_odp *umem_odp; struct ib_ucontext *context; @@ -398,8 +227,7 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *ud= ata, unsigned long addr, if (!context) return ERR_PTR(-EIO); =20 - if (WARN_ON_ONCE(!(access & IB_ACCESS_ON_DEMAND)) || - WARN_ON_ONCE(!context->device->ops.invalidate_range)) + if (WARN_ON_ONCE(!(access & IB_ACCESS_ON_DEMAND))) return ERR_PTR(-EINVAL); =20 umem_odp =3D kzalloc(sizeof(struct ib_umem_odp), GFP_KERNEL); @@ -411,6 +239,7 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *ud= ata, unsigned long addr, umem_odp->umem.address =3D addr; umem_odp->umem.writable =3D ib_access_writable(access); umem_odp->umem.owning_mm =3D mm =3D current->mm; + umem_odp->notifier.ops =3D ops; =20 umem_odp->page_shift =3D PAGE_SHIFT; if (access & IB_ACCESS_HUGETLB) { @@ -429,11 +258,14 @@ struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *= udata, unsigned long addr, up_read(&mm->mmap_sem); } =20 - ret =3D ib_init_umem_odp(umem_odp); + umem_odp->tgid =3D get_task_pid(current->group_leader, PIDTYPE_PID); + ret =3D ib_init_umem_odp(umem_odp, ops); if (ret) - goto err_free; + goto err_put_pid; return umem_odp; =20 +err_put_pid: + put_pid(umem_odp->tgid); err_free: kfree(umem_odp); return ERR_PTR(ret); @@ -442,8 +274,6 @@ EXPORT_SYMBOL(ib_umem_odp_get); =20 void ib_umem_odp_release(struct ib_umem_odp *umem_odp) { - struct ib_ucontext_per_mm *per_mm =3D umem_odp->per_mm; - /* * Ensure that no more pages are mapped in the umem. * @@ -455,28 +285,11 @@ void ib_umem_odp_release(struct ib_umem_odp *umem_odp) ib_umem_odp_unmap_dma_pages(umem_odp, ib_umem_start(umem_odp), ib_umem_end(umem_odp)); mutex_unlock(&umem_odp->umem_mutex); + mmu_interval_notifier_remove(&umem_odp->notifier); kvfree(umem_odp->dma_list); kvfree(umem_odp->page_list); + put_pid(umem_odp->tgid); } - - down_write(&per_mm->umem_rwsem); - if (!umem_odp->is_implicit_odp) { - interval_tree_remove(&umem_odp->interval_tree, - &per_mm->umem_tree); - complete_all(&umem_odp->notifier_completion); - } - /* - * NOTE! mmu_notifier_unregister() can happen between a start/end - * callback, resulting in a missing end, and thus an unbalanced - * lock. This doesn't really matter to us since we are about to kfree - * the memory that holds the lock, however LOCKDEP doesn't like this. - * Thus we call the mmu_notifier_put under the rwsem and test the - * internal users count to reliably see if we are past this point. - */ - mmu_notifier_put(&per_mm->mn); - up_write(&per_mm->umem_rwsem); - - mmdrop(umem_odp->umem.owning_mm); kfree(umem_odp); } EXPORT_SYMBOL(ib_umem_odp_release); @@ -501,7 +314,7 @@ EXPORT_SYMBOL(ib_umem_odp_release); */ static int ib_umem_odp_map_dma_single_page( struct ib_umem_odp *umem_odp, - int page_index, + unsigned int page_index, struct page *page, u64 access_mask, unsigned long current_seq) @@ -510,12 +323,7 @@ static int ib_umem_odp_map_dma_single_page( dma_addr_t dma_addr; int ret =3D 0; =20 - /* - * Note: we avoid writing if seq is different from the initial seq, to - * handle case of a racing notifier. This check also allows us to bail - * early if we have a notifier running in parallel with us. 
- */ - if (ib_umem_mmu_notifier_retry(umem_odp, current_seq)) { + if (mmu_interval_check_retry(&umem_odp->notifier, current_seq)) { ret =3D -EAGAIN; goto out; } @@ -618,7 +426,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_= odp, u64 user_virt, * existing beyond the lifetime of the originating process.. Presumably * mmget_not_zero will fail in this case. */ - owning_process =3D get_pid_task(umem_odp->per_mm->tgid, PIDTYPE_PID); + owning_process =3D get_pid_task(umem_odp->tgid, PIDTYPE_PID); if (!owning_process || !mmget_not_zero(owning_mm)) { ret =3D -EINVAL; goto out_put_task; @@ -762,32 +570,3 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *u= mem_odp, u64 virt, } } EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages); - -/* @last is not a part of the interval. See comment for function - * node_last. - */ -int rbt_ib_umem_for_each_in_range(struct rb_root_cached *root, - u64 start, u64 last, - umem_call_back cb, - bool blockable, - void *cookie) -{ - int ret_val =3D 0; - struct interval_tree_node *node, *next; - struct ib_umem_odp *umem; - - if (unlikely(start =3D=3D last)) - return ret_val; - - for (node =3D interval_tree_iter_first(root, start, last - 1); - node; node =3D next) { - /* TODO move the blockable decision up to the callback */ - if (!blockable) - return -EAGAIN; - next =3D interval_tree_iter_next(node, start, last - 1); - umem =3D container_of(node, struct ib_umem_odp, interval_tree); - ret_val =3D cb(umem, start, last, cookie) || ret_val; - } - - return ret_val; -} diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/m= lx5/mlx5_ib.h index f61d4005c6c379..108cadf9af1fda 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1263,8 +1263,6 @@ int mlx5_ib_odp_init_one(struct mlx5_ib_dev *ibdev); void mlx5_ib_odp_cleanup_one(struct mlx5_ib_dev *ibdev); int __init mlx5_ib_odp_init(void); void mlx5_ib_odp_cleanup(void); -void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, unsigned long = start, - unsigned long end); void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_ent *ent); void mlx5_odp_populate_klm(struct mlx5_klm *pklm, size_t offset, size_t nentries, struct mlx5_ib_mr *mr, int flags); @@ -1294,11 +1292,10 @@ mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, { return -EOPNOTSUPP; } -static inline void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, - unsigned long start, - unsigned long end){}; #endif /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */ =20 +extern const struct mmu_interval_notifier_ops mlx5_mn_ops; + /* Needed for rep profile */ void __mlx5_ib_remove(struct mlx5_ib_dev *dev, const struct mlx5_ib_profile *profile, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/m= r.c index 199f7959aaa510..fbe31830b22807 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -743,7 +743,8 @@ static int mr_umem_get(struct mlx5_ib_dev *dev, struct = ib_udata *udata, if (access_flags & IB_ACCESS_ON_DEMAND) { struct ib_umem_odp *odp; =20 - odp =3D ib_umem_odp_get(udata, start, length, access_flags); + odp =3D ib_umem_odp_get(udata, start, length, access_flags, + &mlx5_mn_ops); if (IS_ERR(odp)) { mlx5_ib_dbg(dev, "umem get failed (%ld)\n", PTR_ERR(odp)); diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/= odp.c index bcfc098466977e..63e0ebd1ae9d0c 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -241,17 +241,26 @@ static void destroy_unused_implicit_child_mr(struct m= lx5_ib_mr *mr) 
xa_unlock(&imr->implicit_children); } =20 -void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, unsigned long = start, - unsigned long end) +static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq) { + struct ib_umem_odp *umem_odp =3D + container_of(mni, struct ib_umem_odp, notifier); struct mlx5_ib_mr *mr; const u64 umr_block_mask =3D (MLX5_UMR_MTT_ALIGNMENT / sizeof(struct mlx5_mtt)) - 1; u64 idx =3D 0, blk_start_idx =3D 0; + unsigned long start; + unsigned long end; int in_block =3D 0; u64 addr; =20 + if (!mmu_notifier_range_blockable(range)) + return false; + mutex_lock(&umem_odp->umem_mutex); + mmu_interval_set_seq(mni, cur_seq); /* * If npages is zero then umem_odp->private may not be setup yet. This * does not complete until after the first page is mapped for DMA. @@ -260,8 +269,8 @@ void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_= odp, unsigned long start, goto out; mr =3D umem_odp->private; =20 - start =3D max_t(u64, ib_umem_start(umem_odp), start); - end =3D min_t(u64, ib_umem_end(umem_odp), end); + start =3D max_t(u64, ib_umem_start(umem_odp), range->start); + end =3D min_t(u64, ib_umem_end(umem_odp), range->end); =20 /* * Iteration one - zap the HW's MTTs. The notifiers_count ensures that @@ -312,8 +321,13 @@ void mlx5_ib_invalidate_range(struct ib_umem_odp *umem= _odp, unsigned long start, destroy_unused_implicit_child_mr(mr); out: mutex_unlock(&umem_odp->umem_mutex); + return true; } =20 +const struct mmu_interval_notifier_ops mlx5_mn_ops =3D { + .invalidate =3D mlx5_ib_invalidate_range, +}; + void mlx5_ib_internal_fill_odp_caps(struct mlx5_ib_dev *dev) { struct ib_odp_caps *caps =3D &dev->odp_caps; @@ -414,7 +428,7 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct = mlx5_ib_mr *imr, =20 odp =3D ib_umem_odp_alloc_child(to_ib_umem_odp(imr->umem), idx * MLX5_IMR_MTT_SIZE, - MLX5_IMR_MTT_SIZE); + MLX5_IMR_MTT_SIZE, &mlx5_mn_ops); if (IS_ERR(odp)) return ERR_CAST(odp); =20 @@ -600,8 +614,9 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, str= uct ib_umem_odp *odp, u64 user_va, size_t bcnt, u32 *bytes_mapped, u32 flags) { - int current_seq, page_shift, ret, np; + int page_shift, ret, np; bool downgrade =3D flags & MLX5_PF_FLAGS_DOWNGRADE; + unsigned long current_seq; u64 access_mask; u64 start_idx, page_mask; =20 @@ -613,12 +628,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, st= ruct ib_umem_odp *odp, if (odp->umem.writable && !downgrade) access_mask |=3D ODP_WRITE_ALLOWED_BIT; =20 - current_seq =3D READ_ONCE(odp->notifiers_seq); - /* - * Ensure the sequence number is valid for some time before we call - * gup. 
- */ - smp_rmb(); + current_seq =3D mmu_interval_read_begin(&odp->notifier); =20 np =3D ib_umem_odp_map_dma_pages(odp, user_va, bcnt, access_mask, current_seq); @@ -626,7 +636,7 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, str= uct ib_umem_odp *odp, return np; =20 mutex_lock(&odp->umem_mutex); - if (!ib_umem_mmu_notifier_retry(odp, current_seq)) { + if (!mmu_interval_read_retry(&odp->notifier, current_seq)) { /* * No need to check whether the MTTs really belong to * this MR, since ib_umem_odp_map_dma_pages already @@ -656,19 +666,6 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, st= ruct ib_umem_odp *odp, return np << (page_shift - PAGE_SHIFT); =20 out: - if (ret =3D=3D -EAGAIN) { - unsigned long timeout =3D msecs_to_jiffies(MMU_NOTIFIER_TIMEOUT); - - if (!wait_for_completion_timeout(&odp->notifier_completion, - timeout)) { - mlx5_ib_warn( - mr->dev, - "timeout waiting for mmu notifier. seq %d against %d. notifiers_count= =3D%d\n", - current_seq, odp->notifiers_seq, - odp->notifiers_count); - } - } - return ret; } =20 @@ -1609,7 +1606,6 @@ void mlx5_odp_init_mr_cache_entry(struct mlx5_cache_e= nt *ent) =20 static const struct ib_device_ops mlx5_ib_dev_odp_ops =3D { .advise_mr =3D mlx5_ib_advise_mr, - .invalidate_range =3D mlx5_ib_invalidate_range, }; =20 int mlx5_ib_odp_init_one(struct mlx5_ib_dev *dev) diff --git a/include/rdma/ib_umem_odp.h b/include/rdma/ib_umem_odp.h index 09b0e4494986a9..81429acc825774 100644 --- a/include/rdma/ib_umem_odp.h +++ b/include/rdma/ib_umem_odp.h @@ -35,11 +35,11 @@ =20 #include #include -#include =20 struct ib_umem_odp { struct ib_umem umem; - struct ib_ucontext_per_mm *per_mm; + struct mmu_interval_notifier notifier; + struct pid *tgid; =20 /* * An array of the pages included in the on-demand paging umem. @@ -62,13 +62,8 @@ struct ib_umem_odp { struct mutex umem_mutex; void *private; /* for the HW driver to use. */ =20 - int notifiers_seq; - int notifiers_count; int npages; =20 - /* Tree tracking */ - struct interval_tree_node interval_tree; - /* * An implicit odp umem cannot be DMA mapped, has 0 length, and serves * only as an anchor for the driver to hold onto the per_mm. FIXME: @@ -77,7 +72,6 @@ struct ib_umem_odp { */ bool is_implicit_odp; =20 - struct completion notifier_completion; unsigned int page_shift; }; =20 @@ -89,13 +83,13 @@ static inline struct ib_umem_odp *to_ib_umem_odp(struct= ib_umem *umem) /* Returns the first page of an ODP umem. */ static inline unsigned long ib_umem_start(struct ib_umem_odp *umem_odp) { - return umem_odp->interval_tree.start; + return umem_odp->notifier.interval_tree.start; } =20 /* Returns the address of the page after the last one of an ODP umem. 
*/ static inline unsigned long ib_umem_end(struct ib_umem_odp *umem_odp) { - return umem_odp->interval_tree.last + 1; + return umem_odp->notifier.interval_tree.last + 1; } =20 static inline size_t ib_umem_odp_num_pages(struct ib_umem_odp *umem_odp) @@ -119,21 +113,15 @@ static inline size_t ib_umem_odp_num_pages(struct ib_= umem_odp *umem_odp) =20 #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING =20 -struct ib_ucontext_per_mm { - struct mmu_notifier mn; - struct pid *tgid; - - struct rb_root_cached umem_tree; - /* Protects umem_tree */ - struct rw_semaphore umem_rwsem; -}; - -struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, unsigned long = addr, - size_t size, int access); +struct ib_umem_odp * +ib_umem_odp_get(struct ib_udata *udata, unsigned long addr, size_t size, + int access, const struct mmu_interval_notifier_ops *ops); struct ib_umem_odp *ib_umem_odp_alloc_implicit(struct ib_udata *udata, int access); -struct ib_umem_odp *ib_umem_odp_alloc_child(struct ib_umem_odp *root_umem, - unsigned long addr, size_t size); +struct ib_umem_odp * +ib_umem_odp_alloc_child(struct ib_umem_odp *root_umem, unsigned long addr, + size_t size, + const struct mmu_interval_notifier_ops *ops); void ib_umem_odp_release(struct ib_umem_odp *umem_odp); =20 int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 start_offs= et, @@ -143,39 +131,11 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *ume= m_odp, u64 start_offset, void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 start_o= ffset, u64 bound); =20 -typedef int (*umem_call_back)(struct ib_umem_odp *item, u64 start, u64 end, - void *cookie); -/* - * Call the callback on each ib_umem in the range. Returns the logical or = of - * the return values of the functions called. - */ -int rbt_ib_umem_for_each_in_range(struct rb_root_cached *root, - u64 start, u64 end, - umem_call_back cb, - bool blockable, void *cookie); - -static inline int ib_umem_mmu_notifier_retry(struct ib_umem_odp *umem_odp, - unsigned long mmu_seq) -{ - /* - * This code is strongly based on the KVM code from - * mmu_notifier_retry. Should be called with - * the relevant locks taken (umem_odp->umem_mutex - * and the ucontext umem_mutex semaphore locked for read). 
- */ - - if (unlikely(umem_odp->notifiers_count)) - return 1; - if (umem_odp->notifiers_seq !=3D mmu_seq) - return 1; - return 0; -} - #else /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */ =20 -static inline struct ib_umem_odp *ib_umem_odp_get(struct ib_udata *udata, - unsigned long addr, - size_t size, int access) +static inline struct ib_umem_odp * +ib_umem_odp_get(struct ib_udata *udata, unsigned long addr, size_t size, + int access, const struct mmu_interval_notifier_ops *ops) { return ERR_PTR(-EINVAL); } diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 6a47ba85c54c11..2c30c859ae0d13 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2422,8 +2422,6 @@ struct ib_device_ops { u64 iova); int (*unmap_fmr)(struct list_head *fmr_list); int (*dealloc_fmr)(struct ib_fmr *fmr); - void (*invalidate_range)(struct ib_umem_odp *umem_odp, - unsigned long start, unsigned long end); int (*attach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid); int (*detach_mcast)(struct ib_qp *qp, union ib_gid *gid, u16 lid); struct ib_xrcd *(*alloc_xrcd)(struct ib_device *device, --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590267; cv=none; d=zoho.com; s=zohoarc; b=bpvLofd7LZUWImA+dTj9F6AoKbIgt8kJoVC9LF7Psk6pKUVM6BlRQqfSl1I9/1Ho0vgZ6CMucYbS8zkkYZYP0ZUeIiqDeA2vsIW9U6tHEyTlmDEJg31js/LXvd484IesXXtm0Wf76bi5hSXC+35e4VijA8HkqRSKXFVD+Git4Ok= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590267; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=6MhueyuhRkQCWYa7UjVMLNCpN12XIedAVp+sUNH/gOg=; b=AM9+KQzWr86nJz91I70gr2WRPv2DW7rCpe2YldliWMCrlj4vbMKE5YNScBUdR7Hj+FFQiIEzfrJx0yyp1aGJ2RNlp/AfsIwWlMn/ZgAwrP4kLBJI5RPrn42shJl2+3/Db5UfsEc3M3O642C0BslZ44fa3xgTFwtE4ijVctbp82s= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 15735902675571.146571310347099; Tue, 12 Nov 2019 12:24:27 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchC-0003C7-Bl; Tue, 12 Nov 2019 20:23:26 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchA-0003B4-EX for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:24 +0000 Received: from mail-qt1-x844.google.com (unknown [2607:f8b0:4864:20::844]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 343e4392-058a-11ea-adbe-bc764e2007e4; Tue, 12 Nov 2019 20:22:52 +0000 (UTC) Received: by 
mail-qt1-x844.google.com with SMTP id p20so21248428qtq.5 for ; Tue, 12 Nov 2019 12:22:52 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. [142.162.113.180]) by smtp.gmail.com with ESMTPSA id q16sm7487987qkm.27.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003k4-DE; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 343e4392-058a-11ea-adbe-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=BSFUg/W9A6HnwsbPbuDzq/9L6ti4sGNMmL25wGbaBxI=; b=U5tMGzDohM88Dhb5OT+KNGasGrxJPURV3VJCgcEbLNaNLggNPtNvaJWNxh0oGCxd+G e1TTaZrAQm6MtqEffeVMDJceNyB8NHz11Rhj2PRrGCd72azzt1/3JDGtzA2cy4YX6/0u mAW2VzEiinIlE+wrNwjawDIV7wV6QvRGUW8Ksp2RYkql90eX+Syq6crGDA8ezIa/znBG nlMhQqdyxhN7pb6Jafd7qjYQ0G5GzVjTbGXchwZj6vkoj/p7ny/lKxLlNyJPBZEnc930 TG5K5USOVXpcnWbR/x/G0Wx3Ef54r2hr8Dpu4fajLIw10HvmTDAD5WQT4pVIm1hLNM6F P6MA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=BSFUg/W9A6HnwsbPbuDzq/9L6ti4sGNMmL25wGbaBxI=; b=I5GU4kRm5aT+pMEFfDkOCusHcFZ2Qjtq1f2O8mb6wsklnZZi6idQ4tmZCSIkqt9/fo FF2mnroxLvbObuBiBVgBn/MrPvVq+Vs5bfZCmOt5yfR0udDai3x37cpaqeoseQH2ezMj rsG+62DfqRDCTn+MErpRiaSAUzWkQaYsK8ii8/JdpG5kc4ENSZO2KCmypOv0BnVzXFDr YE4LaWT/995ePTP8Zq8r6fJj04e6o+sbH3bTRFflToLUOch4SpZ70QDQ4hejJQcpf4Fq LsYvcfOGwO1ZEfHSPH9nQVpDzQJ6II1e94NF3XsB5qOdrT51K+9+KjFtO1xJGwK4Zzdc 6QCA== X-Gm-Message-State: APjAAAUednS1R+HKJ3XUk8jFjC4pwegpcEa9GKWlb1/3aeMR6iPx+x/O g1z4QHkPxFSi+4CeX88Ctzc0OA== X-Google-Smtp-Source: APXvYqyh7GmjUHdAPkfQnL2h1rs4I1Aq/QuqAEqebx0t7FVTldSAgXXVkrqtZm6Vk74H36QsCmVYvg== X-Received: by 2002:ac8:f88:: with SMTP id b8mr33625110qtk.382.1573590172345; Tue, 12 Nov 2019 12:22:52 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:23 -0400 Message-Id: <20191112202231.3856-7-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 06/14] RDMA/hfi1: Use mmu_interval_notifier_insert for user_exp_rcv X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe This converts one of the two users of mmu_notifiers to use the new API. The conversion is fairly straightforward, however the existing use of notifiers here seems to be racey. 
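For context, the shape of the new API referred to here (and used throughout this series) is sketched below. The mydrv_* names are hypothetical and the bodies are deliberately simplified; this shows the insert / invalidate / read_begin / read_retry pattern under the signatures introduced by the core patches, not the hfi1 code itself.

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>
#include <linux/sched.h>

struct mydrv_buf {
	struct mmu_interval_notifier notifier;
	struct mutex lock;	/* serializes mapping against the invalidate callback */
	bool mapped;
};

static bool mydrv_invalidate(struct mmu_interval_notifier *mni,
			     const struct mmu_notifier_range *range,
			     unsigned long cur_seq)
{
	struct mydrv_buf *buf = container_of(mni, struct mydrv_buf, notifier);

	/* Non-blockable contexts are asked to come back with a blockable call. */
	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&buf->lock);
	mmu_interval_set_seq(mni, cur_seq);	/* makes a later read_retry() fail */
	buf->mapped = false;			/* a real driver would zap its device mappings here */
	mutex_unlock(&buf->lock);
	return true;
}

static const struct mmu_interval_notifier_ops mydrv_mn_ops = {
	.invalidate = mydrv_invalidate,
};

static int mydrv_map(struct mydrv_buf *buf, unsigned long addr,
		     unsigned long length)
{
	unsigned long seq;
	int ret;

	mutex_init(&buf->lock);
	ret = mmu_interval_notifier_insert(&buf->notifier, current->mm,
					   addr, length, &mydrv_mn_ops);
	if (ret)
		return ret;

	do {
		seq = mmu_interval_read_begin(&buf->notifier);
		/* pin/fault pages here, e.g. via get_user_pages() or hmm_range_fault() */
		mutex_lock(&buf->lock);
		if (mmu_interval_read_retry(&buf->notifier, seq)) {
			/* an invalidation raced with us: undo the pinning and retry */
			mutex_unlock(&buf->lock);
			continue;
		}
		buf->mapped = true;	/* program the device under the same lock */
		mutex_unlock(&buf->lock);
	} while (!buf->mapped);

	return 0;
}

Tear-down is the mirror image: drop the device mappings, then call mmu_interval_notifier_remove(&buf->notifier) once no further faults can occur.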
Tested-by: Dennis Dalessandro Signed-off-by: Jason Gunthorpe --- drivers/infiniband/hw/hfi1/file_ops.c | 2 +- drivers/infiniband/hw/hfi1/hfi.h | 2 +- drivers/infiniband/hw/hfi1/user_exp_rcv.c | 146 +++++++++------------- drivers/infiniband/hw/hfi1/user_exp_rcv.h | 3 +- 4 files changed, 60 insertions(+), 93 deletions(-) diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/= hfi1/file_ops.c index f9a7e9d29c8ba2..7c5e3fb224139a 100644 --- a/drivers/infiniband/hw/hfi1/file_ops.c +++ b/drivers/infiniband/hw/hfi1/file_ops.c @@ -1138,7 +1138,7 @@ static int get_ctxt_info(struct hfi1_filedata *fd, un= signed long arg, u32 len) HFI1_CAP_UGET_MASK(uctxt->flags, MASK) | HFI1_CAP_KGET_MASK(uctxt->flags, K2U); /* adjust flag if this fd is not able to cache */ - if (!fd->handler) + if (!fd->use_mn) cinfo.runtime_flags |=3D HFI1_CAP_TID_UNMAP; /* no caching */ =20 cinfo.num_active =3D hfi1_count_active_units(); diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/= hfi.h index fa45350a9a1d32..fc10d65fc3e13c 100644 --- a/drivers/infiniband/hw/hfi1/hfi.h +++ b/drivers/infiniband/hw/hfi1/hfi.h @@ -1444,7 +1444,7 @@ struct hfi1_filedata { /* for cpu affinity; -1 if none */ int rec_cpu_num; u32 tid_n_pinned; - struct mmu_rb_handler *handler; + bool use_mn; struct tid_rb_node **entry_to_rb; spinlock_t tid_lock; /* protect tid_[limit,used] counters */ u32 tid_limit; diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.c b/drivers/infiniband= /hw/hfi1/user_exp_rcv.c index 3592a9ec155e85..75a378162162d3 100644 --- a/drivers/infiniband/hw/hfi1/user_exp_rcv.c +++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.c @@ -59,11 +59,11 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd, struct tid_user_buf *tbuf, u32 rcventry, struct tid_group *grp, u16 pageidx, unsigned int npages); -static int tid_rb_insert(void *arg, struct mmu_rb_node *node); static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata, struct tid_rb_node *tnode); -static void tid_rb_remove(void *arg, struct mmu_rb_node *node); -static int tid_rb_invalidate(void *arg, struct mmu_rb_node *mnode); +static bool tid_rb_invalidate(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq); static int program_rcvarray(struct hfi1_filedata *fd, struct tid_user_buf = *, struct tid_group *grp, unsigned int start, u16 count, @@ -73,10 +73,8 @@ static int unprogram_rcvarray(struct hfi1_filedata *fd, = u32 tidinfo, struct tid_group **grp); static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *n= ode); =20 -static struct mmu_rb_ops tid_rb_ops =3D { - .insert =3D tid_rb_insert, - .remove =3D tid_rb_remove, - .invalidate =3D tid_rb_invalidate +static const struct mmu_interval_notifier_ops tid_mn_ops =3D { + .invalidate =3D tid_rb_invalidate, }; =20 /* @@ -87,7 +85,6 @@ static struct mmu_rb_ops tid_rb_ops =3D { int hfi1_user_exp_rcv_init(struct hfi1_filedata *fd, struct hfi1_ctxtdata *uctxt) { - struct hfi1_devdata *dd =3D uctxt->dd; int ret =3D 0; =20 spin_lock_init(&fd->tid_lock); @@ -109,20 +106,7 @@ int hfi1_user_exp_rcv_init(struct hfi1_filedata *fd, fd->entry_to_rb =3D NULL; return -ENOMEM; } - - /* - * Register MMU notifier callbacks. If the registration - * fails, continue without TID caching for this context. 
- */ - ret =3D hfi1_mmu_rb_register(fd, fd->mm, &tid_rb_ops, - dd->pport->hfi1_wq, - &fd->handler); - if (ret) { - dd_dev_info(dd, - "Failed MMU notifier registration %d\n", - ret); - ret =3D 0; - } + fd->use_mn =3D true; } =20 /* @@ -139,7 +123,7 @@ int hfi1_user_exp_rcv_init(struct hfi1_filedata *fd, * init. */ spin_lock(&fd->tid_lock); - if (uctxt->subctxt_cnt && fd->handler) { + if (uctxt->subctxt_cnt && fd->use_mn) { u16 remainder; =20 fd->tid_limit =3D uctxt->expected_count / uctxt->subctxt_cnt; @@ -158,18 +142,10 @@ void hfi1_user_exp_rcv_free(struct hfi1_filedata *fd) { struct hfi1_ctxtdata *uctxt =3D fd->uctxt; =20 - /* - * The notifier would have been removed when the process'es mm - * was freed. - */ - if (fd->handler) { - hfi1_mmu_rb_unregister(fd->handler); - } else { - if (!EXP_TID_SET_EMPTY(uctxt->tid_full_list)) - unlock_exp_tids(uctxt, &uctxt->tid_full_list, fd); - if (!EXP_TID_SET_EMPTY(uctxt->tid_used_list)) - unlock_exp_tids(uctxt, &uctxt->tid_used_list, fd); - } + if (!EXP_TID_SET_EMPTY(uctxt->tid_full_list)) + unlock_exp_tids(uctxt, &uctxt->tid_full_list, fd); + if (!EXP_TID_SET_EMPTY(uctxt->tid_used_list)) + unlock_exp_tids(uctxt, &uctxt->tid_used_list, fd); =20 kfree(fd->invalid_tids); fd->invalid_tids =3D NULL; @@ -201,7 +177,7 @@ static void unpin_rcv_pages(struct hfi1_filedata *fd, =20 if (mapped) { pci_unmap_single(dd->pcidev, node->dma_addr, - node->mmu.len, PCI_DMA_FROMDEVICE); + node->npages * PAGE_SIZE, PCI_DMA_FROMDEVICE); pages =3D &node->pages[idx]; } else { pages =3D &tidbuf->pages[idx]; @@ -777,8 +753,8 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd, return -EFAULT; } =20 - node->mmu.addr =3D tbuf->vaddr + (pageidx * PAGE_SIZE); - node->mmu.len =3D npages * PAGE_SIZE; + node->notifier.ops =3D &tid_mn_ops; + node->fdata =3D fd; node->phys =3D page_to_phys(pages[0]); node->npages =3D npages; node->rcventry =3D rcventry; @@ -787,23 +763,34 @@ static int set_rcvarray_entry(struct hfi1_filedata *f= d, node->freed =3D false; memcpy(node->pages, pages, sizeof(struct page *) * npages); =20 - if (!fd->handler) - ret =3D tid_rb_insert(fd, &node->mmu); - else - ret =3D hfi1_mmu_rb_insert(fd->handler, &node->mmu); - - if (ret) { - hfi1_cdbg(TID, "Failed to insert RB node %u 0x%lx, 0x%lx %d", - node->rcventry, node->mmu.addr, node->phys, ret); - pci_unmap_single(dd->pcidev, phys, npages * PAGE_SIZE, - PCI_DMA_FROMDEVICE); - kfree(node); - return -EFAULT; + if (fd->use_mn) { + ret =3D mmu_interval_notifier_insert( + &node->notifier, tbuf->vaddr + (pageidx * PAGE_SIZE), + npages * PAGE_SIZE, fd->mm); + if (ret) + goto out_unmap; + /* + * FIXME: This is in the wrong order, the notifier should be + * established before the pages are pinned by pin_rcv_pages. 
+ */ + mmu_interval_read_begin(&node->notifier); } + fd->entry_to_rb[node->rcventry - uctxt->expected_base] =3D node; + hfi1_put_tid(dd, rcventry, PT_EXPECTED, phys, ilog2(npages) + 1); trace_hfi1_exp_tid_reg(uctxt->ctxt, fd->subctxt, rcventry, npages, - node->mmu.addr, node->phys, phys); + node->notifier.interval_tree.start, node->phys, + phys); return 0; + +out_unmap: + hfi1_cdbg(TID, "Failed to insert RB node %u 0x%lx, 0x%lx %d", + node->rcventry, node->notifier.interval_tree.start, + node->phys, ret); + pci_unmap_single(dd->pcidev, phys, npages * PAGE_SIZE, + PCI_DMA_FROMDEVICE); + kfree(node); + return -EFAULT; } =20 static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo, @@ -833,10 +820,9 @@ static int unprogram_rcvarray(struct hfi1_filedata *fd= , u32 tidinfo, if (grp) *grp =3D node->grp; =20 - if (!fd->handler) - cacheless_tid_rb_remove(fd, node); - else - hfi1_mmu_rb_remove(fd->handler, &node->mmu); + if (fd->use_mn) + mmu_interval_notifier_remove(&node->notifier); + cacheless_tid_rb_remove(fd, node); =20 return 0; } @@ -847,7 +833,8 @@ static void clear_tid_node(struct hfi1_filedata *fd, st= ruct tid_rb_node *node) struct hfi1_devdata *dd =3D uctxt->dd; =20 trace_hfi1_exp_tid_unreg(uctxt->ctxt, fd->subctxt, node->rcventry, - node->npages, node->mmu.addr, node->phys, + node->npages, + node->notifier.interval_tree.start, node->phys, node->dma_addr); =20 /* @@ -894,30 +881,29 @@ static void unlock_exp_tids(struct hfi1_ctxtdata *uct= xt, if (!node || node->rcventry !=3D rcventry) continue; =20 + if (fd->use_mn) + mmu_interval_notifier_remove( + &node->notifier); cacheless_tid_rb_remove(fd, node); } } } } =20 -/* - * Always return 0 from this function. A non-zero return indicates that t= he - * remove operation will be called and that memory should be unpinned. - * However, the driver cannot unpin out from under PSM. Instead, retain t= he - * memory (by returning 0) and inform PSM that the memory is going away. = PSM - * will call back later when it has removed the memory from its list. 
- */ -static int tid_rb_invalidate(void *arg, struct mmu_rb_node *mnode) +static bool tid_rb_invalidate(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq) { - struct hfi1_filedata *fdata =3D arg; - struct hfi1_ctxtdata *uctxt =3D fdata->uctxt; struct tid_rb_node *node =3D - container_of(mnode, struct tid_rb_node, mmu); + container_of(mni, struct tid_rb_node, notifier); + struct hfi1_filedata *fdata =3D node->fdata; + struct hfi1_ctxtdata *uctxt =3D fdata->uctxt; =20 if (node->freed) - return 0; + return true; =20 - trace_hfi1_exp_tid_inval(uctxt->ctxt, fdata->subctxt, node->mmu.addr, + trace_hfi1_exp_tid_inval(uctxt->ctxt, fdata->subctxt, + node->notifier.interval_tree.start, node->rcventry, node->npages, node->dma_addr); node->freed =3D true; =20 @@ -946,18 +932,7 @@ static int tid_rb_invalidate(void *arg, struct mmu_rb_= node *mnode) fdata->invalid_tid_idx++; } spin_unlock(&fdata->invalid_lock); - return 0; -} - -static int tid_rb_insert(void *arg, struct mmu_rb_node *node) -{ - struct hfi1_filedata *fdata =3D arg; - struct tid_rb_node *tnode =3D - container_of(node, struct tid_rb_node, mmu); - u32 base =3D fdata->uctxt->expected_base; - - fdata->entry_to_rb[tnode->rcventry - base] =3D tnode; - return 0; + return true; } =20 static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata, @@ -968,12 +943,3 @@ static void cacheless_tid_rb_remove(struct hfi1_fileda= ta *fdata, fdata->entry_to_rb[tnode->rcventry - base] =3D NULL; clear_tid_node(fdata, tnode); } - -static void tid_rb_remove(void *arg, struct mmu_rb_node *node) -{ - struct hfi1_filedata *fdata =3D arg; - struct tid_rb_node *tnode =3D - container_of(node, struct tid_rb_node, mmu); - - cacheless_tid_rb_remove(fdata, tnode); -} diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.h b/drivers/infiniband= /hw/hfi1/user_exp_rcv.h index 43b105de1d5427..6257eee083a1a3 100644 --- a/drivers/infiniband/hw/hfi1/user_exp_rcv.h +++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.h @@ -65,7 +65,8 @@ struct tid_user_buf { }; =20 struct tid_rb_node { - struct mmu_rb_node mmu; + struct mmu_interval_notifier notifier; + struct hfi1_filedata *fdata; unsigned long phys; struct tid_group *grp; u32 rcventry; --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590260; cv=none; d=zoho.com; s=zohoarc; b=XqDFtraZ2ineaa4Oy8DUALmsaS2DdGaOen2rXxqlv0n2SBJNXenjT7axfLqjg+ndw1eA6s+Tqc3TeGl63ZNh5/7gjBfQeO8hX+WtI8b6hyTBIneQoQNSy+Y1SM9KnC16L5a6DNRsY5z89hRje90Wtqe0bHY3qDM3H2VVwDLW/oc= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590260; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=FWGrsOWqXo3SZrODr/G3WLymdA2M2Gfuqrz99smQ6Kg=; 
From: Jason Gunthorpe
To: linux-mm@kvack.org, Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling@amd.com
Date: Tue, 12 Nov 2019 16:22:24 -0400
Message-Id: <20191112202231.3856-8-jgg@ziepe.ca>
X-Mailer: git-send-email 2.24.0
In-Reply-To:
<20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 07/14] drm/radeon: use mmu_interval_notifier_insert X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe The new API is an exact match for the needs of radeon. For some reason radeon tries to remove overlapping ranges from the interval tree, but interval trees (and mmu_interval_notifier_insert()) support overlapping ranges directly. Simply delete all this code. Since this driver is missing a invalidate_range_end callback, but still calls get_user_pages(), it cannot be correct against all races. Reviewed-by: Christian K=C3=B6nig Signed-off-by: Jason Gunthorpe --- drivers/gpu/drm/radeon/radeon.h | 9 +- drivers/gpu/drm/radeon/radeon_mn.c | 218 ++++++----------------------- 2 files changed, 51 insertions(+), 176 deletions(-) diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeo= n.h index d59b004f669583..30e32adc1fc666 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -68,6 +68,10 @@ #include #include =20 +#ifdef CONFIG_MMU_NOTIFIER +#include +#endif + #include #include #include @@ -509,8 +513,9 @@ struct radeon_bo { struct ttm_bo_kmap_obj dma_buf_vmap; pid_t pid; =20 - struct radeon_mn *mn; - struct list_head mn_list; +#ifdef CONFIG_MMU_NOTIFIER + struct mmu_interval_notifier notifier; +#endif }; #define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, tbo.= base) =20 diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/ra= deon_mn.c index dbab9a3a969b9e..f93829f08a4dc1 100644 --- a/drivers/gpu/drm/radeon/radeon_mn.c +++ b/drivers/gpu/drm/radeon/radeon_mn.c @@ -36,131 +36,51 @@ =20 #include "radeon.h" =20 -struct radeon_mn { - struct mmu_notifier mn; - - /* objects protected by lock */ - struct mutex lock; - struct rb_root_cached objects; -}; - -struct radeon_mn_node { - struct interval_tree_node it; - struct list_head bos; -}; - /** - * radeon_mn_invalidate_range_start - callback to notify about mm change + * radeon_mn_invalidate - callback to notify about mm change * * @mn: our notifier - * @mn: the mm this callback is about - * @start: start of updated range - * @end: end of updated range + * @range: the VMA under invalidation * * We block for all BOs between start and end to be idle and * unmap them by move them into system domain again. 
*/ -static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn, - const struct mmu_notifier_range *range) +static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn, + const struct mmu_notifier_range *range, + unsigned long cur_seq) { - struct radeon_mn *rmn =3D container_of(mn, struct radeon_mn, mn); + struct radeon_bo *bo =3D container_of(mn, struct radeon_bo, notifier); struct ttm_operation_ctx ctx =3D { false, false }; - struct interval_tree_node *it; - unsigned long end; - int ret =3D 0; - - /* notification is exclusive, but interval is inclusive */ - end =3D range->end - 1; - - /* TODO we should be able to split locking for interval tree and - * the tear down. - */ - if (mmu_notifier_range_blockable(range)) - mutex_lock(&rmn->lock); - else if (!mutex_trylock(&rmn->lock)) - return -EAGAIN; - - it =3D interval_tree_iter_first(&rmn->objects, range->start, end); - while (it) { - struct radeon_mn_node *node; - struct radeon_bo *bo; - long r; - - if (!mmu_notifier_range_blockable(range)) { - ret =3D -EAGAIN; - goto out_unlock; - } - - node =3D container_of(it, struct radeon_mn_node, it); - it =3D interval_tree_iter_next(it, range->start, end); + long r; =20 - list_for_each_entry(bo, &node->bos, mn_list) { + if (!bo->tbo.ttm || bo->tbo.ttm->state !=3D tt_bound) + return true; =20 - if (!bo->tbo.ttm || bo->tbo.ttm->state !=3D tt_bound) - continue; + if (!mmu_notifier_range_blockable(range)) + return false; =20 - r =3D radeon_bo_reserve(bo, true); - if (r) { - DRM_ERROR("(%ld) failed to reserve user bo\n", r); - continue; - } - - r =3D dma_resv_wait_timeout_rcu(bo->tbo.base.resv, - true, false, MAX_SCHEDULE_TIMEOUT); - if (r <=3D 0) - DRM_ERROR("(%ld) failed to wait for user bo\n", r); - - radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU); - r =3D ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); - if (r) - DRM_ERROR("(%ld) failed to validate user bo\n", r); - - radeon_bo_unreserve(bo); - } + r =3D radeon_bo_reserve(bo, true); + if (r) { + DRM_ERROR("(%ld) failed to reserve user bo\n", r); + return true; } -=09 -out_unlock: - mutex_unlock(&rmn->lock); - - return ret; -} - -static void radeon_mn_release(struct mmu_notifier *mn, struct mm_struct *m= m) -{ - struct mmu_notifier_range range =3D { - .mm =3D mm, - .start =3D 0, - .end =3D ULONG_MAX, - .flags =3D 0, - .event =3D MMU_NOTIFY_UNMAP, - }; - - radeon_mn_invalidate_range_start(mn, &range); -} - -static struct mmu_notifier *radeon_mn_alloc_notifier(struct mm_struct *mm) -{ - struct radeon_mn *rmn; =20 - rmn =3D kzalloc(sizeof(*rmn), GFP_KERNEL); - if (!rmn) - return ERR_PTR(-ENOMEM); + r =3D dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false, + MAX_SCHEDULE_TIMEOUT); + if (r <=3D 0) + DRM_ERROR("(%ld) failed to wait for user bo\n", r); =20 - mutex_init(&rmn->lock); - rmn->objects =3D RB_ROOT_CACHED; - return &rmn->mn; -} + radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU); + r =3D ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); + if (r) + DRM_ERROR("(%ld) failed to validate user bo\n", r); =20 -static void radeon_mn_free_notifier(struct mmu_notifier *mn) -{ - kfree(container_of(mn, struct radeon_mn, mn)); + radeon_bo_unreserve(bo); + return true; } =20 -static const struct mmu_notifier_ops radeon_mn_ops =3D { - .release =3D radeon_mn_release, - .invalidate_range_start =3D radeon_mn_invalidate_range_start, - .alloc_notifier =3D radeon_mn_alloc_notifier, - .free_notifier =3D radeon_mn_free_notifier, +static const struct mmu_interval_notifier_ops radeon_mn_ops =3D { + .invalidate =3D 
radeon_mn_invalidate, }; =20 /** @@ -174,51 +94,20 @@ static const struct mmu_notifier_ops radeon_mn_ops =3D= { */ int radeon_mn_register(struct radeon_bo *bo, unsigned long addr) { - unsigned long end =3D addr + radeon_bo_size(bo) - 1; - struct mmu_notifier *mn; - struct radeon_mn *rmn; - struct radeon_mn_node *node =3D NULL; - struct list_head bos; - struct interval_tree_node *it; - - mn =3D mmu_notifier_get(&radeon_mn_ops, current->mm); - if (IS_ERR(mn)) - return PTR_ERR(mn); - rmn =3D container_of(mn, struct radeon_mn, mn); - - INIT_LIST_HEAD(&bos); - - mutex_lock(&rmn->lock); - - while ((it =3D interval_tree_iter_first(&rmn->objects, addr, end))) { - kfree(node); - node =3D container_of(it, struct radeon_mn_node, it); - interval_tree_remove(&node->it, &rmn->objects); - addr =3D min(it->start, addr); - end =3D max(it->last, end); - list_splice(&node->bos, &bos); - } - - if (!node) { - node =3D kmalloc(sizeof(struct radeon_mn_node), GFP_KERNEL); - if (!node) { - mutex_unlock(&rmn->lock); - return -ENOMEM; - } - } - - bo->mn =3D rmn; - - node->it.start =3D addr; - node->it.last =3D end; - INIT_LIST_HEAD(&node->bos); - list_splice(&bos, &node->bos); - list_add(&bo->mn_list, &node->bos); - - interval_tree_insert(&node->it, &rmn->objects); - - mutex_unlock(&rmn->lock); - + int ret; + + ret =3D mmu_interval_notifier_insert(&bo->notifier, current->mm, addr, + radeon_bo_size(bo), &radeon_mn_ops); + if (ret) + return ret; + + /* + * FIXME: radeon appears to allow get_user_pages to run during + * invalidate_range_start/end, which is not a safe way to read the + * PTEs. It should use the mmu_interval_read_begin() scheme around the + * get_user_pages to ensure that the PTEs are read properly + */ + mmu_interval_read_begin(&bo->notifier); return 0; } =20 @@ -231,27 +120,8 @@ int radeon_mn_register(struct radeon_bo *bo, unsigned = long addr) */ void radeon_mn_unregister(struct radeon_bo *bo) { - struct radeon_mn *rmn =3D bo->mn; - struct list_head *head; - - if (!rmn) + if (!bo->notifier.mm) return; - - mutex_lock(&rmn->lock); - /* save the next list entry for later */ - head =3D bo->mn_list.next; - - list_del(&bo->mn_list); - - if (list_empty(head)) { - struct radeon_mn_node *node; - node =3D container_of(head, struct radeon_mn_node, bos); - interval_tree_remove(&node->it, &rmn->objects); - kfree(node); - } - - mutex_unlock(&rmn->lock); - - mmu_notifier_put(&rmn->mn); - bo->mn =3D NULL; + mmu_interval_notifier_remove(&bo->notifier); + bo->notifier.mm =3D NULL; } --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590254; cv=none; d=zoho.com; s=zohoarc; b=G7Gq5StBPCbjviQqww8f9zBdRvwiSRdzAPdVSwPWH9DQQTcWSfmnXOZuKOksILu2ekFrdHyxhvnOBpQ1MGbF5tm2ox467RC3Kk1tAYWWSe9hbPY134UhaZC9cgvzp8gY77N03HHGMiQhXt1WaiRiEoFLvcZzYiMQmkB47XWNkxE= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590254; 
h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=ze0XO7ULX18m5L0y08y31c56DHbxRh1KSke4rIMHrOA=; b=bX/z+04RELVujoiieeQQUDCeeJFz+9jojuQFFxcD9ZdKEDC3azCVI76gW211y/dJjDmPexTESAISRnEvBI3jpQU5RpLYuqYIsvagHQUVwTjtZtXlCuDIb4QDSnM/hIc9c57F0StD4jGGBnk9/zq/roS0hBFWou/mIbg2w4PbLpU= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590254988821.6815531965002; Tue, 12 Nov 2019 12:24:14 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUcgx-00034C-17; Tue, 12 Nov 2019 20:23:11 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUcgv-00033f-E3 for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:09 +0000 Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 337b22a4-058a-11ea-b678-bc764e2007e4; Tue, 12 Nov 2019 20:22:51 +0000 (UTC) Received: by mail-qk1-x744.google.com with SMTP id z23so15661900qkj.10 for ; Tue, 12 Nov 2019 12:22:51 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. [142.162.113.180]) by smtp.gmail.com with ESMTPSA id h27sm11695982qtk.37.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003kG-Fs; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 337b22a4-058a-11ea-b678-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=pTAE3C6UkPphbB8bkoC7OIstj7M49YCEG83oq3fCqt8=; b=b6CK4yU3UoL1CgOKkjCuHrBDiEHENtFT8feX58DmCzY3bqiu4ZK/Od2V3upqZFZ5q7 zm5H9MRQbRsh0K4H8EnHPsoAOVpkNGbeQTWESbKT7P1dzjMaa36fJQDItWwIT4VRKg3R Gas8YxYUJLlyuUQYpfr0WFPgm8oZMHWzG77Crk8koct7WiOYQzWm92xQWPw7pPLdfWH2 NYRVu5/wK0lZF5PCsWAVQL7hRAsVZrhf1u7s5+qbe9Y9tLc5b1GA1xrg+xi8AQKDdxZe ibyjaFHn3hUIJ+4muJB7bymx2DOpJUqTbrvG73Ef8vCtg1rq/D41eg/PfUtqDhj5VWop W2PA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=pTAE3C6UkPphbB8bkoC7OIstj7M49YCEG83oq3fCqt8=; b=hX6aow4pN46bxvXXPSsAv1Uo8Sez/ZEe/zdiH6EPXU308iQCh5sSHYX55IjNmdz6qO ZT3mTm2bfwVBeKz67D6rcnMhs7s6t55OVXFPo3J42vbT7EY06F7latyOk5+5e8lv/0Vp s8VbvaKUrUtuWEcSnmQXuy9k4MO9Yzf9VacmPHeO29fX3ap3WqiCncDbTU7e3++wjNcN e+CP0RMuLu+jooN0ruWdVn/RTES82UYyvK8RIp8MCp9komUeAN8eUnvQxqJa503/b7+m 63+ozpblCLsv8dd3celUvR8wt0RpzJvcydbo1Iznzyq5r8VckIbHSC+dnXiqKZHVwIo0 RukA== X-Gm-Message-State: APjAAAUDkDOpq1Ey1U/5pkw7fOfwGcR8uepFy49AdQRZWD/ZxMFEuARU XsCg4r/AZ2pnmtEgzMukw1HEkg== X-Google-Smtp-Source: APXvYqzaFmdsuGMd+sc3S8UlqP5hfIuFeBKJzgQIPlueQUEvMNlEoPp1r+TNgyLS5EenfDQ/yuhiOA== X-Received: by 2002:a37:9e89:: with SMTP id h131mr17485213qke.477.1573590171112; Tue, 12 Nov 2019 12:22:51 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, 
Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:25 -0400 Message-Id: <20191112202231.3856-9-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 08/14] nouveau: use mmu_notifier directly for invalidate_range_start X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe There is no reason to get the invalidate_range_start() callback via an indirection through hmm_mirror, just register a normal notifier directly. Tested-by: Ralph Campbell Signed-off-by: Jason Gunthorpe --- drivers/gpu/drm/nouveau/nouveau_svm.c | 95 ++++++++++++++++++--------- 1 file changed, 63 insertions(+), 32 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouvea= u/nouveau_svm.c index 668d4bd0c118f1..577f8811925a59 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -88,6 +88,7 @@ nouveau_ivmm_find(struct nouveau_svm *svm, u64 inst) } =20 struct nouveau_svmm { + struct mmu_notifier notifier; struct nouveau_vmm *vmm; struct { unsigned long start; @@ -96,7 +97,6 @@ struct nouveau_svmm { =20 struct mutex mutex; =20 - struct mm_struct *mm; struct hmm_mirror mirror; }; =20 @@ -251,10 +251,11 @@ nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u6= 4 start, u64 limit) } =20 static int -nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror, - const struct mmu_notifier_range *update) +nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn, + const struct mmu_notifier_range *update) { - struct nouveau_svmm *svmm =3D container_of(mirror, typeof(*svmm), mirror); + struct nouveau_svmm *svmm =3D + container_of(mn, struct nouveau_svmm, notifier); unsigned long start =3D update->start; unsigned long limit =3D update->end; =20 @@ -264,6 +265,9 @@ nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirr= or *mirror, SVMM_DBG(svmm, "invalidate %016lx-%016lx", start, limit); =20 mutex_lock(&svmm->mutex); + if (unlikely(!svmm->vmm)) + goto out; + if (limit > svmm->unmanaged.start && start < svmm->unmanaged.limit) { if (start < svmm->unmanaged.start) { nouveau_svmm_invalidate(svmm, start, @@ -273,19 +277,31 @@ nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mi= rror *mirror, } =20 nouveau_svmm_invalidate(svmm, start, limit); + +out: mutex_unlock(&svmm->mutex); return 0; } =20 -static void -nouveau_svmm_release(struct hmm_mirror *mirror) +static void nouveau_svmm_free_notifier(struct mmu_notifier *mn) +{ + kfree(container_of(mn, struct nouveau_svmm, notifier)); +} + +static const struct mmu_notifier_ops nouveau_mn_ops =3D { + .invalidate_range_start =3D nouveau_svmm_invalidate_range_start, + 
.free_notifier =3D nouveau_svmm_free_notifier, +}; + +static int +nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror, + const struct mmu_notifier_range *update) { + return 0; } =20 -static const struct hmm_mirror_ops -nouveau_svmm =3D { +static const struct hmm_mirror_ops nouveau_svmm =3D { .sync_cpu_device_pagetables =3D nouveau_svmm_sync_cpu_device_pagetables, - .release =3D nouveau_svmm_release, }; =20 void @@ -294,7 +310,10 @@ nouveau_svmm_fini(struct nouveau_svmm **psvmm) struct nouveau_svmm *svmm =3D *psvmm; if (svmm) { hmm_mirror_unregister(&svmm->mirror); - kfree(*psvmm); + mutex_lock(&svmm->mutex); + svmm->vmm =3D NULL; + mutex_unlock(&svmm->mutex); + mmu_notifier_put(&svmm->notifier); *psvmm =3D NULL; } } @@ -320,7 +339,7 @@ nouveau_svmm_init(struct drm_device *dev, void *data, mutex_lock(&cli->mutex); if (cli->svm.cli) { ret =3D -EBUSY; - goto done; + goto out_free; } =20 /* Allocate a new GPU VMM that can support SVM (managed by the @@ -335,24 +354,33 @@ nouveau_svmm_init(struct drm_device *dev, void *data, .fault_replay =3D true, }, sizeof(struct gp100_vmm_v0), &cli->svm.vmm); if (ret) - goto done; + goto out_free; =20 - /* Enable HMM mirroring of CPU address-space to VMM. */ - svmm->mm =3D get_task_mm(current); - down_write(&svmm->mm->mmap_sem); + down_write(¤t->mm->mmap_sem); svmm->mirror.ops =3D &nouveau_svmm; - ret =3D hmm_mirror_register(&svmm->mirror, svmm->mm); - if (ret =3D=3D 0) { - cli->svm.svmm =3D svmm; - cli->svm.cli =3D cli; - } - up_write(&svmm->mm->mmap_sem); - mmput(svmm->mm); + ret =3D hmm_mirror_register(&svmm->mirror, current->mm); + if (ret) + goto out_mm_unlock; =20 -done: + svmm->notifier.ops =3D &nouveau_mn_ops; + ret =3D __mmu_notifier_register(&svmm->notifier, current->mm); if (ret) - nouveau_svmm_fini(&svmm); + goto out_hmm_unregister; + /* Note, ownership of svmm transfers to mmu_notifier */ + + cli->svm.svmm =3D svmm; + cli->svm.cli =3D cli; + up_write(¤t->mm->mmap_sem); mutex_unlock(&cli->mutex); + return 0; + +out_hmm_unregister: + hmm_mirror_unregister(&svmm->mirror); +out_mm_unlock: + up_write(¤t->mm->mmap_sem); +out_free: + mutex_unlock(&cli->mutex); + kfree(svmm); return ret; } =20 @@ -494,12 +522,12 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct= hmm_range *range) =20 ret =3D hmm_range_register(range, &svmm->mirror); if (ret) { - up_read(&svmm->mm->mmap_sem); + up_read(&svmm->notifier.mm->mmap_sem); return (int)ret; } =20 if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) { - up_read(&svmm->mm->mmap_sem); + up_read(&svmm->notifier.mm->mmap_sem); return -EBUSY; } =20 @@ -507,7 +535,7 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct h= mm_range *range) if (ret <=3D 0) { if (ret =3D=3D 0) ret =3D -EBUSY; - up_read(&svmm->mm->mmap_sem); + up_read(&svmm->notifier.mm->mmap_sem); hmm_range_unregister(range); return ret; } @@ -587,12 +615,15 @@ nouveau_svm_fault(struct nvif_notify *notify) args.i.p.version =3D 0; =20 for (fi =3D 0; fn =3D fi + 1, fi < buffer->fault_nr; fi =3D fn) { + struct mm_struct *mm; + /* Cancel any faults from non-SVM channels. */ if (!(svmm =3D buffer->fault[fi]->svmm)) { nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); continue; } SVMM_DBG(svmm, "addr %016llx", buffer->fault[fi]->addr); + mm =3D svmm->notifier.mm; =20 /* We try and group handling of faults within a small * window into a single update. @@ -609,11 +640,11 @@ nouveau_svm_fault(struct nvif_notify *notify) /* Intersect fault window with the CPU VMA, cancelling * the fault if the address is invalid. 
*/ - down_read(&svmm->mm->mmap_sem); - vma =3D find_vma_intersection(svmm->mm, start, limit); + down_read(&mm->mmap_sem); + vma =3D find_vma_intersection(mm, start, limit); if (!vma) { SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit); - up_read(&svmm->mm->mmap_sem); + up_read(&mm->mmap_sem); nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); continue; } @@ -623,7 +654,7 @@ nouveau_svm_fault(struct nvif_notify *notify) =20 if (buffer->fault[fi]->addr !=3D start) { SVMM_ERR(svmm, "addr %016llx", buffer->fault[fi]->addr); - up_read(&svmm->mm->mmap_sem); + up_read(&mm->mmap_sem); nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); continue; } @@ -704,7 +735,7 @@ nouveau_svm_fault(struct nvif_notify *notify) NULL); svmm->vmm->vmm.object.client->super =3D false; mutex_unlock(&svmm->mutex); - up_read(&svmm->mm->mmap_sem); + up_read(&mm->mmap_sem); } =20 /* Cancel any faults in the window whose pages didn't manage --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590247; cv=none; d=zoho.com; s=zohoarc; b=iNlHtkbZuigZp++ir/K1yFwi5rMFcerexiacjm7Xug2V/7me7Rnqn264yuobbF63IE/aK8bCCRykuy/4Z+f7+9WKb++rv6QdoAGXBtMUR5qDZGRodo2BklWXZiperal/wL0zObv9dQv+Kujvjh4NGeuH/CvjO9HlnMGS0lFKcDk= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590247; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=tiYEw2/UgrWnPCVIgNjwBnkaK5MwBcZg5+Mbpybs+nQ=; b=LeHiAPvMevjHWtuRyzpe6zu6/mFcx4fQnOS3SfqVbkycptSTu0x6RraUMj68T/5E7r/4bBLAZMt0KBy1pe6gcstZtGPC/dTXSJbiD4kNF2jCN8oDZeKGKjpUUPEGdEvNh6jJ17aYmAHNZBXkjeUSatbBTzLSpYScBoq2T6sos/M= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590247536993.7686329881393; Tue, 12 Nov 2019 12:24:07 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUcgn-00030q-8n; Tue, 12 Nov 2019 20:23:01 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUcgl-00030T-Db for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:22:59 +0000 Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 31f45388-058a-11ea-b678-bc764e2007e4; Tue, 12 Nov 2019 20:22:49 +0000 (UTC) Received: by mail-qk1-x742.google.com with SMTP id z23so15661767qkj.10 for ; Tue, 12 Nov 2019 12:22:49 -0800 (PST) Received: from ziepe.ca 
(hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. [142.162.113.180]) by smtp.gmail.com with ESMTPSA id x11sm11977678qtk.93.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003kM-HA; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 31f45388-058a-11ea-b678-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=vqvN4B+GZDHJfLAROKnQtObwYcG8NnWCYaeQr/r4Yjg=; b=i4FIIV1i6W5YGmBKiivnhgSWn7eyQivhbmggGezQ1djWPpk+dSxp4UPWSbxX8yNZIF bgYyGPFz71/fPKWkTVutZ4CbeB+KhFw+4GIMak7HwQvDMnRpsTzv8rjTFlw20BMZ1x1+ fcHS+VrPml4IoTKhndxWOk4Ww2cMeEJjs+EnpZwz5rAJKDPj4CiUjaOL6ZykOdsHkQmw U7deZJScunIlf/k9Pwk7lDnBwqtQx3mPakMBr19FjvFEaErQzKw7cagTQpFRY15G4lOx NqjtWrp6HazQ5Z8uTZzHt8aZ7hCUm35K2JauHR7vB0jwOrcAyNPuBHOL6IxqQ47IHVo6 Ivjg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=vqvN4B+GZDHJfLAROKnQtObwYcG8NnWCYaeQr/r4Yjg=; b=NbhHNchQN0bnNPJ7g0FyA5O4+C4CWeY7AmiENeYWdbFoENJsJLQYbBcxbg1KcRSXm2 RLA+SNbCCXX0lJdADllj5DVrrXcvyOJGhpIu7ZsqWr+JCKLxV2T00AD9ifTgn9qTXIjv 6Sf9SDhjKwjMfRuj+RmYJyjiMWG9APZTLwPjosFunR/w9SNEZvGuqqyI2qEZvs2GQDkj 3JkXwYYREMdshvhg9AT6LMjy5GOVGtkTOq+4gqC/cdrjDsZ10Ihq7X9PwdLl2THqSz6k 3u3pclmms/bxyn69cRXiJUMAnDcHtnuzpeObGprxXp4YwejkMCtRcSaR4ZlnuQjbRZf4 H/Rg== X-Gm-Message-State: APjAAAU6mIJLzYPC031aNPBdVoFVlnPDapegL9DN0ifgW7cq/vhTeOfx 0GK4MbcmncZCyJhUEcWz92vxRg== X-Google-Smtp-Source: APXvYqzFu20HWA4PgtAhJHB+bLJq8aqbJo1eqsXCE4xmuI4E5wY3L+M+VCzEsjkRVcXxyC6V5RzbGA== X-Received: by 2002:a37:4f10:: with SMTP id d16mr17145608qkb.80.1573590168615; Tue, 12 Nov 2019 12:22:48 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:26 -0400 Message-Id: <20191112202231.3856-10-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 09/14] nouveau: use mmu_interval_notifier instead of hmm_mirror X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe Remove the hmm_mirror object and use the mmu_interval_notifier API instead for the range, and use the normal mmu_notifier API for the general invalidation callback. While here re-organize the pagefault path so the locking pattern is clear. 
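For readers unfamiliar with the collision-retry scheme this converts to, the shape of the reorganized pagefault path is sketched below. This is a minimal illustration condensed from the nouveau_range_fault() hunk in the diff; struct my_ctx, my_fault_and_map() and the timeout-free loop are illustrative assumptions, not part of the patch:

    #include <linux/hmm.h>
    #include <linux/mmu_notifier.h>
    #include <linux/sched/mm.h>

    /* Hypothetical driver context; the real code keeps these in nouveau_svmm */
    struct my_ctx {
            struct mutex lock;      /* also taken by the invalidate() callback */
            struct mm_struct *mm;
    };

    static int my_fault_and_map(struct my_ctx *ctx, struct hmm_range *range)
    {
            struct mmu_interval_notifier *mni = range->notifier;
            long ret;

            while (true) {
                    /* Sample the invalidation sequence before faulting pages */
                    range->notifier_seq = mmu_interval_read_begin(mni);

                    down_read(&ctx->mm->mmap_sem);
                    ret = hmm_range_fault(range, 0);
                    up_read(&ctx->mm->mmap_sem);
                    if (ret <= 0) {
                            if (ret == 0 || ret == -EBUSY)
                                    continue;       /* collided, try again */
                            return ret;
                    }

                    /*
                     * ctx->lock is also taken by the invalidate() callback, so
                     * holding it while the sequence is unchanged proves the
                     * snapshot of the PTEs is still current.
                     */
                    mutex_lock(&ctx->lock);
                    if (mmu_interval_read_retry(mni, range->notifier_seq)) {
                            mutex_unlock(&ctx->lock);
                            continue;
                    }
                    break;
            }

            /* ... program the device page tables while still holding ctx->lock ... */
            mutex_unlock(&ctx->lock);
            return 0;
    }

The actual nouveau_range_fault() below additionally bounds the loop with HMM_RANGE_DEFAULT_TIMEOUT and converts the faulted pfns with nouveau_dmem_convert_pfn() before issuing the ioctl to the GPU.
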
nouveau is the only driver that uses a temporary range object and instead forwards nearly every invalidation range directly to the HW. While this is not how the mmu_interval_notifier was intended to be used, the overheads on the pagefaulting path are similar to the existing hmm_mirror version. Particularly since the interval tree will be small. Tested-by: Ralph Campbell Signed-off-by: Jason Gunthorpe --- drivers/gpu/drm/nouveau/nouveau_svm.c | 179 ++++++++++++++------------ 1 file changed, 99 insertions(+), 80 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouvea= u/nouveau_svm.c index 577f8811925a59..df9bf1fd1bc0be 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -96,8 +96,6 @@ struct nouveau_svmm { } unmanaged; =20 struct mutex mutex; - - struct hmm_mirror mirror; }; =20 #define SVMM_DBG(s,f,a...) = \ @@ -293,23 +291,11 @@ static const struct mmu_notifier_ops nouveau_mn_ops = =3D { .free_notifier =3D nouveau_svmm_free_notifier, }; =20 -static int -nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror, - const struct mmu_notifier_range *update) -{ - return 0; -} - -static const struct hmm_mirror_ops nouveau_svmm =3D { - .sync_cpu_device_pagetables =3D nouveau_svmm_sync_cpu_device_pagetables, -}; - void nouveau_svmm_fini(struct nouveau_svmm **psvmm) { struct nouveau_svmm *svmm =3D *psvmm; if (svmm) { - hmm_mirror_unregister(&svmm->mirror); mutex_lock(&svmm->mutex); svmm->vmm =3D NULL; mutex_unlock(&svmm->mutex); @@ -357,15 +343,10 @@ nouveau_svmm_init(struct drm_device *dev, void *data, goto out_free; =20 down_write(¤t->mm->mmap_sem); - svmm->mirror.ops =3D &nouveau_svmm; - ret =3D hmm_mirror_register(&svmm->mirror, current->mm); - if (ret) - goto out_mm_unlock; - svmm->notifier.ops =3D &nouveau_mn_ops; ret =3D __mmu_notifier_register(&svmm->notifier, current->mm); if (ret) - goto out_hmm_unregister; + goto out_mm_unlock; /* Note, ownership of svmm transfers to mmu_notifier */ =20 cli->svm.svmm =3D svmm; @@ -374,8 +355,6 @@ nouveau_svmm_init(struct drm_device *dev, void *data, mutex_unlock(&cli->mutex); return 0; =20 -out_hmm_unregister: - hmm_mirror_unregister(&svmm->mirror); out_mm_unlock: up_write(¤t->mm->mmap_sem); out_free: @@ -503,43 +482,90 @@ nouveau_svm_fault_cache(struct nouveau_svm *svm, fault->inst, fault->addr, fault->access); } =20 -static inline bool -nouveau_range_done(struct hmm_range *range) +struct svm_notifier { + struct mmu_interval_notifier notifier; + struct nouveau_svmm *svmm; +}; + +static bool nouveau_svm_range_invalidate(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq) { - bool ret =3D hmm_range_valid(range); + struct svm_notifier *sn =3D + container_of(mni, struct svm_notifier, notifier); =20 - hmm_range_unregister(range); - return ret; + /* + * serializes the update to mni->invalidate_seq done by caller and + * prevents invalidation of the PTE from progressing while HW is being + * programmed. This is very hacky and only works because the normal + * notifier that does invalidation is always called after the range + * notifier. 
+ */ + if (mmu_notifier_range_blockable(range)) + mutex_lock(&sn->svmm->mutex); + else if (!mutex_trylock(&sn->svmm->mutex)) + return false; + mmu_interval_set_seq(mni, cur_seq); + mutex_unlock(&sn->svmm->mutex); + return true; } =20 -static int -nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range) +static const struct mmu_interval_notifier_ops nouveau_svm_mni_ops =3D { + .invalidate =3D nouveau_svm_range_invalidate, +}; + +static int nouveau_range_fault(struct nouveau_svmm *svmm, + struct nouveau_drm *drm, void *data, u32 size, + u64 *pfns, struct svm_notifier *notifier) { + unsigned long timeout =3D + jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); + /* Have HMM fault pages within the fault window to the GPU. */ + struct hmm_range range =3D { + .notifier =3D ¬ifier->notifier, + .start =3D notifier->notifier.interval_tree.start, + .end =3D notifier->notifier.interval_tree.last + 1, + .pfns =3D pfns, + .flags =3D nouveau_svm_pfn_flags, + .values =3D nouveau_svm_pfn_values, + .pfn_shift =3D NVIF_VMM_PFNMAP_V0_ADDR_SHIFT, + }; + struct mm_struct *mm =3D notifier->notifier.mm; long ret; =20 - range->default_flags =3D 0; - range->pfn_flags_mask =3D -1UL; + while (true) { + if (time_after(jiffies, timeout)) + return -EBUSY; =20 - ret =3D hmm_range_register(range, &svmm->mirror); - if (ret) { - up_read(&svmm->notifier.mm->mmap_sem); - return (int)ret; - } + range.notifier_seq =3D mmu_interval_read_begin(range.notifier); + range.default_flags =3D 0; + range.pfn_flags_mask =3D -1UL; + down_read(&mm->mmap_sem); + ret =3D hmm_range_fault(&range, 0); + up_read(&mm->mmap_sem); + if (ret <=3D 0) { + if (ret =3D=3D 0 || ret =3D=3D -EBUSY) + continue; + return ret; + } =20 - if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) { - up_read(&svmm->notifier.mm->mmap_sem); - return -EBUSY; + mutex_lock(&svmm->mutex); + if (mmu_interval_read_retry(range.notifier, + range.notifier_seq)) { + mutex_unlock(&svmm->mutex); + continue; + } + break; } =20 - ret =3D hmm_range_fault(range, 0); - if (ret <=3D 0) { - if (ret =3D=3D 0) - ret =3D -EBUSY; - up_read(&svmm->notifier.mm->mmap_sem); - hmm_range_unregister(range); - return ret; - } - return 0; + nouveau_dmem_convert_pfn(drm, &range); + + svmm->vmm->vmm.object.client->super =3D true; + ret =3D nvif_object_ioctl(&svmm->vmm->vmm.object, data, size, NULL); + svmm->vmm->vmm.object.client->super =3D false; + mutex_unlock(&svmm->mutex); + + return ret; } =20 static int @@ -559,7 +585,6 @@ nouveau_svm_fault(struct nvif_notify *notify) } i; u64 phys[16]; } args; - struct hmm_range range; struct vm_area_struct *vma; u64 inst, start, limit; int fi, fn, pi, fill; @@ -615,6 +640,7 @@ nouveau_svm_fault(struct nvif_notify *notify) args.i.p.version =3D 0; =20 for (fi =3D 0; fn =3D fi + 1, fi < buffer->fault_nr; fi =3D fn) { + struct svm_notifier notifier; struct mm_struct *mm; =20 /* Cancel any faults from non-SVM channels. */ @@ -623,7 +649,6 @@ nouveau_svm_fault(struct nvif_notify *notify) continue; } SVMM_DBG(svmm, "addr %016llx", buffer->fault[fi]->addr); - mm =3D svmm->notifier.mm; =20 /* We try and group handling of faults within a small * window into a single update. 
@@ -637,6 +662,12 @@ nouveau_svm_fault(struct nvif_notify *notify) start =3D max_t(u64, start, svmm->unmanaged.limit); SVMM_DBG(svmm, "wndw %016llx-%016llx", start, limit); =20 + mm =3D svmm->notifier.mm; + if (!mmget_not_zero(mm)) { + nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); + continue; + } + /* Intersect fault window with the CPU VMA, cancelling * the fault if the address is invalid. */ @@ -645,16 +676,18 @@ nouveau_svm_fault(struct nvif_notify *notify) if (!vma) { SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit); up_read(&mm->mmap_sem); + mmput(mm); nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); continue; } start =3D max_t(u64, start, vma->vm_start); limit =3D min_t(u64, limit, vma->vm_end); + up_read(&mm->mmap_sem); SVMM_DBG(svmm, "wndw %016llx-%016llx", start, limit); =20 if (buffer->fault[fi]->addr !=3D start) { SVMM_ERR(svmm, "addr %016llx", buffer->fault[fi]->addr); - up_read(&mm->mmap_sem); + mmput(mm); nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); continue; } @@ -710,33 +743,19 @@ nouveau_svm_fault(struct nvif_notify *notify) args.i.p.addr, args.i.p.addr + args.i.p.size, fn - fi); =20 - /* Have HMM fault pages within the fault window to the GPU. */ - range.start =3D args.i.p.addr; - range.end =3D args.i.p.addr + args.i.p.size; - range.pfns =3D args.phys; - range.flags =3D nouveau_svm_pfn_flags; - range.values =3D nouveau_svm_pfn_values; - range.pfn_shift =3D NVIF_VMM_PFNMAP_V0_ADDR_SHIFT; -again: - ret =3D nouveau_range_fault(svmm, &range); - if (ret =3D=3D 0) { - mutex_lock(&svmm->mutex); - if (!nouveau_range_done(&range)) { - mutex_unlock(&svmm->mutex); - goto again; - } - - nouveau_dmem_convert_pfn(svm->drm, &range); - - svmm->vmm->vmm.object.client->super =3D true; - ret =3D nvif_object_ioctl(&svmm->vmm->vmm.object, - &args, sizeof(args.i) + - pi * sizeof(args.phys[0]), - NULL); - svmm->vmm->vmm.object.client->super =3D false; - mutex_unlock(&svmm->mutex); - up_read(&mm->mmap_sem); + notifier.svmm =3D svmm; + ret =3D mmu_interval_notifier_insert(¬ifier.notifier, + svmm->notifier.mm, + args.i.p.addr, args.i.p.size, + &nouveau_svm_mni_ops); + if (!ret) { + ret =3D nouveau_range_fault( + svmm, svm->drm, &args, + sizeof(args.i) + pi * sizeof(args.phys[0]), + args.phys, ¬ifier); + mmu_interval_notifier_remove(¬ifier.notifier); } + mmput(mm); =20 /* Cancel any faults in the window whose pages didn't manage * to keep their valid bit, or stay writeable when required. 
@@ -745,10 +764,10 @@ nouveau_svm_fault(struct nvif_notify *notify) */ while (fi < fn) { struct nouveau_svm_fault *fault =3D buffer->fault[fi++]; - pi =3D (fault->addr - range.start) >> PAGE_SHIFT; + pi =3D (fault->addr - args.i.p.addr) >> PAGE_SHIFT; if (ret || - !(range.pfns[pi] & NVIF_VMM_PFNMAP_V0_V) || - (!(range.pfns[pi] & NVIF_VMM_PFNMAP_V0_W) && + !(args.phys[pi] & NVIF_VMM_PFNMAP_V0_V) || + (!(args.phys[pi] & NVIF_VMM_PFNMAP_V0_W) && fault->access !=3D 0 && fault->access !=3D 3)) { nouveau_svm_fault_cancel_fault(svm, fault); continue; --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590272; cv=none; d=zoho.com; s=zohoarc; b=DvfC5Asb216BCGHudYGl/u3q72AORJOzYmYlsf/gILHoHySQRgDn1TJV2XPHJ4JxBRAiaTS8VnbPaajYCAODPfle6m68hYo78FDakXZKmfQMDjaFPL6Y2hlhE6p8Ckt1zufuVoKEOerAZKaO3D81dMONkJDLlyRmzXNPlivBMQo= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590272; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=xfrCSp5D/6sHwuPhkBUOXNE6M6jHt52LvL01qorTn+o=; b=hdg8VCsq4035lnHLzyoUjPXXFLtvXKIR5b5gqJer9LEejTQCuGp7gE/mb56lCYUeV//mj5A0JPJoVifCiA7Fx9U17gBJN3dcOG18PFOqzb9U+6Pb4MGYChUH6CHdqJ5vZBdpd4wuG1tp3o3vzX612tikfXlXHBCgslLULCYexZw= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590272210359.7834237266204; Tue, 12 Nov 2019 12:24:32 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchG-0003Ek-Kb; Tue, 12 Nov 2019 20:23:30 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchF-0003E9-EX for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:29 +0000 Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 348b9ffc-058a-11ea-adbe-bc764e2007e4; Tue, 12 Nov 2019 20:22:53 +0000 (UTC) Received: by mail-qt1-x843.google.com with SMTP id o3so21223868qtj.8 for ; Tue, 12 Nov 2019 12:22:53 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. 
[142.162.113.180]) by smtp.gmail.com with ESMTPSA id a70sm4290549qkg.1.2019.11.12.12.22.47 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003kT-JQ; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 348b9ffc-058a-11ea-adbe-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=LnkAv+iMLBMvlxFpCUEV0XkR28K5MHEmxHckti3wqpM=; b=j2l7IUJdqB5K/eDQVUavlGzX7aJ/ZiPkS1yEN2IJtHVZrrCehSLhfQr7M+8cMjxgCr ZscponRTH+KICL748W2mp4Sqz3kyUnHh62I2CS0yQvwd24pGk5OnQ3+I/QxcyS7voy6J GvhBrTWztilOuY+S2RfoGj+IE1GYk7niaam0rTVWWRk2c/xKYBa3LSRGpsqijaidDvht qXeYOwWb+eFndzehMcQtUDxSnuQAzllrqQMMQa/pIhD5iEHz9mX5deTKSKMKCEXrTIFD ZjwFpobxsN/NtOgIGZiooNAHBZUyhl3c/c1AvOqey8/ZP0YvuN7STvUkXVmoCwyCq0j3 iOJg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=LnkAv+iMLBMvlxFpCUEV0XkR28K5MHEmxHckti3wqpM=; b=VbSBYny5aFT6+LdXiGdeygrFv3cSKY06cs46w2eGwqRSsHfBAOQSc8TEz3Z1+Yjg0t 8W29cnCFt5fNgpgyZT4pQ0O2kRNbrG+3GoebhSwrFgQ9/WuS1DZ4S0/yvHbRmxV5DIOi UtKmrpcpUYet61ms+Xb0sbocIzFcXJbyvFnPgrUsYOUoMueZBQ+Y3U2twWEOnhO59tQe t1XQofs0DrCytAYedt2XS0IiMoTvfycLi6FF+U9zceYxPqnNTJtcAZygotqc3CMWSM1c V8oBV7iLgN2UHiej4sodUOfJsK5nAENYtvWQM1s1cdrBzKc0RUkx72/bbzKcyYLr0/re zCWw== X-Gm-Message-State: APjAAAXPkgfJBcdVymJrYGubBLeX0iQ1K/agZOiYHKVZVWttnXqsPybp QbEWAusOqWRYw9UA9dPaSE1J1RQjqgc= X-Google-Smtp-Source: APXvYqw6qliK2wZJ/FUk7ArzuqacA0YzehJQI0koFeIUELWZpshcb9zAxOBz88vcUQ7sqd+EHHHmrw== X-Received: by 2002:ac8:60da:: with SMTP id i26mr34434892qtm.43.1573590173007; Tue, 12 Nov 2019 12:22:53 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:27 -0400 Message-Id: <20191112202231.3856-11-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 10/14] drm/amdgpu: Call find_vma under mmap_sem X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Philip Yang , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe find_vma() must be called under the mmap_sem, reorganize this code to do the vma check after entering the lock. Further, fix the unlocked use of struct task_struct's mm, instead use the mm from hmm_mirror which has an active mm_grab. Also the mm_grab must be converted to a mm_get before acquiring mmap_sem or calling find_vma(). 
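The rule being enforced can be summarized with a small sketch that mirrors the amdgpu_ttm_tt_get_user_pages() hunk below; my_lookup_userptr_vma() is a hypothetical helper, not part of the patch:

    #include <linux/mm.h>
    #include <linux/sched/mm.h>

    /* Hypothetical helper showing the required ordering; not in the patch. */
    static int my_lookup_userptr_vma(struct mm_struct *mm, unsigned long start)
    {
            struct vm_area_struct *vma;
            int r = 0;

            if (!mmget_not_zero(mm))        /* the process is exiting */
                    return -ESRCH;

            down_read(&mm->mmap_sem);
            /* find_vma() and the returned vma are only valid under mmap_sem */
            vma = find_vma(mm, start);
            if (!vma || start < vma->vm_start)
                    r = -EFAULT;
            up_read(&mm->mmap_sem);

            mmput(mm);
            return r;
    }

The real code performs its additional checks, such as testing AMDGPU_GEM_USERPTR_ANONONLY against vma->vm_file, while mmap_sem is still held, as the diff below shows.
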
Fixes: 66c45500bfdc ("drm/amdgpu: use new HMM APIs and helpers") Fixes: 0919195f2b0d ("drm/amdgpu: Enable amdgpu_ttm_tt_get_user_pages in wo= rker threads") Acked-by: Christian K=C3=B6nig Reviewed-by: Felix Kuehling Reviewed-by: Philip Yang Tested-by: Philip Yang Signed-off-by: Jason Gunthorpe --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 37 ++++++++++++++----------- 1 file changed, 21 insertions(+), 16 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/= amdgpu/amdgpu_ttm.c index dff41d0a85fe96..c0e41f1f0c2365 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include #include @@ -788,7 +789,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, = struct page **pages) struct hmm_mirror *mirror =3D bo->mn ? &bo->mn->mirror : NULL; struct ttm_tt *ttm =3D bo->tbo.ttm; struct amdgpu_ttm_tt *gtt =3D (void *)ttm; - struct mm_struct *mm =3D gtt->usertask->mm; + struct mm_struct *mm; unsigned long start =3D gtt->userptr; struct vm_area_struct *vma; struct hmm_range *range; @@ -796,25 +797,14 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo= , struct page **pages) uint64_t *pfns; int r =3D 0; =20 - if (!mm) /* Happens during process shutdown */ - return -ESRCH; - if (unlikely(!mirror)) { DRM_DEBUG_DRIVER("Failed to get hmm_mirror\n"); - r =3D -EFAULT; - goto out; + return -EFAULT; } =20 - vma =3D find_vma(mm, start); - if (unlikely(!vma || start < vma->vm_start)) { - r =3D -EFAULT; - goto out; - } - if (unlikely((gtt->userflags & AMDGPU_GEM_USERPTR_ANONONLY) && - vma->vm_file)) { - r =3D -EPERM; - goto out; - } + mm =3D mirror->hmm->mmu_notifier.mm; + if (!mmget_not_zero(mm)) /* Happens during process shutdown */ + return -ESRCH; =20 range =3D kzalloc(sizeof(*range), GFP_KERNEL); if (unlikely(!range)) { @@ -847,6 +837,17 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo,= struct page **pages) hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT); =20 down_read(&mm->mmap_sem); + vma =3D find_vma(mm, start); + if (unlikely(!vma || start < vma->vm_start)) { + r =3D -EFAULT; + goto out_unlock; + } + if (unlikely((gtt->userflags & AMDGPU_GEM_USERPTR_ANONONLY) && + vma->vm_file)) { + r =3D -EPERM; + goto out_unlock; + } + r =3D hmm_range_fault(range, 0); up_read(&mm->mmap_sem); =20 @@ -865,15 +866,19 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo= , struct page **pages) } =20 gtt->range =3D range; + mmput(mm); =20 return 0; =20 +out_unlock: + up_read(&mm->mmap_sem); out_free_pfns: hmm_range_unregister(range); kvfree(pfns); out_free_ranges: kfree(range); out: + mmput(mm); return r; } =20 --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590287; cv=none; d=zoho.com; s=zohoarc; 
b=iAt1J0m7F5aC8rjJggL3YoNhbRbe/GG8wFDukLWbCqYmt/z82d2VCL1A4Qfn2G1kO+pvh1TRAO540al9Aucugr9XF30YjYSOGSeR7NqErACFN86pJyZWIKxZcAvcSt6geBAgHyhHz8UW6wYR8l6jBOgyVFesNsZwEaQ32Td+sNs= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590287; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=GvIO6xHzvrEV2ehCh+pmaCsi2oZBRzsynGnZ37FiKzs=; b=d1paAe3Y/72dvYKG3d5iCap6pzzGe3Oz7tkq4wli22ZYuHbJXoQ1ULo5zodEL+tw65aAUAt0Wf1BTV1bmCcV7naLJFFO2yhBOPMvMbs7Kjm+0NeKhpBWtp2rqeCyJRzid1oD4qidbonx5dTrJ4FJVZLKnipHVqGnYFRbkpbWTMI= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590287774162.5911265259864; Tue, 12 Nov 2019 12:24:47 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchQ-0003La-F8; Tue, 12 Nov 2019 20:23:40 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchP-0003Km-F0 for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:39 +0000 Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 34fe1a8c-058a-11ea-984a-bc764e2007e4; Tue, 12 Nov 2019 20:22:54 +0000 (UTC) Received: by mail-qv1-xf44.google.com with SMTP id x14so6962940qvu.0 for ; Tue, 12 Nov 2019 12:22:54 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. 
[142.162.113.180]) by smtp.gmail.com with ESMTPSA id y33sm14091065qta.18.2019.11.12.12.22.48 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003kZ-Kj; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 34fe1a8c-058a-11ea-984a-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=PZrtIQ1CqQ3aV6eDiT4AIT6dNu/ovkVG0MAXXUVRQPA=; b=N5sdBv6BsoyFRqhiisu/umdYUbVy9Q0vhIpmeXK0k1Z2n976NtwEGSe4f6xaVeM9aI 0iie8PeQxU6Pzl7PaExiv5wXSdCmcakfQQYqtL301GS/N6tmsTuOCxyh5Z4J2Bx5LI9m hZ4ACJgY65YwCw1ca0bak+o9en98SxP+zjmcbIJZPCTLYUFbr1VHsequPzlbRw2PEpUT 2Ckc1P6jvX312NS5kcr5DuEEPeQLvwDvBBXNW4q4UHyiDRaRhTqT9pCJpwlIbkJ6jBE4 8jvzjAPKLI0LwepsLvXIgFWuCUpUgBzKcRXeCAmU8vxboEVzSYCAKwi2mLcjpWBsqfMs WaVw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=PZrtIQ1CqQ3aV6eDiT4AIT6dNu/ovkVG0MAXXUVRQPA=; b=QFpT83B7OUWo4EtUtrt3LtZsw2d9cYOQqTL4DgqpZ5oVC3WHS9vGLJ56iehc8W8q9G Sp0VPcLIwhliMx/KqqG9tJCx3ivRK/LlFgrav22EkEbDhJ0OYRB/AJfr2DUbdq9nap4/ JStf+TVr6io3OsWhLXiczgRvOKMasMaO83lU4LhXRl/kcDZTg+tBV8NCcWnBaRtKUUx2 +YLeEncaGDwHzsVEcRPjjrIZ9xlR8pnIJyhIffE7IfPL79KwPxHAXc8aeQFI2pV5+oXF CnhvGeZy6282E1yHcZ+Qs0DRaDiFtOylpx5Dzbh2yRQs9lJH33G5ba27ixw/2UiPqWmK SWLw== X-Gm-Message-State: APjAAAWnGLIkTKtTS2HYZCogz6/SpHlnvOuy4DR2RLuMsCE7+irdPrgh cKiJSk6WO4aN0UG83yKcUjkC6A== X-Google-Smtp-Source: APXvYqwjHVASiImpnLd/ILwe5StdJi3zozsyC3ebdiRQZc5AWQDBgQKjG4QUL+HhygmNxsbKFK0SMQ== X-Received: by 2002:ad4:53ab:: with SMTP id j11mr5278002qvv.47.1573590173518; Tue, 12 Nov 2019 12:22:53 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:28 -0400 Message-Id: <20191112202231.3856-12-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 11/14] drm/amdgpu: Use mmu_interval_insert instead of hmm_mirror X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Philip Yang , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe Remove the interval tree in the driver and rely on the tree maintained by the mmu_notifier for delivering mmu_notifier invalidation callbacks. For some reason amdgpu has a very complicated arrangement where it tries to prevent duplicate entries in the interval_tree, this is not necessary, each amdgpu_bo can be its own stand alone entry. 
interval_tree already allows duplicates and overlaps in the tree. Also, there is no need to remove entries upon a release callback, the mmu_interval API safely allows objects to remain registered beyond the lifetime of the mm. The driver only has to stop touching the pages during release. Reviewed-by: Philip Yang Tested-by: Philip Yang Signed-off-by: Jason Gunthorpe --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 2 + .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 + drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c | 333 ++++-------------- drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h | 4 - drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 13 +- 6 files changed, 77 insertions(+), 281 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdg= pu/amdgpu.h index bd37df5dd6d048..60591a5d420021 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -1006,6 +1006,8 @@ struct amdgpu_device { struct mutex lock_reset; struct amdgpu_doorbell_index doorbell_index; =20 + struct mutex notifier_lock; + int asic_reset_res; struct work_struct xgmi_reset_work; =20 diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu= /drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index 6d021ecc8d598f..47700302a08b7f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -481,8 +481,7 @@ static void remove_kgd_mem_from_kfd_bo_list(struct kgd_= mem *mem, * * Returns 0 for success, negative errno for errors. */ -static int init_user_pages(struct kgd_mem *mem, struct mm_struct *mm, - uint64_t user_addr) +static int init_user_pages(struct kgd_mem *mem, uint64_t user_addr) { struct amdkfd_process_info *process_info =3D mem->process_info; struct amdgpu_bo *bo =3D mem->bo; @@ -1195,7 +1194,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu( add_kgd_mem_to_kfd_bo_list(*mem, avm->process_info, user_addr); =20 if (user_addr) { - ret =3D init_user_pages(*mem, current->mm, user_addr); + ret =3D init_user_pages(*mem, user_addr); if (ret) goto allocate_init_user_pages_failed; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/a= md/amdgpu/amdgpu_device.c index 5a1939dbd4e3e6..38f97998aaddb2 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -2633,6 +2633,7 @@ int amdgpu_device_init(struct amdgpu_device *adev, mutex_init(&adev->virt.vf_errors.lock); hash_init(adev->mn_hash); mutex_init(&adev->lock_reset); + mutex_init(&adev->notifier_lock); mutex_init(&adev->virt.dpm_mutex); mutex_init(&adev->psp.mutex); =20 diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/a= mdgpu/amdgpu_mn.c index 31d4deb5d29484..9fe1c31ce17a30 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c @@ -50,66 +50,6 @@ #include "amdgpu.h" #include "amdgpu_amdkfd.h" =20 -/** - * struct amdgpu_mn_node - * - * @it: interval node defining start-last of the affected address range - * @bos: list of all BOs in the affected address range - * - * Manages all BOs which are affected of a certain range of address space. 
- */ -struct amdgpu_mn_node { - struct interval_tree_node it; - struct list_head bos; -}; - -/** - * amdgpu_mn_destroy - destroy the HMM mirror - * - * @work: previously sheduled work item - * - * Lazy destroys the notifier from a work item - */ -static void amdgpu_mn_destroy(struct work_struct *work) -{ - struct amdgpu_mn *amn =3D container_of(work, struct amdgpu_mn, work); - struct amdgpu_device *adev =3D amn->adev; - struct amdgpu_mn_node *node, *next_node; - struct amdgpu_bo *bo, *next_bo; - - mutex_lock(&adev->mn_lock); - down_write(&amn->lock); - hash_del(&amn->node); - rbtree_postorder_for_each_entry_safe(node, next_node, - &amn->objects.rb_root, it.rb) { - list_for_each_entry_safe(bo, next_bo, &node->bos, mn_list) { - bo->mn =3D NULL; - list_del_init(&bo->mn_list); - } - kfree(node); - } - up_write(&amn->lock); - mutex_unlock(&adev->mn_lock); - - hmm_mirror_unregister(&amn->mirror); - kfree(amn); -} - -/** - * amdgpu_hmm_mirror_release - callback to notify about mm destruction - * - * @mirror: the HMM mirror (mm) this callback is about - * - * Shedule a work item to lazy destroy HMM mirror. - */ -static void amdgpu_hmm_mirror_release(struct hmm_mirror *mirror) -{ - struct amdgpu_mn *amn =3D container_of(mirror, struct amdgpu_mn, mirror); - - INIT_WORK(&amn->work, amdgpu_mn_destroy); - schedule_work(&amn->work); -} - /** * amdgpu_mn_lock - take the write side lock for this notifier * @@ -133,157 +73,80 @@ void amdgpu_mn_unlock(struct amdgpu_mn *mn) } =20 /** - * amdgpu_mn_read_lock - take the read side lock for this notifier - * - * @amn: our notifier - */ -static int amdgpu_mn_read_lock(struct amdgpu_mn *amn, bool blockable) -{ - if (blockable) - down_read(&amn->lock); - else if (!down_read_trylock(&amn->lock)) - return -EAGAIN; - - return 0; -} - -/** - * amdgpu_mn_read_unlock - drop the read side lock for this notifier - * - * @amn: our notifier - */ -static void amdgpu_mn_read_unlock(struct amdgpu_mn *amn) -{ - up_read(&amn->lock); -} - -/** - * amdgpu_mn_invalidate_node - unmap all BOs of a node + * amdgpu_mn_invalidate_gfx - callback to notify about mm change * - * @node: the node with the BOs to unmap - * @start: start of address range affected - * @end: end of address range affected + * @mni: the range (mm) is about to update + * @range: details on the invalidation + * @cur_seq: Value to pass to mmu_interval_set_seq() * * Block for operations on BOs to finish and mark pages as accessed and * potentially dirty. 
*/ -static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node, - unsigned long start, - unsigned long end) +static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq) { - struct amdgpu_bo *bo; + struct amdgpu_bo *bo =3D container_of(mni, struct amdgpu_bo, notifier); + struct amdgpu_device *adev =3D amdgpu_ttm_adev(bo->tbo.bdev); long r; =20 - list_for_each_entry(bo, &node->bos, mn_list) { - - if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, start, end)) - continue; + if (!mmu_notifier_range_blockable(range)) + return false; =20 - r =3D dma_resv_wait_timeout_rcu(bo->tbo.base.resv, - true, false, MAX_SCHEDULE_TIMEOUT); - if (r <=3D 0) - DRM_ERROR("(%ld) failed to wait for user bo\n", r); - } + mutex_lock(&adev->notifier_lock); + r =3D dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false, + MAX_SCHEDULE_TIMEOUT); + mutex_unlock(&adev->notifier_lock); + if (r <=3D 0) + DRM_ERROR("(%ld) failed to wait for user bo\n", r); + return true; } =20 +static const struct mmu_interval_notifier_ops amdgpu_mn_gfx_ops =3D { + .invalidate =3D amdgpu_mn_invalidate_gfx, +}; + /** - * amdgpu_mn_sync_pagetables_gfx - callback to notify about mm change + * amdgpu_mn_invalidate_hsa - callback to notify about mm change * - * @mirror: the hmm_mirror (mm) is about to update - * @update: the update start, end address + * @mni: the range (mm) is about to update + * @range: details on the invalidation + * @cur_seq: Value to pass to mmu_interval_set_seq() * - * Block for operations on BOs to finish and mark pages as accessed and - * potentially dirty. + * We temporarily evict the BO attached to this range. This necessitates + * evicting all user-mode queues of the process. */ -static int -amdgpu_mn_sync_pagetables_gfx(struct hmm_mirror *mirror, - const struct mmu_notifier_range *update) +static bool amdgpu_mn_invalidate_hsa(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq) { - struct amdgpu_mn *amn =3D container_of(mirror, struct amdgpu_mn, mirror); - unsigned long start =3D update->start; - unsigned long end =3D update->end; - bool blockable =3D mmu_notifier_range_blockable(update); - struct interval_tree_node *it; - - /* notification is exclusive, but interval is inclusive */ - end -=3D 1; - - /* TODO we should be able to split locking for interval tree and - * amdgpu_mn_invalidate_node - */ - if (amdgpu_mn_read_lock(amn, blockable)) - return -EAGAIN; - - it =3D interval_tree_iter_first(&amn->objects, start, end); - while (it) { - struct amdgpu_mn_node *node; - - if (!blockable) { - amdgpu_mn_read_unlock(amn); - return -EAGAIN; - } - - node =3D container_of(it, struct amdgpu_mn_node, it); - it =3D interval_tree_iter_next(it, start, end); + struct amdgpu_bo *bo =3D container_of(mni, struct amdgpu_bo, notifier); + struct amdgpu_device *adev =3D amdgpu_ttm_adev(bo->tbo.bdev); =20 - amdgpu_mn_invalidate_node(node, start, end); - } + if (!mmu_notifier_range_blockable(range)) + return false; =20 - amdgpu_mn_read_unlock(amn); + mutex_lock(&adev->notifier_lock); + amdgpu_amdkfd_evict_userptr(bo->kfd_bo, bo->notifier.mm); + mutex_unlock(&adev->notifier_lock); =20 - return 0; + return true; } =20 -/** - * amdgpu_mn_sync_pagetables_hsa - callback to notify about mm change - * - * @mirror: the hmm_mirror (mm) is about to update - * @update: the update start, end address - * - * We temporarily evict all BOs between start and end. 
This - * necessitates evicting all user-mode queues of the process. The BOs - * are restorted in amdgpu_mn_invalidate_range_end_hsa. - */ -static int -amdgpu_mn_sync_pagetables_hsa(struct hmm_mirror *mirror, - const struct mmu_notifier_range *update) +static const struct mmu_interval_notifier_ops amdgpu_mn_hsa_ops =3D { + .invalidate =3D amdgpu_mn_invalidate_hsa, +}; + +static int amdgpu_mn_sync_pagetables(struct hmm_mirror *mirror, + const struct mmu_notifier_range *update) { struct amdgpu_mn *amn =3D container_of(mirror, struct amdgpu_mn, mirror); - unsigned long start =3D update->start; - unsigned long end =3D update->end; - bool blockable =3D mmu_notifier_range_blockable(update); - struct interval_tree_node *it; =20 - /* notification is exclusive, but interval is inclusive */ - end -=3D 1; - - if (amdgpu_mn_read_lock(amn, blockable)) + if (!mmu_notifier_range_blockable(update)) return -EAGAIN; =20 - it =3D interval_tree_iter_first(&amn->objects, start, end); - while (it) { - struct amdgpu_mn_node *node; - struct amdgpu_bo *bo; - - if (!blockable) { - amdgpu_mn_read_unlock(amn); - return -EAGAIN; - } - - node =3D container_of(it, struct amdgpu_mn_node, it); - it =3D interval_tree_iter_next(it, start, end); - - list_for_each_entry(bo, &node->bos, mn_list) { - struct kgd_mem *mem =3D bo->kfd_bo; - - if (amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, - start, end)) - amdgpu_amdkfd_evict_userptr(mem, amn->mm); - } - } - - amdgpu_mn_read_unlock(amn); - + down_read(&amn->lock); + up_read(&amn->lock); return 0; } =20 @@ -295,12 +158,10 @@ amdgpu_mn_sync_pagetables_hsa(struct hmm_mirror *mirr= or, =20 static struct hmm_mirror_ops amdgpu_hmm_mirror_ops[] =3D { [AMDGPU_MN_TYPE_GFX] =3D { - .sync_cpu_device_pagetables =3D amdgpu_mn_sync_pagetables_gfx, - .release =3D amdgpu_hmm_mirror_release + .sync_cpu_device_pagetables =3D amdgpu_mn_sync_pagetables, }, [AMDGPU_MN_TYPE_HSA] =3D { - .sync_cpu_device_pagetables =3D amdgpu_mn_sync_pagetables_hsa, - .release =3D amdgpu_hmm_mirror_release + .sync_cpu_device_pagetables =3D amdgpu_mn_sync_pagetables, }, }; =20 @@ -327,7 +188,8 @@ struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *a= dev, } =20 hash_for_each_possible(adev->mn_hash, amn, node, key) - if (AMDGPU_MN_KEY(amn->mm, amn->type) =3D=3D key) + if (AMDGPU_MN_KEY(amn->mirror.hmm->mmu_notifier.mm, + amn->type) =3D=3D key) goto release_locks; =20 amn =3D kzalloc(sizeof(*amn), GFP_KERNEL); @@ -337,10 +199,8 @@ struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *= adev, } =20 amn->adev =3D adev; - amn->mm =3D mm; init_rwsem(&amn->lock); amn->type =3D type; - amn->objects =3D RB_ROOT_CACHED; =20 amn->mirror.ops =3D &amdgpu_hmm_mirror_ops[type]; r =3D hmm_mirror_register(&amn->mirror, mm); @@ -369,100 +229,33 @@ struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device= *adev, * @bo: amdgpu buffer object * @addr: userptr addr we should monitor * - * Registers an HMM mirror for the given BO at the specified address. + * Registers a mmu_notifier for the given BO at the specified address. * Returns 0 on success, -ERRNO if anything goes wrong. */ int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr) { - unsigned long end =3D addr + amdgpu_bo_size(bo) - 1; - struct amdgpu_device *adev =3D amdgpu_ttm_adev(bo->tbo.bdev); - enum amdgpu_mn_type type =3D - bo->kfd_bo ? 
AMDGPU_MN_TYPE_HSA : AMDGPU_MN_TYPE_GFX; - struct amdgpu_mn *amn; - struct amdgpu_mn_node *node =3D NULL, *new_node; - struct list_head bos; - struct interval_tree_node *it; - - amn =3D amdgpu_mn_get(adev, type); - if (IS_ERR(amn)) - return PTR_ERR(amn); - - new_node =3D kmalloc(sizeof(*new_node), GFP_KERNEL); - if (!new_node) - return -ENOMEM; - - INIT_LIST_HEAD(&bos); - - down_write(&amn->lock); - - while ((it =3D interval_tree_iter_first(&amn->objects, addr, end))) { - kfree(node); - node =3D container_of(it, struct amdgpu_mn_node, it); - interval_tree_remove(&node->it, &amn->objects); - addr =3D min(it->start, addr); - end =3D max(it->last, end); - list_splice(&node->bos, &bos); - } - - if (!node) - node =3D new_node; + if (bo->kfd_bo) + bo->notifier.ops =3D &amdgpu_mn_hsa_ops; else - kfree(new_node); - - bo->mn =3D amn; - - node->it.start =3D addr; - node->it.last =3D end; - INIT_LIST_HEAD(&node->bos); - list_splice(&bos, &node->bos); - list_add(&bo->mn_list, &node->bos); + bo->notifier.ops =3D &amdgpu_mn_gfx_ops; =20 - interval_tree_insert(&node->it, &amn->objects); - - up_write(&amn->lock); - - return 0; + return mmu_interval_notifier_insert(&bo->notifier, addr, + amdgpu_bo_size(bo), current->mm); } =20 /** - * amdgpu_mn_unregister - unregister a BO for HMM mirror updates + * amdgpu_mn_unregister - unregister a BO for notifier updates * * @bo: amdgpu buffer object * - * Remove any registration of HMM mirror updates from the buffer object. + * Remove any registration of mmu notifier updates from the buffer object. */ void amdgpu_mn_unregister(struct amdgpu_bo *bo) { - struct amdgpu_device *adev =3D amdgpu_ttm_adev(bo->tbo.bdev); - struct amdgpu_mn *amn; - struct list_head *head; - - mutex_lock(&adev->mn_lock); - - amn =3D bo->mn; - if (amn =3D=3D NULL) { - mutex_unlock(&adev->mn_lock); + if (!bo->notifier.mm) return; - } - - down_write(&amn->lock); - - /* save the next list entry for later */ - head =3D bo->mn_list.next; - - bo->mn =3D NULL; - list_del_init(&bo->mn_list); - - if (list_empty(head)) { - struct amdgpu_mn_node *node; - - node =3D container_of(head, struct amdgpu_mn_node, bos); - interval_tree_remove(&node->it, &amn->objects); - kfree(node); - } - - up_write(&amn->lock); - mutex_unlock(&adev->mn_lock); + mmu_interval_notifier_remove(&bo->notifier); + bo->notifier.mm =3D NULL; } =20 /* flags used by HMM internal, not related to CPU/GPU PTE flags */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h b/drivers/gpu/drm/amd/a= mdgpu/amdgpu_mn.h index b8ed68943625c2..d73ab2947b22b2 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h @@ -39,12 +39,10 @@ enum amdgpu_mn_type { * struct amdgpu_mn * * @adev: amdgpu device pointer - * @mm: process address space * @type: type of MMU notifier * @work: destruction work item * @node: hash table node to find structure by adev and mn * @lock: rw semaphore protecting the notifier nodes - * @objects: interval tree containing amdgpu_mn_nodes * @mirror: HMM mirror function support * * Data for each amdgpu device and process address space. 
@@ -52,7 +50,6 @@ enum amdgpu_mn_type { struct amdgpu_mn { /* constant after initialisation */ struct amdgpu_device *adev; - struct mm_struct *mm; enum amdgpu_mn_type type; =20 /* only used on destruction */ @@ -63,7 +60,6 @@ struct amdgpu_mn { =20 /* objects protected by lock */ struct rw_semaphore lock; - struct rb_root_cached objects; =20 #ifdef CONFIG_HMM_MIRROR /* HMM mirror */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/a= md/amdgpu/amdgpu_object.h index 658f4c9779b704..2792c5c70fd10d 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h @@ -30,6 +30,9 @@ =20 #include #include "amdgpu.h" +#ifdef CONFIG_MMU_NOTIFIER +#include +#endif =20 #define AMDGPU_BO_INVALID_OFFSET LONG_MAX #define AMDGPU_BO_MAX_PLACEMENTS 3 @@ -100,10 +103,12 @@ struct amdgpu_bo { struct ttm_bo_kmap_obj dma_buf_vmap; struct amdgpu_mn *mn; =20 - union { - struct list_head mn_list; - struct list_head shadow_list; - }; + +#ifdef CONFIG_MMU_NOTIFIER + struct mmu_interval_notifier notifier; +#endif + + struct list_head shadow_list; =20 struct kgd_mem *kfd_bo; }; --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590292; cv=none; d=zoho.com; s=zohoarc; b=nHoowcS8Rs9Z+pQnJw4C1A6sP8XaQ9ZCLFskkH34iUlDO4dKe4cRIaMGnpWN8a6rnqGbyYOzzAircD/yO4UMEWjfITgub8uoeH76AgxK7Sg65Iy1zVNovfsee74NiKf5eQmigseJRNqAvU8V9SsFSzbNaOiZsWea63wJEJnMRdM= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590292; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=3teY+JHgFJP9SycF+lw6YgJ4PGt6dDZeXDCYvW0GrRY=; b=fZEG2B7fhSdoDWkey/rLIPuJtR/SjwdL4UYUPMZG3dYmmBKHvp3RUYwb2KcaQqNMPMX2Cn1rkv1Sjmpx+NM1JfmzknscyBXo9+0/t4Y7KbUoHJmBGmfMcLtVISUW3uRPW4/Z83XH5azbQF5TjEAQT15QGM9WVxmrH1a/aX2ys9Y= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590292590949.2713104666659; Tue, 12 Nov 2019 12:24:52 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchV-0003PX-Q9; Tue, 12 Nov 2019 20:23:45 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchU-0003OV-ES for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:44 +0000 Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 35c70f8c-058a-11ea-b678-bc764e2007e4; Tue, 12 Nov 2019 20:22:55 +0000 (UTC) 
Received: by mail-qv1-xf41.google.com with SMTP id g18so6946216qvp.8 for ; Tue, 12 Nov 2019 12:22:55 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. [142.162.113.180]) by smtp.gmail.com with ESMTPSA id s75sm9961165qke.14.2019.11.12.12.22.48 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:48 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003kf-M4; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 35c70f8c-058a-11ea-b678-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=u0LB65+hyNE2MdbzematfIoTXJPEYQKvM3pg9E8VpyY=; b=Gg4dA024LYcPGO6jBEa4JxuJZjwhnlGag2Hh6SiobHigmRqncg6+DmFAIrzAwO+eBW U5aCkvCr7TwFj2+jHTwMcSTNtFbh/GKngo4a59rEbdzPvlVHZ1/NoNdmt1j3sj39U+DP eBi/kzZmYSMPeFKrY+9x6rwx4vlv9WtWKK7hWOivCoKy2v3sQaBRTp6k9kfIqgINfr6L 5lvSv8wZlI2FAS4VlAV8eOyTYKQmYzzMU8pUIRYoNI70Tr163HgnwRLMD5uDofviuTw+ kJyhBvi72CrVEG9FZ2dFV9NV29zzl4ldct5aMhiP3WQ5cmAsdtyNsbbSR75YjTsrm6Zl idTg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=u0LB65+hyNE2MdbzematfIoTXJPEYQKvM3pg9E8VpyY=; b=Tq6XBcoVSL/t2OL2sYEjcXS4MoiEODetZp6aPcdtHEUZU1aeqVLk60aS+YiC6JKg1o gsEPRh/jsCVeCcyIaq9+zYEtjYKqMMgR4aa8iOS1XXIQaAFEB47M+l63FtZJRhm9lYMn ZiDitP3eL7T6QChzecKFap++Ng28RS73XER8UokGDFDvPH526O4l78Yd8DvS8M6dn+Ot iUkYp8tWwpgVsOEmzsBy8nL+Fc1sWe8/Gl9Jl7mJ9va2v6FZTGGvH4wJPDHYv0gbDWFj k3H3oXideqzBTXKg3lPurTAwNGgj+kKtmPIjJ4adl27WwpfhfirTj5rCKyga/IBAmy6x IoeQ== X-Gm-Message-State: APjAAAV/qAIwD2/33VCiDKNpjt1vOPcz9ssypLwCvcIk+7T1r7Bn1mvM FHr1yGTuQCs9UjUAsyplxaN9pQ== X-Google-Smtp-Source: APXvYqxP7wIi1fh7Txrp+EZOz9AIhTIy7kqoxAvpVziLV7TzuiXel8ccCsBbJFbshfblezg9RuaxVw== X-Received: by 2002:a0c:fe8c:: with SMTP id d12mr31067538qvs.146.1573590174782; Tue, 12 Nov 2019 12:22:54 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:29 -0400 Message-Id: <20191112202231.3856-13-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 12/14] drm/amdgpu: Use mmu_interval_notifier instead of hmm_mirror X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Philip Yang , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe Convert the collision-retry lock around hmm_range_fault to use the one now provided by the mmu_interval notifier. 
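As a rough illustration of the pattern being converted to (a minimal sketch only, not the exact driver code; sketch_fault_range(), driver_update_page_table() and driver_lock are placeholder names, and <linux/hmm.h> plus <linux/mmu_notifier.h> are assumed; the hmm_range_fault() signature is the two-argument form used elsewhere in this series):

    /* Collision-retry loop driven by the mmu_interval notifier's sequence
     * count instead of the old hmm_range_valid()/wait logic.  range->notifier
     * must already point at the registered mmu_interval_notifier.
     */
    static int sketch_fault_range(struct hmm_range *range, struct mm_struct *mm,
				  struct mutex *driver_lock)
    {
	long ret;

    again:
	/* Sample the notifier sequence count before faulting */
	range->notifier_seq = mmu_interval_read_begin(range->notifier);

	down_read(&mm->mmap_sem);
	ret = hmm_range_fault(range, 0);
	up_read(&mm->mmap_sem);
	if (ret <= 0) {
		if (ret == 0 || ret == -EBUSY)
			goto again;	/* a real driver bounds this with a timeout */
		return ret;
	}

	/* driver_lock is the same lock taken in ops->invalidate() */
	mutex_lock(driver_lock);
	if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) {
		/* an invalidation raced with the fault, start over */
		mutex_unlock(driver_lock);
		goto again;
	}
	driver_update_page_table(range);	/* placeholder for the device PTE update */
	mutex_unlock(driver_lock);
	return 0;
    }
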
Although this driver does not seem to use the collision retry lock that hmm provides correctly, it can still be converted over to use the mmu_interval_notifier api instead of hmm_mirror without too much trouble. This also deletes another place where a driver is associating additional data (struct amdgpu_mn) with a mmu_struct. Signed-off-by: Philip Yang Reviewed-by: Philip Yang Tested-by: Philip Yang Signed-off-by: Jason Gunthorpe --- .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 4 + drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 14 +- drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c | 148 ++---------------- drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h | 49 ------ drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 116 ++++++++------ 5 files changed, 94 insertions(+), 237 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu= /drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index 47700302a08b7f..1bcedb9b477dce 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -1738,6 +1738,10 @@ static int update_invalid_user_pages(struct amdkfd_p= rocess_info *process_info, return ret; } =20 + /* + * FIXME: Cannot ignore the return code, must hold + * notifier_lock + */ amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm); =20 /* Mark the BO as valid unless it was invalidated diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/a= mdgpu/amdgpu_cs.c index 82823d9a8ba887..22c989bca7514c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -603,8 +603,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser= *p, e->tv.num_shared =3D 2; =20 amdgpu_bo_list_get_list(p->bo_list, &p->validated); - if (p->bo_list->first_userptr !=3D p->bo_list->num_entries) - p->mn =3D amdgpu_mn_get(p->adev, AMDGPU_MN_TYPE_GFX); =20 INIT_LIST_HEAD(&duplicates); amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd); @@ -1287,11 +1285,11 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser= *p, if (r) goto error_unlock; =20 - /* No memory allocation is allowed while holding the mn lock. - * p->mn is hold until amdgpu_cs_submit is finished and fence is added - * to BOs. + /* No memory allocation is allowed while holding the notifier lock. + * The lock is held until amdgpu_cs_submit is finished and fence is + * added to BOs. */ - amdgpu_mn_lock(p->mn); + mutex_lock(&p->adev->notifier_lock); =20 /* If userptr are invalidated after amdgpu_cs_parser_bos(), return * -EAGAIN, drmIoctl in libdrm will restart the amdgpu_cs_ioctl. 
@@ -1334,13 +1332,13 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser= *p, amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm); =20 ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence); - amdgpu_mn_unlock(p->mn); + mutex_unlock(&p->adev->notifier_lock); =20 return 0; =20 error_abort: drm_sched_job_cleanup(&job->base); - amdgpu_mn_unlock(p->mn); + mutex_unlock(&p->adev->notifier_lock); =20 error_unlock: amdgpu_job_free(job); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/a= mdgpu/amdgpu_mn.c index 9fe1c31ce17a30..828b5167ff128f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c @@ -50,28 +50,6 @@ #include "amdgpu.h" #include "amdgpu_amdkfd.h" =20 -/** - * amdgpu_mn_lock - take the write side lock for this notifier - * - * @mn: our notifier - */ -void amdgpu_mn_lock(struct amdgpu_mn *mn) -{ - if (mn) - down_write(&mn->lock); -} - -/** - * amdgpu_mn_unlock - drop the write side lock for this notifier - * - * @mn: our notifier - */ -void amdgpu_mn_unlock(struct amdgpu_mn *mn) -{ - if (mn) - up_write(&mn->lock); -} - /** * amdgpu_mn_invalidate_gfx - callback to notify about mm change * @@ -94,6 +72,9 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_= notifier *mni, return false; =20 mutex_lock(&adev->notifier_lock); + + mmu_interval_set_seq(mni, cur_seq); + r =3D dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false, MAX_SCHEDULE_TIMEOUT); mutex_unlock(&adev->notifier_lock); @@ -127,6 +108,9 @@ static bool amdgpu_mn_invalidate_hsa(struct mmu_interva= l_notifier *mni, return false; =20 mutex_lock(&adev->notifier_lock); + + mmu_interval_set_seq(mni, cur_seq); + amdgpu_amdkfd_evict_userptr(bo->kfd_bo, bo->notifier.mm); mutex_unlock(&adev->notifier_lock); =20 @@ -137,92 +121,6 @@ static const struct mmu_interval_notifier_ops amdgpu_m= n_hsa_ops =3D { .invalidate =3D amdgpu_mn_invalidate_hsa, }; =20 -static int amdgpu_mn_sync_pagetables(struct hmm_mirror *mirror, - const struct mmu_notifier_range *update) -{ - struct amdgpu_mn *amn =3D container_of(mirror, struct amdgpu_mn, mirror); - - if (!mmu_notifier_range_blockable(update)) - return -EAGAIN; - - down_read(&amn->lock); - up_read(&amn->lock); - return 0; -} - -/* Low bits of any reasonable mm pointer will be unused due to struct - * alignment. Use these bits to make a unique key from the mm pointer - * and notifier type. - */ -#define AMDGPU_MN_KEY(mm, type) ((unsigned long)(mm) + (type)) - -static struct hmm_mirror_ops amdgpu_hmm_mirror_ops[] =3D { - [AMDGPU_MN_TYPE_GFX] =3D { - .sync_cpu_device_pagetables =3D amdgpu_mn_sync_pagetables, - }, - [AMDGPU_MN_TYPE_HSA] =3D { - .sync_cpu_device_pagetables =3D amdgpu_mn_sync_pagetables, - }, -}; - -/** - * amdgpu_mn_get - create HMM mirror context - * - * @adev: amdgpu device pointer - * @type: type of MMU notifier context - * - * Creates a HMM mirror context for current->mm. 
- */ -struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev, - enum amdgpu_mn_type type) -{ - struct mm_struct *mm =3D current->mm; - struct amdgpu_mn *amn; - unsigned long key =3D AMDGPU_MN_KEY(mm, type); - int r; - - mutex_lock(&adev->mn_lock); - if (down_write_killable(&mm->mmap_sem)) { - mutex_unlock(&adev->mn_lock); - return ERR_PTR(-EINTR); - } - - hash_for_each_possible(adev->mn_hash, amn, node, key) - if (AMDGPU_MN_KEY(amn->mirror.hmm->mmu_notifier.mm, - amn->type) =3D=3D key) - goto release_locks; - - amn =3D kzalloc(sizeof(*amn), GFP_KERNEL); - if (!amn) { - amn =3D ERR_PTR(-ENOMEM); - goto release_locks; - } - - amn->adev =3D adev; - init_rwsem(&amn->lock); - amn->type =3D type; - - amn->mirror.ops =3D &amdgpu_hmm_mirror_ops[type]; - r =3D hmm_mirror_register(&amn->mirror, mm); - if (r) - goto free_amn; - - hash_add(adev->mn_hash, &amn->node, AMDGPU_MN_KEY(mm, type)); - -release_locks: - up_write(&mm->mmap_sem); - mutex_unlock(&adev->mn_lock); - - return amn; - -free_amn: - up_write(&mm->mmap_sem); - mutex_unlock(&adev->mn_lock); - kfree(amn); - - return ERR_PTR(r); -} - /** * amdgpu_mn_register - register a BO for notifier updates * @@ -235,12 +133,12 @@ struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device = *adev, int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr) { if (bo->kfd_bo) - bo->notifier.ops =3D &amdgpu_mn_hsa_ops; - else - bo->notifier.ops =3D &amdgpu_mn_gfx_ops; - - return mmu_interval_notifier_insert(&bo->notifier, addr, - amdgpu_bo_size(bo), current->mm); + return mmu_interval_notifier_insert(&bo->notifier, current->mm, + addr, amdgpu_bo_size(bo), + &amdgpu_mn_hsa_ops); + return mmu_interval_notifier_insert(&bo->notifier, current->mm, addr, + amdgpu_bo_size(bo), + &amdgpu_mn_gfx_ops); } =20 /** @@ -257,25 +155,3 @@ void amdgpu_mn_unregister(struct amdgpu_bo *bo) mmu_interval_notifier_remove(&bo->notifier); bo->notifier.mm =3D NULL; } - -/* flags used by HMM internal, not related to CPU/GPU PTE flags */ -static const uint64_t hmm_range_flags[HMM_PFN_FLAG_MAX] =3D { - (1 << 0), /* HMM_PFN_VALID */ - (1 << 1), /* HMM_PFN_WRITE */ - 0 /* HMM_PFN_DEVICE_PRIVATE */ -}; - -static const uint64_t hmm_range_values[HMM_PFN_VALUE_MAX] =3D { - 0xfffffffffffffffeUL, /* HMM_PFN_ERROR */ - 0, /* HMM_PFN_NONE */ - 0xfffffffffffffffcUL /* HMM_PFN_SPECIAL */ -}; - -void amdgpu_hmm_init_range(struct hmm_range *range) -{ - if (range) { - range->flags =3D hmm_range_flags; - range->values =3D hmm_range_values; - range->pfn_shift =3D PAGE_SHIFT; - } -} diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h b/drivers/gpu/drm/amd/a= mdgpu/amdgpu_mn.h index d73ab2947b22b2..a292238f75ebae 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h @@ -30,59 +30,10 @@ #include #include =20 -enum amdgpu_mn_type { - AMDGPU_MN_TYPE_GFX, - AMDGPU_MN_TYPE_HSA, -}; - -/** - * struct amdgpu_mn - * - * @adev: amdgpu device pointer - * @type: type of MMU notifier - * @work: destruction work item - * @node: hash table node to find structure by adev and mn - * @lock: rw semaphore protecting the notifier nodes - * @mirror: HMM mirror function support - * - * Data for each amdgpu device and process address space. 
- */ -struct amdgpu_mn { - /* constant after initialisation */ - struct amdgpu_device *adev; - enum amdgpu_mn_type type; - - /* only used on destruction */ - struct work_struct work; - - /* protected by adev->mn_lock */ - struct hlist_node node; - - /* objects protected by lock */ - struct rw_semaphore lock; - -#ifdef CONFIG_HMM_MIRROR - /* HMM mirror */ - struct hmm_mirror mirror; -#endif -}; - #if defined(CONFIG_HMM_MIRROR) -void amdgpu_mn_lock(struct amdgpu_mn *mn); -void amdgpu_mn_unlock(struct amdgpu_mn *mn); -struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev, - enum amdgpu_mn_type type); int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr); void amdgpu_mn_unregister(struct amdgpu_bo *bo); -void amdgpu_hmm_init_range(struct hmm_range *range); #else -static inline void amdgpu_mn_lock(struct amdgpu_mn *mn) {} -static inline void amdgpu_mn_unlock(struct amdgpu_mn *mn) {} -static inline struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev, - enum amdgpu_mn_type type) -{ - return NULL; -} static inline int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long a= ddr) { DRM_WARN_ONCE("HMM_MIRROR kernel config option is not enabled, " diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/= amdgpu/amdgpu_ttm.c index c0e41f1f0c2365..c41a26bde852e6 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -773,6 +773,20 @@ struct amdgpu_ttm_tt { #endif }; =20 +#ifdef CONFIG_DRM_AMDGPU_USERPTR +/* flags used by HMM internal, not related to CPU/GPU PTE flags */ +static const uint64_t hmm_range_flags[HMM_PFN_FLAG_MAX] =3D { + (1 << 0), /* HMM_PFN_VALID */ + (1 << 1), /* HMM_PFN_WRITE */ + 0 /* HMM_PFN_DEVICE_PRIVATE */ +}; + +static const uint64_t hmm_range_values[HMM_PFN_VALUE_MAX] =3D { + 0xfffffffffffffffeUL, /* HMM_PFN_ERROR */ + 0, /* HMM_PFN_NONE */ + 0xfffffffffffffffcUL /* HMM_PFN_SPECIAL */ +}; + /** * amdgpu_ttm_tt_get_user_pages - get device accessible pages that back us= er * memory and start HMM tracking CPU page table update @@ -780,29 +794,28 @@ struct amdgpu_ttm_tt { * Calling function must call amdgpu_ttm_tt_userptr_range_done() once and = only * once afterwards to stop HMM tracking */ -#if IS_ENABLED(CONFIG_DRM_AMDGPU_USERPTR) - -#define MAX_RETRY_HMM_RANGE_FAULT 16 - int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages) { - struct hmm_mirror *mirror =3D bo->mn ? &bo->mn->mirror : NULL; struct ttm_tt *ttm =3D bo->tbo.ttm; struct amdgpu_ttm_tt *gtt =3D (void *)ttm; - struct mm_struct *mm; unsigned long start =3D gtt->userptr; struct vm_area_struct *vma; struct hmm_range *range; + unsigned long timeout; + struct mm_struct *mm; unsigned long i; - uint64_t *pfns; int r =3D 0; =20 - if (unlikely(!mirror)) { - DRM_DEBUG_DRIVER("Failed to get hmm_mirror\n"); + mm =3D bo->notifier.mm; + if (unlikely(!mm)) { + DRM_DEBUG_DRIVER("BO is not registered?\n"); return -EFAULT; } =20 - mm =3D mirror->hmm->mmu_notifier.mm; + /* Another get_user_pages is running at the same time?? 
*/ + if (WARN_ON(gtt->range)) + return -EFAULT; + if (!mmget_not_zero(mm)) /* Happens during process shutdown */ return -ESRCH; =20 @@ -811,31 +824,23 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo= , struct page **pages) r =3D -ENOMEM; goto out; } + range->notifier =3D &bo->notifier; + range->flags =3D hmm_range_flags; + range->values =3D hmm_range_values; + range->pfn_shift =3D PAGE_SHIFT; + range->start =3D bo->notifier.interval_tree.start; + range->end =3D bo->notifier.interval_tree.last + 1; + range->default_flags =3D hmm_range_flags[HMM_PFN_VALID]; + if (!amdgpu_ttm_tt_is_readonly(ttm)) + range->default_flags |=3D range->flags[HMM_PFN_WRITE]; =20 - pfns =3D kvmalloc_array(ttm->num_pages, sizeof(*pfns), GFP_KERNEL); - if (unlikely(!pfns)) { + range->pfns =3D kvmalloc_array(ttm->num_pages, sizeof(*range->pfns), + GFP_KERNEL); + if (unlikely(!range->pfns)) { r =3D -ENOMEM; goto out_free_ranges; } =20 - amdgpu_hmm_init_range(range); - range->default_flags =3D range->flags[HMM_PFN_VALID]; - range->default_flags |=3D amdgpu_ttm_tt_is_readonly(ttm) ? - 0 : range->flags[HMM_PFN_WRITE]; - range->pfn_flags_mask =3D 0; - range->pfns =3D pfns; - range->start =3D start; - range->end =3D start + ttm->num_pages * PAGE_SIZE; - - hmm_range_register(range, mirror); - - /* - * Just wait for range to be valid, safe to ignore return value as we - * will use the return value of hmm_range_fault() below under the - * mmap_sem to ascertain the validity of the range. - */ - hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT); - down_read(&mm->mmap_sem); vma =3D find_vma(mm, start); if (unlikely(!vma || start < vma->vm_start)) { @@ -847,18 +852,31 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo= , struct page **pages) r =3D -EPERM; goto out_unlock; } + up_read(&mm->mmap_sem); + timeout =3D jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); =20 +retry: + range->notifier_seq =3D mmu_interval_read_begin(&bo->notifier); + + down_read(&mm->mmap_sem); r =3D hmm_range_fault(range, 0); up_read(&mm->mmap_sem); - - if (unlikely(r < 0)) + if (unlikely(r <=3D 0)) { + /* + * FIXME: This timeout should encompass the retry from + * mmu_interval_read_retry() as well. + */ + if ((r =3D=3D 0 || r =3D=3D -EBUSY) && !time_after(jiffies, timeout)) + goto retry; goto out_free_pfns; + } =20 for (i =3D 0; i < ttm->num_pages; i++) { - pages[i] =3D hmm_device_entry_to_page(range, pfns[i]); + /* FIXME: The pages cannot be touched outside the notifier_lock */ + pages[i] =3D hmm_device_entry_to_page(range, range->pfns[i]); if (unlikely(!pages[i])) { pr_err("Page fault failed for pfn[%lu] =3D 0x%llx\n", - i, pfns[i]); + i, range->pfns[i]); r =3D -ENOMEM; =20 goto out_free_pfns; @@ -873,8 +891,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, = struct page **pages) out_unlock: up_read(&mm->mmap_sem); out_free_pfns: - hmm_range_unregister(range); - kvfree(pfns); + kvfree(range->pfns); out_free_ranges: kfree(range); out: @@ -903,15 +920,18 @@ bool amdgpu_ttm_tt_get_user_pages_done(struct ttm_tt = *ttm) "No user pages to check\n"); =20 if (gtt->range) { - r =3D hmm_range_valid(gtt->range); - hmm_range_unregister(gtt->range); - + /* + * FIXME: Must always hold notifier_lock for this, and must + * not ignore the return code. 
+ */ + r =3D mmu_interval_read_retry(gtt->range->notifier, + gtt->range->notifier_seq); kvfree(gtt->range->pfns); kfree(gtt->range); gtt->range =3D NULL; } =20 - return r; + return !r; } #endif =20 @@ -992,10 +1012,18 @@ static void amdgpu_ttm_tt_unpin_userptr(struct ttm_t= t *ttm) sg_free_table(ttm->sg); =20 #if IS_ENABLED(CONFIG_DRM_AMDGPU_USERPTR) - if (gtt->range && - ttm->pages[0] =3D=3D hmm_device_entry_to_page(gtt->range, - gtt->range->pfns[0])) - WARN_ONCE(1, "Missing get_user_page_done\n"); + if (gtt->range) { + unsigned long i; + + for (i =3D 0; i < ttm->num_pages; i++) { + if (ttm->pages[i] !=3D + hmm_device_entry_to_page(gtt->range, + gtt->range->pfns[i])) + break; + } + + WARN((i =3D=3D ttm->num_pages), "Missing get_user_page_done\n"); + } #endif } =20 --=20 2.24.0 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Sun Apr 28 23:50:46 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1573590306; cv=none; d=zoho.com; s=zohoarc; b=FORO9UmMdamytdXr451Q1J39Jl8EW/d+GKVKokWW/ZqyCc6gVruT6kdSn5R/CCPxFZAP264zybXQIfsZro8Y9ALBYkqMDva/hQGZeIfAtH6+8U5WoGZKlEeVbguz/L3G/sJoxLUNca9w+yV2NCb2qqEj14BeUmhEzoKt88A5X/w= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1573590306; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=zy81xrDNdifojFi3MoILC6a91S7Zl1LTcdOdlSvVm7g=; b=aoNQkKZJAcSTkSDRkqMNP6aluKeH/bxq8hY55HEENCGbbfPC628Qogkp9+kcIQdFxr6Dx3J4O/LsskW4EplRHMouG6jxUV4MwGs1Ki+HkavuKQLS/GH/BVeJtMcRCOrwWwDTs0vKgJICeIwBfPcPx3AX8JUF4XZYcn7eVBhJXyA= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1573590306467435.9648385736824; Tue, 12 Nov 2019 12:25:06 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchk-0003cK-5x; Tue, 12 Nov 2019 20:24:00 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1iUchj-0003bd-Fi for xen-devel@lists.xenproject.org; Tue, 12 Nov 2019 20:23:59 +0000 Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 3720eaec-058a-11ea-adbe-bc764e2007e4; Tue, 12 Nov 2019 20:22:57 +0000 (UTC) Received: by mail-qt1-x841.google.com with SMTP id 30so21229527qtz.12 for ; Tue, 12 Nov 2019 12:22:57 -0800 (PST) Received: from ziepe.ca (hlfxns017vw-142-162-113-180.dhcp-dynamic.fibreop.ns.bellaliant.net. 
[142.162.113.180]) by smtp.gmail.com with ESMTPSA id e17sm11976100qtk.65.2019.11.12.12.22.48 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Nov 2019 12:22:49 -0800 (PST) Received: from jgg by mlx.ziepe.ca with local (Exim 4.90_1) (envelope-from ) id 1iUcgZ-0003km-Og; Tue, 12 Nov 2019 16:22:47 -0400 X-Inumbo-ID: 3720eaec-058a-11ea-adbe-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=TxMyNoBy2Vh0FuNzLlkoKS+NXMLGUgxJi6WOhoKCkIE=; b=RCYjjsryG/zclfi3HPwvo6swUwJ/3iqPUFaNXc8rVRIr0e1YnbMKzjvqB4hc8ZFG3n qvbzn3xirQNX7sP353mC/diEPnTxL7qTBIhmBEKaUe9mGivLe6ZqUjNWEd83u9A0Q2Jr cLcjNnp5LGh2n8RyqDbXkGAvFWv+bF3D1POMC3b7R8u1ZWIYJS1mrzHick/kaleAVIwD welqaBX/mRrPh1a3CYGJDyWlwSzgFOPeYC+BpsO3I+jvp9MKjNH4mgO30mR5/WPdwr5Z GYKLo859lXQTTCjG1tIDtL3DkoQcwT/RLYPXqJ7tfGY2AJUhNI8k84lHDf9MHk6Ibecr TYSA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TxMyNoBy2Vh0FuNzLlkoKS+NXMLGUgxJi6WOhoKCkIE=; b=PNdup5VAF2hvnVNEuGo41i8UN0YOTDSdwXrz5hNRznMdoM9bEYboRaD/lFTrc3Rv4Z DZkkQMx8TKQw3uwFdDJ4wh7UJgVEL8X5//xAuvqchwHqXBSelLwf8MJChoAKXApCsHLR 8LfK3d/vsqwvEx7eKsEYuA+rReY+Sr7yzZ2X64saMWwDFHlpb/EDkUIfiHq1JwD/0OR2 TxXsCQKmDUr0i3G40aImxX0HDT+pIijFEG20QuKj8r3KkirRpetURtB8XMctoiwTTSn+ Q7BdXbPx9KUYB0gP3gQZS3zCSz34RXLBJ0ejUOKLAVbVLxuEi7YeyEgIQG5p0/5fET57 moXA== X-Gm-Message-State: APjAAAV9HScMGJoAfOZeTvllimZNOuOXYyhJMBBSdTmymy7MTlMhcq5t tfYgu/cZqsp4gpXEE4e74aPrTb/mfhg= X-Google-Smtp-Source: APXvYqxRnDplX9KJ365sr0HZFy9cbeVb7VLCzrb4nrSX++bivfrcKFiWw7rl5wyBfz4/9Fp6ENzHZg== X-Received: by 2002:ac8:385d:: with SMTP id r29mr34184611qtb.52.1573590176907; Tue, 12 Nov 2019 12:22:56 -0800 (PST) From: Jason Gunthorpe To: linux-mm@kvack.org, Jerome Glisse , Ralph Campbell , John Hubbard , Felix.Kuehling@amd.com Date: Tue, 12 Nov 2019 16:22:30 -0400 Message-Id: <20191112202231.3856-14-jgg@ziepe.ca> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca> References: <20191112202231.3856-1-jgg@ziepe.ca> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 13/14] mm/hmm: remove hmm_mirror and related X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , David Zhou , Mike Marciniszyn , Stefano Stabellini , Oleksandr Andrushchenko , linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro , amd-gfx@lists.freedesktop.org, Christoph Hellwig , Jason Gunthorpe , dri-devel@lists.freedesktop.org, Alex Deucher , xen-devel@lists.xenproject.org, Boris Ostrovsky , Petr Cvek , =?UTF-8?q?Christian=20K=C3=B6nig?= , Ben Skeggs Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Jason Gunthorpe The only two users of this are now converted to use mmu_interval_notifier, delete all the code and update hmm.rst. 
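For a driver that used to implement hmm_mirror_ops, the replacement boils down to an invalidate() hook on the interval notifier, roughly as in this hedged sketch (my_object, driver_lock and my_invalidate are placeholder names; the structure mirrors the amdgpu conversion earlier in this series):

    /* What sync_cpu_device_pagetables() becomes: an invalidate() callback on
     * the mmu_interval_notifier embedded in the driver's own object.
     */
    static bool my_invalidate(struct mmu_interval_notifier *mni,
			      const struct mmu_notifier_range *range,
			      unsigned long cur_seq)
    {
	struct my_object *obj = container_of(mni, struct my_object, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;	/* core retries from a blockable context */

	mutex_lock(&obj->driver_lock);
	mmu_interval_set_seq(mni, cur_seq);
	/* stop device access to the pages covered by 'range' here */
	mutex_unlock(&obj->driver_lock);
	return true;
    }

    static const struct mmu_interval_notifier_ops my_notifier_ops = {
	.invalidate = my_invalidate,
    };
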
Reviewed-by: J=C3=A9r=C3=B4me Glisse Tested-by: Ralph Campbell Signed-off-by: Jason Gunthorpe Reviewed-by: Christoph Hellwig --- Documentation/vm/hmm.rst | 105 ++++----------- include/linux/hmm.h | 183 +------------------------ mm/Kconfig | 1 - mm/hmm.c | 285 ++------------------------------------- 4 files changed, 34 insertions(+), 540 deletions(-) diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst index 0a5960beccf76d..893a8ba0e9fefb 100644 --- a/Documentation/vm/hmm.rst +++ b/Documentation/vm/hmm.rst @@ -147,49 +147,16 @@ Address space mirroring implementation and API Address space mirroring's main objective is to allow duplication of a rang= e of CPU page table into a device page table; HMM helps keep both synchronized.= A device driver that wants to mirror a process address space must start with= the -registration of an hmm_mirror struct:: - - int hmm_mirror_register(struct hmm_mirror *mirror, - struct mm_struct *mm); - -The mirror struct has a set of callbacks that are used -to propagate CPU page tables:: - - struct hmm_mirror_ops { - /* release() - release hmm_mirror - * - * @mirror: pointer to struct hmm_mirror - * - * This is called when the mm_struct is being released. The callback - * must ensure that all access to any pages obtained from this mirror - * is halted before the callback returns. All future access should - * fault. - */ - void (*release)(struct hmm_mirror *mirror); - - /* sync_cpu_device_pagetables() - synchronize page tables - * - * @mirror: pointer to struct hmm_mirror - * @update: update information (see struct mmu_notifier_range) - * Return: -EAGAIN if update.blockable false and callback need to - * block, 0 otherwise. - * - * This callback ultimately originates from mmu_notifiers when the CPU - * page table is updated. The device driver must update its page table - * in response to this callback. The update argument tells what action - * to perform. - * - * The device driver must not return from this callback until the dev= ice - * page tables are completely updated (TLBs flushed, etc); this is a - * synchronous call. - */ - int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror, - const struct hmm_update *update); - }; - -The device driver must perform the update action to the range (mark range -read only, or fully unmap, etc.). The device must complete the update befo= re -the driver callback returns. +registration of a mmu_interval_notifier:: + + mni->ops =3D &driver_ops; + int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni, + unsigned long start, unsigned long length, + struct mm_struct *mm); + +During the driver_ops->invalidate() callback the device driver must perform +the update action to the range (mark range read only, or fully unmap, +etc.). The device must complete the update before the driver callback retu= rns. =20 When the device driver wants to populate a range of virtual addresses, it = can use:: @@ -216,70 +183,46 @@ The usage pattern is:: struct hmm_range range; ... =20 + range.notifier =3D &mni; range.start =3D ...; range.end =3D ...; range.pfns =3D ...; range.flags =3D ...; range.values =3D ...; range.pfn_shift =3D ...; - hmm_range_register(&range, mirror); =20 - /* - * Just wait for range to be valid, safe to ignore return value as we - * will use the return value of hmm_range_fault() below under the - * mmap_sem to ascertain the validity of the range. 
- */ - hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC); + if (!mmget_not_zero(mni->notifier.mm)) + return -EFAULT; =20 again: + range.notifier_seq =3D mmu_interval_read_begin(&mni); down_read(&mm->mmap_sem); ret =3D hmm_range_fault(&range, HMM_RANGE_SNAPSHOT); if (ret) { up_read(&mm->mmap_sem); - if (ret =3D=3D -EBUSY) { - /* - * No need to check hmm_range_wait_until_valid() return value - * on retry we will get proper error with hmm_range_fault() - */ - hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC); - goto again; - } - hmm_range_unregister(&range); + if (ret =3D=3D -EBUSY) + goto again; return ret; } + up_read(&mm->mmap_sem); + take_lock(driver->update); - if (!hmm_range_valid(&range)) { + if (mmu_interval_read_retry(&ni, range.notifier_seq) { release_lock(driver->update); - up_read(&mm->mmap_sem); goto again; } =20 - // Use pfns array content to update device page table + /* Use pfns array content to update device page table, + * under the update lock */ =20 - hmm_range_unregister(&range); release_lock(driver->update); - up_read(&mm->mmap_sem); return 0; } =20 The driver->update lock is the same lock that the driver takes inside its -sync_cpu_device_pagetables() callback. That lock must be held before calli= ng -hmm_range_valid() to avoid any race with a concurrent CPU page table updat= e. - -HMM implements all this on top of the mmu_notifier API because we wanted a -simpler API and also to be able to perform optimizations latter on like do= ing -concurrent device updates in multi-devices scenario. - -HMM also serves as an impedance mismatch between how CPU page table updates -are done (by CPU write to the page table and TLB flushes) and how devices -update their own page table. Device updates are a multi-step process. Firs= t, -appropriate commands are written to a buffer, then this buffer is schedule= d for -execution on the device. It is only once the device has executed commands = in -the buffer that the update is done. Creating and scheduling the update com= mand -buffer can happen concurrently for multiple devices. Waiting for each devi= ce to -report commands as executed is serialized (there is no point in doing this -concurrently). - +invalidate() callback. That lock must be held before calling +mmu_interval_read_retry() to avoid any race with a concurrent CPU page tab= le +update. 
=20 Leverage default_flags and pfn_flags_mask =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D diff --git a/include/linux/hmm.h b/include/linux/hmm.h index cb69bf10dc788c..1225b3c87aba05 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -68,29 +68,6 @@ #include #include =20 - -/* - * struct hmm - HMM per mm struct - * - * @mm: mm struct this HMM struct is bound to - * @lock: lock protecting ranges list - * @ranges: list of range being snapshotted - * @mirrors: list of mirrors for this mm - * @mmu_notifier: mmu notifier to track updates to CPU page table - * @mirrors_sem: read/write semaphore protecting the mirrors list - * @wq: wait queue for user waiting on a range invalidation - * @notifiers: count of active mmu notifiers - */ -struct hmm { - struct mmu_notifier mmu_notifier; - spinlock_t ranges_lock; - struct list_head ranges; - struct list_head mirrors; - struct rw_semaphore mirrors_sem; - wait_queue_head_t wq; - long notifiers; -}; - /* * hmm_pfn_flag_e - HMM flag enums * @@ -143,9 +120,8 @@ enum hmm_pfn_value_e { /* * struct hmm_range - track invalidation lock on virtual address range * - * @notifier: an optional mmu_interval_notifier - * @notifier_seq: when notifier is used this is the result of - * mmu_interval_read_begin() + * @notifier: a mmu_interval_notifier that includes the start/end + * @notifier_seq: result of mmu_interval_read_begin() * @hmm: the core HMM structure this range is active against * @vma: the vm area struct for the range * @list: all range lock are on a list @@ -162,8 +138,6 @@ enum hmm_pfn_value_e { struct hmm_range { struct mmu_interval_notifier *notifier; unsigned long notifier_seq; - struct hmm *hmm; - struct list_head list; unsigned long start; unsigned long end; uint64_t *pfns; @@ -172,32 +146,8 @@ struct hmm_range { uint64_t default_flags; uint64_t pfn_flags_mask; uint8_t pfn_shift; - bool valid; }; =20 -/* - * hmm_range_wait_until_valid() - wait for range to be valid - * @range: range affected by invalidation to wait on - * @timeout: time out for wait in ms (ie abort wait after that period of t= ime) - * Return: true if the range is valid, false otherwise. - */ -static inline bool hmm_range_wait_until_valid(struct hmm_range *range, - unsigned long timeout) -{ - return wait_event_timeout(range->hmm->wq, range->valid, - msecs_to_jiffies(timeout)) !=3D 0; -} - -/* - * hmm_range_valid() - test if a range is valid or not - * @range: range - * Return: true if the range is valid, false otherwise. - */ -static inline bool hmm_range_valid(struct hmm_range *range) -{ - return range->valid; -} - /* * hmm_device_entry_to_page() - return struct page pointed to by a device = entry * @range: range use to decode device entry value @@ -267,111 +217,6 @@ static inline uint64_t hmm_device_entry_from_pfn(cons= t struct hmm_range *range, range->flags[HMM_PFN_VALID]; } =20 -/* - * Mirroring: how to synchronize device page table with CPU page table. - * - * A device driver that is participating in HMM mirroring must always - * synchronize with CPU page table updates. For this, device drivers can e= ither - * directly use mmu_notifier APIs or they can use the hmm_mirror API. Devi= ce - * drivers can decide to register one mirror per device per process, or ju= st - * one mirror per process for a group of devices. The pattern is: - * - * int device_bind_address_space(..., struct mm_struct *mm, ...) 
- * { - * struct device_address_space *das; - * - * // Device driver specific initialization, and allocation of das - * // which contains an hmm_mirror struct as one of its fields. - * ... - * - * ret =3D hmm_mirror_register(&das->mirror, mm, &device_mirror_o= ps); - * if (ret) { - * // Cleanup on error - * return ret; - * } - * - * // Other device driver specific initialization - * ... - * } - * - * Once an hmm_mirror is registered for an address space, the device driver - * will get callbacks through sync_cpu_device_pagetables() operation (see - * hmm_mirror_ops struct). - * - * Device driver must not free the struct containing the hmm_mirror struct - * before calling hmm_mirror_unregister(). The expected usage is to do tha= t when - * the device driver is unbinding from an address space. - * - * - * void device_unbind_address_space(struct device_address_space *das) - * { - * // Device driver specific cleanup - * ... - * - * hmm_mirror_unregister(&das->mirror); - * - * // Other device driver specific cleanup, and now das can be fr= eed - * ... - * } - */ - -struct hmm_mirror; - -/* - * struct hmm_mirror_ops - HMM mirror device operations callback - * - * @update: callback to update range on a device - */ -struct hmm_mirror_ops { - /* release() - release hmm_mirror - * - * @mirror: pointer to struct hmm_mirror - * - * This is called when the mm_struct is being released. The callback - * must ensure that all access to any pages obtained from this mirror - * is halted before the callback returns. All future access should - * fault. - */ - void (*release)(struct hmm_mirror *mirror); - - /* sync_cpu_device_pagetables() - synchronize page tables - * - * @mirror: pointer to struct hmm_mirror - * @update: update information (see struct mmu_notifier_range) - * Return: -EAGAIN if mmu_notifier_range_blockable(update) is false - * and callback needs to block, 0 otherwise. - * - * This callback ultimately originates from mmu_notifiers when the CPU - * page table is updated. The device driver must update its page table - * in response to this callback. The update argument tells what action - * to perform. - * - * The device driver must not return from this callback until the device - * page tables are completely updated (TLBs flushed, etc); this is a - * synchronous call. - */ - int (*sync_cpu_device_pagetables)( - struct hmm_mirror *mirror, - const struct mmu_notifier_range *update); -}; - -/* - * struct hmm_mirror - mirror struct for a device driver - * - * @hmm: pointer to struct hmm (which is unique per mm_struct) - * @ops: device driver callback for HMM mirror operations - * @list: for list of mirrors of a given mm - * - * Each address space (mm_struct) being mirrored by a device must register= one - * instance of an hmm_mirror struct with HMM. HMM will track the list of a= ll - * mirrors for each mm_struct. - */ -struct hmm_mirror { - struct hmm *hmm; - const struct hmm_mirror_ops *ops; - struct list_head list; -}; - /* * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that c= ase. */ @@ -381,15 +226,9 @@ struct hmm_mirror { #define HMM_FAULT_SNAPSHOT (1 << 1) =20 #ifdef CONFIG_HMM_MIRROR -int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm); -void hmm_mirror_unregister(struct hmm_mirror *mirror); - /* * Please see Documentation/vm/hmm.rst for how to use the range API. 
*/ -int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror); -void hmm_range_unregister(struct hmm_range *range); - long hmm_range_fault(struct hmm_range *range, unsigned int flags); =20 long hmm_range_dma_map(struct hmm_range *range, @@ -401,24 +240,6 @@ long hmm_range_dma_unmap(struct hmm_range *range, dma_addr_t *daddrs, bool dirty); #else -int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm) -{ - return -EOPNOTSUPP; -} - -void hmm_mirror_unregister(struct hmm_mirror *mirror) -{ -} - -int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror) -{ - return -EOPNOTSUPP; -} - -void hmm_range_unregister(struct hmm_range *range) -{ -} - static inline long hmm_range_fault(struct hmm_range *range, unsigned int f= lags) { return -EOPNOTSUPP; diff --git a/mm/Kconfig b/mm/Kconfig index d0b5046d9aeffd..e38ff1d5968dbf 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -675,7 +675,6 @@ config DEV_PAGEMAP_OPS config HMM_MIRROR bool depends on MMU - depends on MMU_NOTIFIER =20 config DEVICE_PRIVATE bool "Unaddressable device memory (GPU memory, ...)" diff --git a/mm/hmm.c b/mm/hmm.c index 8d060c5dabe37b..aed2f39d1a986c 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -26,193 +26,6 @@ #include #include =20 -static struct mmu_notifier *hmm_alloc_notifier(struct mm_struct *mm) -{ - struct hmm *hmm; - - hmm =3D kzalloc(sizeof(*hmm), GFP_KERNEL); - if (!hmm) - return ERR_PTR(-ENOMEM); - - init_waitqueue_head(&hmm->wq); - INIT_LIST_HEAD(&hmm->mirrors); - init_rwsem(&hmm->mirrors_sem); - INIT_LIST_HEAD(&hmm->ranges); - spin_lock_init(&hmm->ranges_lock); - hmm->notifiers =3D 0; - return &hmm->mmu_notifier; -} - -static void hmm_free_notifier(struct mmu_notifier *mn) -{ - struct hmm *hmm =3D container_of(mn, struct hmm, mmu_notifier); - - WARN_ON(!list_empty(&hmm->ranges)); - WARN_ON(!list_empty(&hmm->mirrors)); - kfree(hmm); -} - -static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm) -{ - struct hmm *hmm =3D container_of(mn, struct hmm, mmu_notifier); - struct hmm_mirror *mirror; - - /* - * Since hmm_range_register() holds the mmget() lock hmm_release() is - * prevented as long as a range exists. - */ - WARN_ON(!list_empty_careful(&hmm->ranges)); - - down_read(&hmm->mirrors_sem); - list_for_each_entry(mirror, &hmm->mirrors, list) { - /* - * Note: The driver is not allowed to trigger - * hmm_mirror_unregister() from this thread. 
- */ - if (mirror->ops->release) - mirror->ops->release(mirror); - } - up_read(&hmm->mirrors_sem); -} - -static void notifiers_decrement(struct hmm *hmm) -{ - unsigned long flags; - - spin_lock_irqsave(&hmm->ranges_lock, flags); - hmm->notifiers--; - if (!hmm->notifiers) { - struct hmm_range *range; - - list_for_each_entry(range, &hmm->ranges, list) { - if (range->valid) - continue; - range->valid =3D true; - } - wake_up_all(&hmm->wq); - } - spin_unlock_irqrestore(&hmm->ranges_lock, flags); -} - -static int hmm_invalidate_range_start(struct mmu_notifier *mn, - const struct mmu_notifier_range *nrange) -{ - struct hmm *hmm =3D container_of(mn, struct hmm, mmu_notifier); - struct hmm_mirror *mirror; - struct hmm_range *range; - unsigned long flags; - int ret =3D 0; - - spin_lock_irqsave(&hmm->ranges_lock, flags); - hmm->notifiers++; - list_for_each_entry(range, &hmm->ranges, list) { - if (nrange->end < range->start || nrange->start >=3D range->end) - continue; - - range->valid =3D false; - } - spin_unlock_irqrestore(&hmm->ranges_lock, flags); - - if (mmu_notifier_range_blockable(nrange)) - down_read(&hmm->mirrors_sem); - else if (!down_read_trylock(&hmm->mirrors_sem)) { - ret =3D -EAGAIN; - goto out; - } - - list_for_each_entry(mirror, &hmm->mirrors, list) { - int rc; - - rc =3D mirror->ops->sync_cpu_device_pagetables(mirror, nrange); - if (rc) { - if (WARN_ON(mmu_notifier_range_blockable(nrange) || - rc !=3D -EAGAIN)) - continue; - ret =3D -EAGAIN; - break; - } - } - up_read(&hmm->mirrors_sem); - -out: - if (ret) - notifiers_decrement(hmm); - return ret; -} - -static void hmm_invalidate_range_end(struct mmu_notifier *mn, - const struct mmu_notifier_range *nrange) -{ - struct hmm *hmm =3D container_of(mn, struct hmm, mmu_notifier); - - notifiers_decrement(hmm); -} - -static const struct mmu_notifier_ops hmm_mmu_notifier_ops =3D { - .release =3D hmm_release, - .invalidate_range_start =3D hmm_invalidate_range_start, - .invalidate_range_end =3D hmm_invalidate_range_end, - .alloc_notifier =3D hmm_alloc_notifier, - .free_notifier =3D hmm_free_notifier, -}; - -/* - * hmm_mirror_register() - register a mirror against an mm - * - * @mirror: new mirror struct to register - * @mm: mm to register against - * Return: 0 on success, -ENOMEM if no memory, -EINVAL if invalid arguments - * - * To start mirroring a process address space, the device driver must regi= ster - * an HMM mirror struct. - * - * The caller cannot unregister the hmm_mirror while any ranges are - * registered. - * - * Callers using this function must put a call to mmu_notifier_synchronize= () - * in their module exit functions. - */ -int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm) -{ - struct mmu_notifier *mn; - - lockdep_assert_held_write(&mm->mmap_sem); - - /* Sanity check */ - if (!mm || !mirror || !mirror->ops) - return -EINVAL; - - mn =3D mmu_notifier_get_locked(&hmm_mmu_notifier_ops, mm); - if (IS_ERR(mn)) - return PTR_ERR(mn); - mirror->hmm =3D container_of(mn, struct hmm, mmu_notifier); - - down_write(&mirror->hmm->mirrors_sem); - list_add(&mirror->list, &mirror->hmm->mirrors); - up_write(&mirror->hmm->mirrors_sem); - - return 0; -} -EXPORT_SYMBOL(hmm_mirror_register); - -/* - * hmm_mirror_unregister() - unregister a mirror - * - * @mirror: mirror struct to unregister - * - * Stop mirroring a process address space, and cleanup. 
- */ -void hmm_mirror_unregister(struct hmm_mirror *mirror) -{ - struct hmm *hmm =3D mirror->hmm; - - down_write(&hmm->mirrors_sem); - list_del(&mirror->list); - up_write(&hmm->mirrors_sem); - mmu_notifier_put(&hmm->mmu_notifier); -} -EXPORT_SYMBOL(hmm_mirror_unregister); - struct hmm_vma_walk { struct hmm_range *range; struct dev_pagemap *pgmap; @@ -785,87 +598,6 @@ static void hmm_pfns_clear(struct hmm_range *range, *pfns =3D range->values[HMM_PFN_NONE]; } =20 -/* - * hmm_range_register() - start tracking change to CPU page table over a r= ange - * @range: range - * @mm: the mm struct for the range of virtual address - * - * Return: 0 on success, -EFAULT if the address space is no longer valid - * - * Track updates to the CPU page table see include/linux/hmm.h - */ -int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror) -{ - struct hmm *hmm =3D mirror->hmm; - unsigned long flags; - - range->valid =3D false; - range->hmm =3D NULL; - - if ((range->start & (PAGE_SIZE - 1)) || (range->end & (PAGE_SIZE - 1))) - return -EINVAL; - if (range->start >=3D range->end) - return -EINVAL; - - /* Prevent hmm_release() from running while the range is valid */ - if (!mmget_not_zero(hmm->mmu_notifier.mm)) - return -EFAULT; - - /* Initialize range to track CPU page table updates. */ - spin_lock_irqsave(&hmm->ranges_lock, flags); - - range->hmm =3D hmm; - list_add(&range->list, &hmm->ranges); - - /* - * If there are any concurrent notifiers we have to wait for them for - * the range to be valid (see hmm_range_wait_until_valid()). - */ - if (!hmm->notifiers) - range->valid =3D true; - spin_unlock_irqrestore(&hmm->ranges_lock, flags); - - return 0; -} -EXPORT_SYMBOL(hmm_range_register); - -/* - * hmm_range_unregister() - stop tracking change to CPU page table over a = range - * @range: range - * - * Range struct is used to track updates to the CPU page table after a cal= l to - * hmm_range_register(). See include/linux/hmm.h for how to use it. - */ -void hmm_range_unregister(struct hmm_range *range) -{ - struct hmm *hmm =3D range->hmm; - unsigned long flags; - - spin_lock_irqsave(&hmm->ranges_lock, flags); - list_del_init(&range->list); - spin_unlock_irqrestore(&hmm->ranges_lock, flags); - - /* Drop reference taken by hmm_range_register() */ - mmput(hmm->mmu_notifier.mm); - - /* - * The range is now invalid and the ref on the hmm is dropped, so - * poison the pointer. Leave other fields in place, for the caller's - * use. - */ - range->valid =3D false; - memset(&range->hmm, POISON_INUSE, sizeof(range->hmm)); -} -EXPORT_SYMBOL(hmm_range_unregister); - -static bool needs_retry(struct hmm_range *range) -{ - if (range->notifier) - return mmu_interval_check_retry(range->notifier, - range->notifier_seq); - return !range->valid; -} - static const struct mm_walk_ops hmm_walk_ops =3D { .pud_entry =3D hmm_vma_walk_pud, .pmd_entry =3D hmm_vma_walk_pmd, @@ -906,20 +638,16 @@ long hmm_range_fault(struct hmm_range *range, unsigne= d int flags) const unsigned long device_vma =3D VM_IO | VM_PFNMAP | VM_MIXEDMAP; unsigned long start =3D range->start, end; struct hmm_vma_walk hmm_vma_walk; - struct mm_struct *mm; + struct mm_struct *mm =3D range->notifier->mm; struct vm_area_struct *vma; int ret; =20 - if (range->notifier) - mm =3D range->notifier->mm; - else - mm =3D range->hmm->mmu_notifier.mm; - lockdep_assert_held(&mm->mmap_sem); =20 do { /* If range is no longer valid force retry. 
-		if (needs_retry(range))
+		if (mmu_interval_check_retry(range->notifier,
+					     range->notifier_seq))
 			return -EBUSY;
 
 		vma = find_vma(mm, start);
@@ -952,7 +680,9 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 		start = hmm_vma_walk.last;
 
 		/* Keep trying while the range is valid. */
-	} while (ret == -EBUSY && !needs_retry(range));
+	} while (ret == -EBUSY &&
+		 !mmu_interval_check_retry(range->notifier,
+					   range->notifier_seq));
 
 	if (ret) {
 		unsigned long i;
@@ -1010,7 +740,8 @@ long hmm_range_dma_map(struct hmm_range *range, struct device *device,
 			continue;
 
 		/* Check if range is being invalidated */
-		if (needs_retry(range)) {
+		if (mmu_interval_check_retry(range->notifier,
+					     range->notifier_seq)) {
 			ret = -EBUSY;
 			goto unmap;
 		}
-- 
2.24.0

From nobody Sun Apr 28 23:50:46 2024
From: Jason Gunthorpe
To: linux-mm@kvack.org, Jerome Glisse, Ralph Campbell, John Hubbard,
	Felix.Kuehling@amd.com
Cc: Juergen Gross, David Zhou, Mike Marciniszyn, Stefano Stabellini,
	Oleksandr Andrushchenko, linux-rdma@vger.kernel.org,
	nouveau@lists.freedesktop.org, Dennis Dalessandro,
	amd-gfx@lists.freedesktop.org, Christoph Hellwig, Jason Gunthorpe,
	dri-devel@lists.freedesktop.org, Alex Deucher,
	xen-devel@lists.xenproject.org, Boris Ostrovsky, Petr Cvek,
	Christian König, Ben Skeggs
Date: Tue, 12 Nov 2019 16:22:31 -0400
Message-Id: <20191112202231.3856-15-jgg@ziepe.ca>
In-Reply-To: <20191112202231.3856-1-jgg@ziepe.ca>
References: <20191112202231.3856-1-jgg@ziepe.ca>
Subject: [Xen-devel] [PATCH v3 14/14] xen/gntdev: use mmu_interval_notifier_insert

From: Jason Gunthorpe

gntdev simply wants to monitor a specific VMA for any notifier events;
this can be done straightforwardly using mmu_interval_notifier_insert()
over the VMA's VA range. The notifier should remain attached until the
original VMA is destroyed.

It is unclear if any of this is even sane, but at least a lot of
duplicate code is removed.
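As a rough illustration of the pattern the commit message describes (this
sketch is not part of the patch), a driver that wants to watch a single VMA
attaches an interval notifier over the VMA's range and tears down its device
mappings from the invalidate callback. The "my_*" names below are
hypothetical; only the mmu_interval_notifier_* and
mmu_notifier_range_blockable() calls are the real API this series uses.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct my_map {
	struct mmu_interval_notifier notifier;
	/* driver-private description of the device mapping lives here */
};

/* Invoked by the mmu_notifier core whenever the watched range changes. */
static bool my_invalidate(struct mmu_interval_notifier *mn,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_map *map = container_of(mn, struct my_map, notifier);

	/* If we are not allowed to sleep, ask the core to retry blockably. */
	if (!mmu_notifier_range_blockable(range))
		return false;

	pr_debug("invalidate %p: %lx-%lx\n", map, range->start, range->end);
	/* Tear down any device mapping overlapping this range here. */
	return true;
}

static const struct mmu_interval_notifier_ops my_ops = {
	.invalidate = my_invalidate,
};

/* Attach over the VMA's VA range; the caller holds mmap_sem for write. */
static int my_map_attach(struct my_map *map, struct vm_area_struct *vma)
{
	return mmu_interval_notifier_insert_locked(&map->notifier, vma->vm_mm,
						   vma->vm_start,
						   vma->vm_end - vma->vm_start,
						   &my_ops);
}

The notifier is detached with mmu_interval_notifier_remove() once the VMA
goes away, which is exactly what gntdev_vma_close() does in the hunks below.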
Reviewed-by: Boris Ostrovsky
Signed-off-by: Jason Gunthorpe
---
 drivers/xen/gntdev-common.h |   8 +-
 drivers/xen/gntdev.c        | 179 ++++++++++--------------------------
 2 files changed, 49 insertions(+), 138 deletions(-)

diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
index 2f8b949c3eeb14..91e44c04f7876c 100644
--- a/drivers/xen/gntdev-common.h
+++ b/drivers/xen/gntdev-common.h
@@ -21,15 +21,8 @@ struct gntdev_dmabuf_priv;
 struct gntdev_priv {
 	/* Maps with visible offsets in the file descriptor. */
 	struct list_head maps;
-	/*
-	 * Maps that are not visible; will be freed on munmap.
-	 * Only populated if populate_freeable_maps == 1
-	 */
-	struct list_head freeable_maps;
 	/* lock protects maps and freeable_maps. */
 	struct mutex lock;
-	struct mm_struct *mm;
-	struct mmu_notifier mn;
 
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 	/* Device for which DMA memory is allocated. */
@@ -49,6 +42,7 @@ struct gntdev_unmap_notify {
 };
 
 struct gntdev_grant_map {
+	struct mmu_interval_notifier notifier;
 	struct list_head next;
 	struct vm_area_struct *vma;
 	int index;
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 81401f386c9ce0..a04ddf2a68afa5 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -63,7 +63,6 @@ MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by "
 static atomic_t pages_mapped = ATOMIC_INIT(0);
 
 static int use_ptemod;
-#define populate_freeable_maps use_ptemod
 
 static int unmap_grant_pages(struct gntdev_grant_map *map,
 			     int offset, int pages);
@@ -249,12 +248,6 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map)
 		evtchn_put(map->notify.event);
 	}
 
-	if (populate_freeable_maps && priv) {
-		mutex_lock(&priv->lock);
-		list_del(&map->next);
-		mutex_unlock(&priv->lock);
-	}
-
 	if (map->pages && !use_ptemod)
 		unmap_grant_pages(map, 0, map->count);
 	gntdev_free_map(map);
@@ -444,16 +437,9 @@ static void gntdev_vma_close(struct vm_area_struct *vma)
 
 	pr_debug("gntdev_vma_close %p\n", vma);
 	if (use_ptemod) {
-		/* It is possible that an mmu notifier could be running
-		 * concurrently, so take priv->lock to ensure that the vma won't
-		 * vanishing during the unmap_grant_pages call, since we will
-		 * spin here until that completes. Such a concurrent call will
-		 * not do any unmapping, since that has been done prior to
-		 * closing the vma, but it may still iterate the unmap_ops list.
-		 */
-		mutex_lock(&priv->lock);
+		WARN_ON(map->vma != vma);
+		mmu_interval_notifier_remove(&map->notifier);
 		map->vma = NULL;
-		mutex_unlock(&priv->lock);
 	}
 	vma->vm_private_data = NULL;
 	gntdev_put_map(priv, map);
@@ -475,109 +461,44 @@ static const struct vm_operations_struct gntdev_vmops = {
 
 /* ------------------------------------------------------------------ */
 
-static bool in_range(struct gntdev_grant_map *map,
-		     unsigned long start, unsigned long end)
-{
-	if (!map->vma)
-		return false;
-	if (map->vma->vm_start >= end)
-		return false;
-	if (map->vma->vm_end <= start)
-		return false;
-
-	return true;
-}
-
-static int unmap_if_in_range(struct gntdev_grant_map *map,
-			     unsigned long start, unsigned long end,
-			     bool blockable)
+static bool gntdev_invalidate(struct mmu_interval_notifier *mn,
+			      const struct mmu_notifier_range *range,
+			      unsigned long cur_seq)
 {
+	struct gntdev_grant_map *map =
+		container_of(mn, struct gntdev_grant_map, notifier);
 	unsigned long mstart, mend;
 	int err;
 
-	if (!in_range(map, start, end))
-		return 0;
+	if (!mmu_notifier_range_blockable(range))
+		return false;
 
-	if (!blockable)
-		return -EAGAIN;
+	/*
+	 * If the VMA is split or otherwise changed the notifier is not
+	 * updated, but we don't want to process VA's outside the modified
+	 * VMA. FIXME: It would be much more understandable to just prevent
+	 * modifying the VMA in the first place.
+	 */
+	if (map->vma->vm_start >= range->end ||
+	    map->vma->vm_end <= range->start)
+		return true;
 
-	mstart = max(start, map->vma->vm_start);
-	mend = min(end, map->vma->vm_end);
+	mstart = max(range->start, map->vma->vm_start);
+	mend = min(range->end, map->vma->vm_end);
 	pr_debug("map %d+%d (%lx %lx), range %lx %lx, mrange %lx %lx\n",
 		 map->index, map->count,
 		 map->vma->vm_start, map->vma->vm_end,
-		 start, end, mstart, mend);
+		 range->start, range->end, mstart, mend);
 	err = unmap_grant_pages(map,
				(mstart - map->vma->vm_start) >> PAGE_SHIFT,
				(mend - mstart) >> PAGE_SHIFT);
 	WARN_ON(err);
 
-	return 0;
-}
-
-static int mn_invl_range_start(struct mmu_notifier *mn,
-			       const struct mmu_notifier_range *range)
-{
-	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
-	struct gntdev_grant_map *map;
-	int ret = 0;
-
-	if (mmu_notifier_range_blockable(range))
-		mutex_lock(&priv->lock);
-	else if (!mutex_trylock(&priv->lock))
-		return -EAGAIN;
-
-	list_for_each_entry(map, &priv->maps, next) {
-		ret = unmap_if_in_range(map, range->start, range->end,
-					mmu_notifier_range_blockable(range));
-		if (ret)
-			goto out_unlock;
-	}
-	list_for_each_entry(map, &priv->freeable_maps, next) {
-		ret = unmap_if_in_range(map, range->start, range->end,
-					mmu_notifier_range_blockable(range));
-		if (ret)
-			goto out_unlock;
-	}
-
-out_unlock:
-	mutex_unlock(&priv->lock);
-
-	return ret;
-}
-
-static void mn_release(struct mmu_notifier *mn,
-		       struct mm_struct *mm)
-{
-	struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
-	struct gntdev_grant_map *map;
-	int err;
-
-	mutex_lock(&priv->lock);
-	list_for_each_entry(map, &priv->maps, next) {
-		if (!map->vma)
-			continue;
-		pr_debug("map %d+%d (%lx %lx)\n",
-			 map->index, map->count,
-			 map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
-	}
-	list_for_each_entry(map, &priv->freeable_maps, next) {
-		if (!map->vma)
-			continue;
-		pr_debug("map %d+%d (%lx %lx)\n",
-			 map->index, map->count,
-			 map->vma->vm_start, map->vma->vm_end);
-		err = unmap_grant_pages(map, /* offset */ 0, map->count);
-		WARN_ON(err);
-	}
-	mutex_unlock(&priv->lock);
+	return true;
 }
 
-static const struct mmu_notifier_ops gntdev_mmu_ops = {
-	.release                = mn_release,
-	.invalidate_range_start = mn_invl_range_start,
+static const struct mmu_interval_notifier_ops gntdev_mmu_ops = {
+	.invalidate = gntdev_invalidate,
 };
 
 /* ------------------------------------------------------------------ */
@@ -592,7 +513,6 @@ static int gntdev_open(struct inode *inode, struct file *flip)
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&priv->maps);
-	INIT_LIST_HEAD(&priv->freeable_maps);
 	mutex_init(&priv->lock);
 
 #ifdef CONFIG_XEN_GNTDEV_DMABUF
@@ -604,17 +524,6 @@ static int gntdev_open(struct inode *inode, struct file *flip)
 	}
 #endif
 
-	if (use_ptemod) {
-		priv->mm = get_task_mm(current);
-		if (!priv->mm) {
-			kfree(priv);
-			return -ENOMEM;
-		}
-		priv->mn.ops = &gntdev_mmu_ops;
-		ret = mmu_notifier_register(&priv->mn, priv->mm);
-		mmput(priv->mm);
-	}
-
 	if (ret) {
 		kfree(priv);
 		return ret;
@@ -644,16 +553,12 @@ static int gntdev_release(struct inode *inode, struct file *flip)
 		list_del(&map->next);
 		gntdev_put_map(NULL /* already removed */, map);
 	}
-	WARN_ON(!list_empty(&priv->freeable_maps));
 	mutex_unlock(&priv->lock);
 
 #ifdef CONFIG_XEN_GNTDEV_DMABUF
 	gntdev_dmabuf_fini(priv->dmabuf_priv);
 #endif
 
-	if (use_ptemod)
-		mmu_notifier_unregister(&priv->mn, priv->mm);
-
 	kfree(priv);
 	return 0;
 }
@@ -714,8 +619,6 @@ static long gntdev_ioctl_unmap_grant_ref(struct gntdev_priv *priv,
 	map = gntdev_find_map_index(priv, op.index >> PAGE_SHIFT, op.count);
 	if (map) {
 		list_del(&map->next);
-		if (populate_freeable_maps)
-			list_add_tail(&map->next, &priv->freeable_maps);
 		err = 0;
 	}
 	mutex_unlock(&priv->lock);
@@ -1087,11 +990,6 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		goto unlock_out;
 	if (use_ptemod && map->vma)
 		goto unlock_out;
-	if (use_ptemod && priv->mm != vma->vm_mm) {
-		pr_warn("Huh? Other mm?\n");
-		goto unlock_out;
-	}
-
 	refcount_inc(&map->users);
 
 	vma->vm_ops = &gntdev_vmops;
@@ -1102,10 +1000,6 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 		vma->vm_flags |= VM_DONTCOPY;
 
 	vma->vm_private_data = map;
-
-	if (use_ptemod)
-		map->vma = vma;
-
 	if (map->flags) {
 		if ((vma->vm_flags & VM_WRITE) &&
 		    (map->flags & GNTMAP_readonly))
@@ -1116,8 +1010,28 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 			map->flags |= GNTMAP_readonly;
 	}
 
+	if (use_ptemod) {
+		map->vma = vma;
+		err = mmu_interval_notifier_insert_locked(
+			&map->notifier, vma->vm_mm, vma->vm_start,
+			vma->vm_end - vma->vm_start, &gntdev_mmu_ops);
+		if (err)
+			goto out_unlock_put;
+	}
 	mutex_unlock(&priv->lock);
 
+	/*
+	 * gntdev takes the address of the PTE in find_grant_ptes() and passes
+	 * it to the hypervisor in gntdev_map_grant_pages(). The purpose of
+	 * the notifier is to prevent the hypervisor pointer to the PTE from
+	 * going stale.
+	 *
+	 * Since this vma's mappings can't be touched without the mmap_sem,
+	 * and we are holding it now, there is no need for the notifier_range
+	 * locking pattern.
+	 */
+	mmu_interval_read_begin(&map->notifier);
+
 	if (use_ptemod) {
 		map->pages_vm_start = vma->vm_start;
 		err = apply_to_page_range(vma->vm_mm, vma->vm_start,
@@ -1166,8 +1080,11 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 	mutex_unlock(&priv->lock);
 out_put_map:
 	if (use_ptemod) {
-		map->vma = NULL;
 		unmap_grant_pages(map, 0, map->count);
+		if (map->vma) {
+			mmu_interval_notifier_remove(&map->notifier);
+			map->vma = NULL;
+		}
 	}
 	gntdev_put_map(priv, map);
 	return err;
-- 
2.24.0
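For reference, and again not part of the patch: the "notifier_range locking
pattern" that the new comment in gntdev_mmap() says it can skip normally
pairs a sequence number taken with mmu_interval_read_begin() against a
re-check under the driver's own lock before the device mapping is committed.
A minimal sketch, reusing the hypothetical my_map from the earlier
illustration and assuming the invalidate callback updates the sequence with
mmu_interval_set_seq() under the same driver_lock:

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

static int my_map_pages(struct my_map *map, struct mutex *driver_lock)
{
	unsigned long seq;

	do {
		seq = mmu_interval_read_begin(&map->notifier);

		/* Fault or collect the pages for the range here; may sleep. */

		mutex_lock(driver_lock);
		if (!mmu_interval_read_retry(&map->notifier, seq))
			break;	/* No invalidation raced with us. */
		/* An invalidation ran concurrently; redo the work. */
		mutex_unlock(driver_lock);
	} while (true);

	/* Program the device mapping while still holding driver_lock. */
	mutex_unlock(driver_lock);
	return 0;
}

gntdev can rely on mmap_sem instead: the VMA's mappings cannot change while
it is held across insert and mapping setup, so the retry loop above is
unnecessary there.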