From: Jason Gunthorpe
To: linux-mm@kvack.org, Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling@amd.com
Cc: Juergen Gross, David Zhou, Mike Marciniszyn, Stefano Stabellini, Oleksandr Andrushchenko, linux-rdma@vger.kernel.org, nouveau@lists.freedesktop.org, Dennis Dalessandro, amd-gfx@lists.freedesktop.org, Christoph Hellwig, Jason Gunthorpe, dri-devel@lists.freedesktop.org, Alex Deucher, xen-devel@lists.xenproject.org, Boris Ostrovsky, Petr Cvek, Christian König, Ben Skeggs
Date: Mon, 28 Oct 2019 17:10:27 -0300
Message-Id: <20191028201032.6352-11-jgg@ziepe.ca>
In-Reply-To: <20191028201032.6352-1-jgg@ziepe.ca>
References: <20191028201032.6352-1-jgg@ziepe.ca>
Subject: [Xen-devel] [PATCH v2 10/15] nouveau: use mmu_notifier directly for invalidate_range_start

From: Jason Gunthorpe

There is no reason to get the invalidate_range_start() callback via an
indirection through hmm_mirror; just register a normal notifier directly.
Cc: Ben Skeggs
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Cc: Ralph Campbell
Signed-off-by: Jason Gunthorpe
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 95 ++++++++++++++++++---------
 1 file changed, 63 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 668d4bd0c118f1..577f8811925a59 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -88,6 +88,7 @@ nouveau_ivmm_find(struct nouveau_svm *svm, u64 inst)
 }
 
 struct nouveau_svmm {
+	struct mmu_notifier notifier;
 	struct nouveau_vmm *vmm;
 	struct {
 		unsigned long start;
@@ -96,7 +97,6 @@ struct nouveau_svmm {
 
 	struct mutex mutex;
 
-	struct mm_struct *mm;
 	struct hmm_mirror mirror;
 };
 
@@ -251,10 +251,11 @@ nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit)
 }
 
 static int
-nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
-					const struct mmu_notifier_range *update)
+nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn,
+				    const struct mmu_notifier_range *update)
 {
-	struct nouveau_svmm *svmm = container_of(mirror, typeof(*svmm), mirror);
+	struct nouveau_svmm *svmm =
+		container_of(mn, struct nouveau_svmm, notifier);
 	unsigned long start = update->start;
 	unsigned long limit = update->end;
 
@@ -264,6 +265,9 @@ nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
 	SVMM_DBG(svmm, "invalidate %016lx-%016lx", start, limit);
 
 	mutex_lock(&svmm->mutex);
+	if (unlikely(!svmm->vmm))
+		goto out;
+
 	if (limit > svmm->unmanaged.start && start < svmm->unmanaged.limit) {
 		if (start < svmm->unmanaged.start) {
 			nouveau_svmm_invalidate(svmm, start,
@@ -273,19 +277,31 @@ nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
 	}
 
 	nouveau_svmm_invalidate(svmm, start, limit);
+
+out:
 	mutex_unlock(&svmm->mutex);
 	return 0;
 }
 
-static void
-nouveau_svmm_release(struct hmm_mirror *mirror)
+static void nouveau_svmm_free_notifier(struct mmu_notifier *mn)
+{
+	kfree(container_of(mn, struct nouveau_svmm, notifier));
+}
+
+static const struct mmu_notifier_ops nouveau_mn_ops = {
+	.invalidate_range_start = nouveau_svmm_invalidate_range_start,
+	.free_notifier = nouveau_svmm_free_notifier,
+};
+
+static int
+nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
+					const struct mmu_notifier_range *update)
 {
+	return 0;
 }
 
-static const struct hmm_mirror_ops
-nouveau_svmm = {
+static const struct hmm_mirror_ops nouveau_svmm = {
 	.sync_cpu_device_pagetables = nouveau_svmm_sync_cpu_device_pagetables,
-	.release = nouveau_svmm_release,
 };
 
 void
@@ -294,7 +310,10 @@ nouveau_svmm_fini(struct nouveau_svmm **psvmm)
 	struct nouveau_svmm *svmm = *psvmm;
 	if (svmm) {
 		hmm_mirror_unregister(&svmm->mirror);
-		kfree(*psvmm);
+		mutex_lock(&svmm->mutex);
+		svmm->vmm = NULL;
+		mutex_unlock(&svmm->mutex);
+		mmu_notifier_put(&svmm->notifier);
 		*psvmm = NULL;
 	}
 }
@@ -320,7 +339,7 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
 	mutex_lock(&cli->mutex);
 	if (cli->svm.cli) {
 		ret = -EBUSY;
-		goto done;
+		goto out_free;
 	}
 
 	/* Allocate a new GPU VMM that can support SVM (managed by the
@@ -335,24 +354,33 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
 			.fault_replay = true,
 		}, sizeof(struct gp100_vmm_v0), &cli->svm.vmm);
 	if (ret)
-		goto done;
+		goto out_free;
 
-	/* Enable HMM mirroring of CPU address-space to VMM. */
-	svmm->mm = get_task_mm(current);
-	down_write(&svmm->mm->mmap_sem);
+	down_write(&current->mm->mmap_sem);
 	svmm->mirror.ops = &nouveau_svmm;
-	ret = hmm_mirror_register(&svmm->mirror, svmm->mm);
-	if (ret == 0) {
-		cli->svm.svmm = svmm;
-		cli->svm.cli = cli;
-	}
-	up_write(&svmm->mm->mmap_sem);
-	mmput(svmm->mm);
+	ret = hmm_mirror_register(&svmm->mirror, current->mm);
+	if (ret)
+		goto out_mm_unlock;
 
-done:
+	svmm->notifier.ops = &nouveau_mn_ops;
+	ret = __mmu_notifier_register(&svmm->notifier, current->mm);
 	if (ret)
-		nouveau_svmm_fini(&svmm);
+		goto out_hmm_unregister;
+	/* Note, ownership of svmm transfers to mmu_notifier */
+
+	cli->svm.svmm = svmm;
+	cli->svm.cli = cli;
+	up_write(&current->mm->mmap_sem);
 	mutex_unlock(&cli->mutex);
+	return 0;
+
+out_hmm_unregister:
+	hmm_mirror_unregister(&svmm->mirror);
+out_mm_unlock:
+	up_write(&current->mm->mmap_sem);
+out_free:
+	mutex_unlock(&cli->mutex);
+	kfree(svmm);
 	return ret;
 }
 
@@ -494,12 +522,12 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 
 	ret = hmm_range_register(range, &svmm->mirror);
 	if (ret) {
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		return (int)ret;
 	}
 
 	if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) {
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		return -EBUSY;
 	}
 
@@ -507,7 +535,7 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 	if (ret <= 0) {
 		if (ret == 0)
 			ret = -EBUSY;
-		up_read(&svmm->mm->mmap_sem);
+		up_read(&svmm->notifier.mm->mmap_sem);
 		hmm_range_unregister(range);
 		return ret;
 	}
@@ -587,12 +615,15 @@ nouveau_svm_fault(struct nvif_notify *notify)
 	args.i.p.version = 0;
 
 	for (fi = 0; fn = fi + 1, fi < buffer->fault_nr; fi = fn) {
+		struct mm_struct *mm;
+
 		/* Cancel any faults from non-SVM channels. */
 		if (!(svmm = buffer->fault[fi]->svmm)) {
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
 		SVMM_DBG(svmm, "addr %016llx", buffer->fault[fi]->addr);
+		mm = svmm->notifier.mm;
 
 		/* We try and group handling of faults within a small
 		 * window into a single update.
@@ -609,11 +640,11 @@ nouveau_svm_fault(struct nvif_notify *notify)
 		/* Intersect fault window with the CPU VMA, cancelling
 		 * the fault if the address is invalid.
 		 */
-		down_read(&svmm->mm->mmap_sem);
-		vma = find_vma_intersection(svmm->mm, start, limit);
+		down_read(&mm->mmap_sem);
+		vma = find_vma_intersection(mm, start, limit);
 		if (!vma) {
 			SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit);
-			up_read(&svmm->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
@@ -623,7 +654,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 
 		if (buffer->fault[fi]->addr != start) {
 			SVMM_ERR(svmm, "addr %016llx", buffer->fault[fi]->addr);
-			up_read(&svmm->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 			nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
 			continue;
 		}
@@ -704,7 +735,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 					   NULL);
 			svmm->vmm->vmm.object.client->super = false;
 			mutex_unlock(&svmm->mutex);
-			up_read(&svmm->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 		}
 
 		/* Cancel any faults in the window whose pages didn't manage
-- 
2.23.0
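
[Editor's sketch, not part of the patch.] For readers unfamiliar with the mmu_notifier interface, the pattern the patch converts nouveau to looks roughly like the following. It is a minimal illustration under stated assumptions: the example_ctx type, its fields, and the example_* names are invented here, while mmu_notifier_ops, __mmu_notifier_register(), mmu_notifier_put() and container_of() are the kernel interfaces the patch actually relies on.

/*
 * Minimal sketch of registering an mmu_notifier directly, instead of
 * going through hmm_mirror. "example_ctx" and the example_* names are
 * hypothetical; the mmu_notifier API calls are the real ones.
 */
#include <linux/mmu_notifier.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct example_ctx {
	struct mmu_notifier notifier;	/* embedded, not a pointer */
	/* ... driver-private mirror state ... */
};

static int example_invalidate_range_start(struct mmu_notifier *mn,
					  const struct mmu_notifier_range *range)
{
	/* Recover the containing object from the embedded notifier. */
	struct example_ctx *ctx =
		container_of(mn, struct example_ctx, notifier);

	/* A real driver would shoot down device mappings covering
	 * [range->start, range->end) here. */
	(void)ctx;
	return 0;
}

static void example_free_notifier(struct mmu_notifier *mn)
{
	/* Last reference dropped; it is now safe to free the object. */
	kfree(container_of(mn, struct example_ctx, notifier));
}

static const struct mmu_notifier_ops example_mn_ops = {
	.invalidate_range_start	= example_invalidate_range_start,
	.free_notifier		= example_free_notifier,
};

static int example_register(struct example_ctx *ctx)
{
	/* __mmu_notifier_register() requires mmap_sem held for write,
	 * which is how nouveau_svmm_init() calls it above; plain
	 * mmu_notifier_register() takes the lock itself. */
	ctx->notifier.ops = &example_mn_ops;
	return __mmu_notifier_register(&ctx->notifier, current->mm);
}

static void example_unregister(struct example_ctx *ctx)
{
	/* Ownership has passed to the notifier core: freeing happens in
	 * example_free_notifier() once no callbacks can still be running,
	 * not here. */
	mmu_notifier_put(&ctx->notifier);
}

This is also the point of the "ownership of svmm transfers to mmu_notifier" comment in the patch: after registration succeeds, nouveau_svmm_fini() no longer calls kfree() directly but drops its reference with mmu_notifier_put(), and the actual free is deferred to nouveau_svmm_free_notifier() once concurrent invalidations have finished.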