From nobody Tue Dec 16 19:40:59 2025
Date: Wed, 17 Jul 2024 15:24:27 -0700
In-Reply-To:
 <20240717222429.2011540-1-axelrasmussen@google.com>
Mime-Version: 1.0
References: <20240717222429.2011540-1-axelrasmussen@google.com>
Message-ID: <20240717222429.2011540-2-axelrasmussen@google.com>
Subject: [PATCH 6.6 1/3] vfio: Create vfio_fs_type with inode per device
From: Axel Rasmussen
To: stable@vger.kernel.org
Cc: Alex Williamson, Ankit Agrawal, Eric Auger, Jason Gunthorpe,
	Kevin Tian, Kunwu Chan, Leah Rumancik, Miaohe Lin,
	Stefan Hajnoczi, Yi Liu, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Jason Gunthorpe, Axel Rasmussen
Content-Type: text/plain; charset="utf-8"

From: Alex Williamson

commit b7c5e64fecfa88764791679cca4786ac65de739e upstream.

By linking all the device fds we provide to userspace to an address
space through a new pseudo fs, we can use tools like
unmap_mapping_range() to zap all vmas associated with a device.

Suggested-by: Jason Gunthorpe
Reviewed-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
Link: https://lore.kernel.org/r/20240530045236.1005864-2-alex.williamson@redhat.com
Signed-off-by: Alex Williamson
Signed-off-by: Axel Rasmussen
---
 drivers/vfio/device_cdev.c |  7 ++++++
 drivers/vfio/group.c       |  7 ++++++
 drivers/vfio/vfio_main.c   | 44 ++++++++++++++++++++++++++++++++++++++
 include/linux/vfio.h       |  1 +
 4 files changed, 59 insertions(+)

diff --git a/drivers/vfio/device_cdev.c b/drivers/vfio/device_cdev.c
index e75da0a70d1f..bb1817bd4ff3 100644
--- a/drivers/vfio/device_cdev.c
+++ b/drivers/vfio/device_cdev.c
@@ -39,6 +39,13 @@ int vfio_device_fops_cdev_open(struct inode *inode, struct file *filep)
 
 	filep->private_data = df;
 
+	/*
+	 * Use the pseudo fs inode on the device to link all mmaps
+	 * to the same address space, allowing us to unmap all vmas
+	 * associated to this device using unmap_mapping_range().
+	 */
+	filep->f_mapping = device->inode->i_mapping;
+
 	return 0;
 
 err_put_registration:
diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
index 610a429c6191..ded364588d29 100644
--- a/drivers/vfio/group.c
+++ b/drivers/vfio/group.c
@@ -286,6 +286,13 @@ static struct file *vfio_device_open_file(struct vfio_device *device)
 	 */
 	filep->f_mode |= (FMODE_PREAD | FMODE_PWRITE);
 
+	/*
+	 * Use the pseudo fs inode on the device to link all mmaps
+	 * to the same address space, allowing us to unmap all vmas
+	 * associated to this device using unmap_mapping_range().
+	 */
+	filep->f_mapping = device->inode->i_mapping;
+
 	if (device->group->type == VFIO_NO_IOMMU)
 		dev_warn(device->dev, "vfio-noiommu device opened by user "
 			 "(%s:%d)\n", current->comm, task_pid_nr(current));
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 40732e8ed4c6..a205d3a4e379 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -22,8 +22,10 @@
 #include <linux/list.h>
 #include <linux/miscdevice.h>
 #include <linux/module.h>
+#include <linux/mount.h>
 #include <linux/mutex.h>
 #include <linux/pci.h>
+#include <linux/pseudo_fs.h>
 #include <linux/rwsem.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
@@ -43,9 +45,13 @@
 #define DRIVER_AUTHOR	"Alex Williamson "
 #define DRIVER_DESC	"VFIO - User Level meta-driver"
 
+#define VFIO_MAGIC 0x5646494f /* "VFIO" */
+
 static struct vfio {
 	struct class			*device_class;
 	struct ida			device_ida;
+	struct vfsmount			*vfs_mount;
+	int				fs_count;
 } vfio;
 
 #ifdef CONFIG_VFIO_NOIOMMU
@@ -186,6 +192,8 @@ static void vfio_device_release(struct device *dev)
 	if (device->ops->release)
 		device->ops->release(device);
 
+	iput(device->inode);
+	simple_release_fs(&vfio.vfs_mount, &vfio.fs_count);
 	kvfree(device);
 }
 
@@ -228,6 +236,34 @@ struct vfio_device *_vfio_alloc_device(size_t size, struct device *dev,
 }
 EXPORT_SYMBOL_GPL(_vfio_alloc_device);
 
+static int vfio_fs_init_fs_context(struct fs_context *fc)
+{
+	return init_pseudo(fc, VFIO_MAGIC) ? 0 : -ENOMEM;
+}
+
+static struct file_system_type vfio_fs_type = {
+	.name = "vfio",
+	.owner = THIS_MODULE,
+	.init_fs_context = vfio_fs_init_fs_context,
+	.kill_sb = kill_anon_super,
+};
+
+static struct inode *vfio_fs_inode_new(void)
+{
+	struct inode *inode;
+	int ret;
+
+	ret = simple_pin_fs(&vfio_fs_type, &vfio.vfs_mount, &vfio.fs_count);
+	if (ret)
+		return ERR_PTR(ret);
+
+	inode = alloc_anon_inode(vfio.vfs_mount->mnt_sb);
+	if (IS_ERR(inode))
+		simple_release_fs(&vfio.vfs_mount, &vfio.fs_count);
+
+	return inode;
+}
+
 /*
  * Initialize a vfio_device so it can be registered to vfio core.
  */
@@ -246,6 +282,11 @@ static int vfio_init_device(struct vfio_device *device, struct device *dev,
 	init_completion(&device->comp);
 	device->dev = dev;
 	device->ops = ops;
+	device->inode = vfio_fs_inode_new();
+	if (IS_ERR(device->inode)) {
+		ret = PTR_ERR(device->inode);
+		goto out_inode;
+	}
 
 	if (ops->init) {
 		ret = ops->init(device);
@@ -260,6 +301,9 @@ static int vfio_init_device(struct vfio_device *device, struct device *dev,
 	return 0;
 
 out_uninit:
+	iput(device->inode);
+	simple_release_fs(&vfio.vfs_mount, &vfio.fs_count);
+out_inode:
 	vfio_release_device_set(device);
 	ida_free(&vfio.device_ida, device->index);
 	return ret;
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 5ac5f182ce0b..514a7f9b3ef4 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -64,6 +64,7 @@ struct vfio_device {
 	struct completion comp;
 	struct iommufd_access *iommufd_access;
 	void (*put_kvm)(struct kvm *kvm);
+	struct inode *inode;
 #if IS_ENABLED(CONFIG_IOMMUFD)
 	struct iommufd_device *iommufd_device;
 	u8 iommufd_attached:1;
-- 
2.45.2.993.g49e7a77208-goog
From nobody Tue Dec 16 19:40:59 2025
Date: Wed, 17 Jul 2024 15:24:28 -0700
In-Reply-To: <20240717222429.2011540-1-axelrasmussen@google.com>
References: <20240717222429.2011540-1-axelrasmussen@google.com>
Mime-Version: 1.0
Message-ID:
 <20240717222429.2011540-3-axelrasmussen@google.com>
Subject: [PATCH 6.6 2/3] vfio/pci: Use unmap_mapping_range()
From: Axel Rasmussen
To: stable@vger.kernel.org
Cc: Alex Williamson, Ankit Agrawal, Eric Auger, Jason Gunthorpe,
	Kevin Tian, Kunwu Chan, Leah Rumancik, Miaohe Lin,
	Stefan Hajnoczi, Yi Liu, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Jason Gunthorpe, Axel Rasmussen
Content-Type: text/plain; charset="utf-8"

From: Alex Williamson

commit aac6db75a9fc2c7a6f73e152df8f15101dda38e6 upstream.

With the vfio device fd tied to the address space of the pseudo fs
inode, we can use the mm to track all vmas that might be mmap'ing
device BARs, which removes our vma_list and all the complicated lock
ordering necessary to manually zap each related vma.

Note that we can no longer store the pfn in vm_pgoff if we want to use
unmap_mapping_range() to zap a selective portion of the device fd
corresponding to BAR mappings.

This also converts our mmap fault handler to use vmf_insert_pfn()
because we no longer have a vma_list to avoid the concurrency problem
with io_remap_pfn_range().  The goal is to eventually use the vm_ops
huge_fault handler to avoid the additional faulting overhead, but
vmf_insert_pfn_{pmd,pud}() need to learn about pfnmaps first.

Also, Jason notes that a race exists between unmap_mapping_range() and
the fops mmap callback if we were to call io_remap_pfn_range() to
populate the vma on mmap.  Specifically, mmap_region() does call_mmap()
before it does vma_link_file() which gives a window where the vma is
populated but invisible to unmap_mapping_range().

Suggested-by: Jason Gunthorpe
Reviewed-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
Link: https://lore.kernel.org/r/20240530045236.1005864-3-alex.williamson@redhat.com
Signed-off-by: Alex Williamson
Signed-off-by: Axel Rasmussen
---
 drivers/vfio/pci/vfio_pci_core.c | 264 +++++++------------------------
 include/linux/vfio_pci_core.h    |   2 -
 2 files changed, 55 insertions(+), 211 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index a3c545dd174e..7b74c71a3169 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1607,100 +1607,20 @@ ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *bu
 }
 EXPORT_SYMBOL_GPL(vfio_pci_core_write);
 
-/* Return 1 on zap and vma_lock acquired, 0 on contention (only with @try) */
-static int vfio_pci_zap_and_vma_lock(struct vfio_pci_core_device *vdev, bool try)
+static void vfio_pci_zap_bars(struct vfio_pci_core_device *vdev)
 {
-	struct vfio_pci_mmap_vma *mmap_vma, *tmp;
+	struct vfio_device *core_vdev = &vdev->vdev;
+	loff_t start = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_BAR0_REGION_INDEX);
+	loff_t end = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_ROM_REGION_INDEX);
+	loff_t len = end - start;
 
-	/*
-	 * Lock ordering:
-	 * vma_lock is nested under mmap_lock for vm_ops callback paths.
-	 * The memory_lock semaphore is used by both code paths calling
-	 * into this function to zap vmas and the vm_ops.fault callback
-	 * to protect the memory enable state of the device.
-	 *
-	 * When zapping vmas we need to maintain the mmap_lock => vma_lock
-	 * ordering, which requires using vma_lock to walk vma_list to
-	 * acquire an mm, then dropping vma_lock to get the mmap_lock and
-	 * reacquiring vma_lock.  This logic is derived from similar
-	 * requirements in uverbs_user_mmap_disassociate().
-	 *
-	 * mmap_lock must always be the top-level lock when it is taken.
-	 * Therefore we can only hold the memory_lock write lock when
-	 * vma_list is empty, as we'd need to take mmap_lock to clear
-	 * entries.  vma_list can only be guaranteed empty when holding
-	 * vma_lock, thus memory_lock is nested under vma_lock.
-	 *
-	 * This enables the vm_ops.fault callback to acquire vma_lock,
-	 * followed by memory_lock read lock, while already holding
-	 * mmap_lock without risk of deadlock.
-	 */
-	while (1) {
-		struct mm_struct *mm = NULL;
-
-		if (try) {
-			if (!mutex_trylock(&vdev->vma_lock))
-				return 0;
-		} else {
-			mutex_lock(&vdev->vma_lock);
-		}
-		while (!list_empty(&vdev->vma_list)) {
-			mmap_vma = list_first_entry(&vdev->vma_list,
-						    struct vfio_pci_mmap_vma,
-						    vma_next);
-			mm = mmap_vma->vma->vm_mm;
-			if (mmget_not_zero(mm))
-				break;
-
-			list_del(&mmap_vma->vma_next);
-			kfree(mmap_vma);
-			mm = NULL;
-		}
-		if (!mm)
-			return 1;
-		mutex_unlock(&vdev->vma_lock);
-
-		if (try) {
-			if (!mmap_read_trylock(mm)) {
-				mmput(mm);
-				return 0;
-			}
-		} else {
-			mmap_read_lock(mm);
-		}
-		if (try) {
-			if (!mutex_trylock(&vdev->vma_lock)) {
-				mmap_read_unlock(mm);
-				mmput(mm);
-				return 0;
-			}
-		} else {
-			mutex_lock(&vdev->vma_lock);
-		}
-		list_for_each_entry_safe(mmap_vma, tmp,
-					 &vdev->vma_list, vma_next) {
-			struct vm_area_struct *vma = mmap_vma->vma;
-
-			if (vma->vm_mm != mm)
-				continue;
-
-			list_del(&mmap_vma->vma_next);
-			kfree(mmap_vma);
-
-			zap_vma_ptes(vma, vma->vm_start,
-				     vma->vm_end - vma->vm_start);
-		}
-		mutex_unlock(&vdev->vma_lock);
-		mmap_read_unlock(mm);
-		mmput(mm);
-	}
+	unmap_mapping_range(core_vdev->inode->i_mapping, start, len, true);
 }
 
 void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_core_device *vdev)
 {
-	vfio_pci_zap_and_vma_lock(vdev, false);
 	down_write(&vdev->memory_lock);
-	mutex_unlock(&vdev->vma_lock);
+	vfio_pci_zap_bars(vdev);
 }
 
 u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_core_device *vdev)
@@ -1722,99 +1642,41 @@ void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev, u16 c
 	up_write(&vdev->memory_lock);
 }
 
-/* Caller holds vma_lock */
-static int __vfio_pci_add_vma(struct vfio_pci_core_device *vdev,
-			      struct vm_area_struct *vma)
-{
-	struct vfio_pci_mmap_vma *mmap_vma;
-
-	mmap_vma = kmalloc(sizeof(*mmap_vma), GFP_KERNEL_ACCOUNT);
-	if (!mmap_vma)
-		return -ENOMEM;
-
-	mmap_vma->vma = vma;
-	list_add(&mmap_vma->vma_next, &vdev->vma_list);
-
-	return 0;
-}
-
-/*
- * Zap mmaps on open so that we can fault them in on access and therefore
- * our vma_list only tracks mappings accessed since last zap.
- */
-static void vfio_pci_mmap_open(struct vm_area_struct *vma)
-{
-	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-}
-
-static void vfio_pci_mmap_close(struct vm_area_struct *vma)
+static unsigned long vma_to_pfn(struct vm_area_struct *vma)
 {
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
-	struct vfio_pci_mmap_vma *mmap_vma;
+	int index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
+	u64 pgoff;
 
-	mutex_lock(&vdev->vma_lock);
-	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
-		if (mmap_vma->vma == vma) {
-			list_del(&mmap_vma->vma_next);
-			kfree(mmap_vma);
-			break;
-		}
-	}
-	mutex_unlock(&vdev->vma_lock);
+	pgoff = vma->vm_pgoff &
+		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+
+	return (pci_resource_start(vdev->pdev, index) >> PAGE_SHIFT) + pgoff;
 }
 
 static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
-	struct vfio_pci_mmap_vma *mmap_vma;
-	vm_fault_t ret = VM_FAULT_NOPAGE;
+	unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
+	vm_fault_t ret = VM_FAULT_SIGBUS;
 
-	mutex_lock(&vdev->vma_lock);
-	down_read(&vdev->memory_lock);
+	pfn = vma_to_pfn(vma);
 
-	/*
-	 * Memory region cannot be accessed if the low power feature is engaged
-	 * or memory access is disabled.
-	 */
-	if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev)) {
-		ret = VM_FAULT_SIGBUS;
-		goto up_out;
-	}
+	down_read(&vdev->memory_lock);
 
-	/*
-	 * We populate the whole vma on fault, so we need to test whether
-	 * the vma has already been mapped, such as for concurrent faults
-	 * to the same vma.  io_remap_pfn_range() will trigger a BUG_ON if
-	 * we ask it to fill the same range again.
-	 */
-	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
-		if (mmap_vma->vma == vma)
-			goto up_out;
-	}
+	if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev))
+		goto out_disabled;
 
-	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
-			       vma->vm_end - vma->vm_start,
-			       vma->vm_page_prot)) {
-		ret = VM_FAULT_SIGBUS;
-		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-		goto up_out;
-	}
+	ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff);
 
-	if (__vfio_pci_add_vma(vdev, vma)) {
-		ret = VM_FAULT_OOM;
-		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-	}
-
-up_out:
+out_disabled:
 	up_read(&vdev->memory_lock);
-	mutex_unlock(&vdev->vma_lock);
+
 	return ret;
 }
 
 static const struct vm_operations_struct vfio_pci_mmap_ops = {
-	.open = vfio_pci_mmap_open,
-	.close = vfio_pci_mmap_close,
 	.fault = vfio_pci_mmap_fault,
 };
 
@@ -1877,11 +1739,12 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
 
 	vma->vm_private_data = vdev;
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;
+	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
 	/*
-	 * See remap_pfn_range(), called from vfio_pci_fault() but we can't
-	 * change vm_flags within the fault handler.  Set them now.
+	 * Set vm_flags now, they should not be changed in the fault handler.
+	 * We want the same flags and page protection (decrypted above) as
+	 * io_remap_pfn_range() would set.
 	 */
 	vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &vfio_pci_mmap_ops;
@@ -2181,8 +2044,6 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
 	mutex_init(&vdev->ioeventfds_lock);
 	INIT_LIST_HEAD(&vdev->dummy_resources_list);
 	INIT_LIST_HEAD(&vdev->ioeventfds_list);
-	mutex_init(&vdev->vma_lock);
-	INIT_LIST_HEAD(&vdev->vma_list);
 	INIT_LIST_HEAD(&vdev->sriov_pfs_item);
 	init_rwsem(&vdev->memory_lock);
 	xa_init(&vdev->ctx);
@@ -2198,7 +2059,6 @@ void vfio_pci_core_release_dev(struct vfio_device *core_vdev)
 
 	mutex_destroy(&vdev->igate);
 	mutex_destroy(&vdev->ioeventfds_lock);
-	mutex_destroy(&vdev->vma_lock);
 	kfree(vdev->region);
 	kfree(vdev->pm_save);
 }
@@ -2476,26 +2336,15 @@ static int vfio_pci_dev_set_pm_runtime_get(struct vfio_device_set *dev_set)
 	return ret;
 }
 
-/*
- * We need to get memory_lock for each device, but devices can share mmap_lock,
- * therefore we need to zap and hold the vma_lock for each device, and only then
- * get each memory_lock.
- */
 static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
				      struct vfio_pci_group_info *groups,
				      struct iommufd_ctx *iommufd_ctx)
 {
-	struct vfio_pci_core_device *cur_mem;
-	struct vfio_pci_core_device *cur_vma;
-	struct vfio_pci_core_device *cur;
+	struct vfio_pci_core_device *vdev;
 	struct pci_dev *pdev;
-	bool is_mem = true;
 	int ret;
 
 	mutex_lock(&dev_set->lock);
-	cur_mem = list_first_entry(&dev_set->device_list,
-				   struct vfio_pci_core_device,
-				   vdev.dev_set_list);
 
 	pdev = vfio_pci_dev_set_resettable(dev_set);
 	if (!pdev) {
@@ -2512,7 +2361,7 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
 	if (ret)
 		goto err_unlock;
 
-	list_for_each_entry(cur_vma, &dev_set->device_list, vdev.dev_set_list) {
+	list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list) {
 		bool owned;
 
 		/*
@@ -2536,38 +2385,38 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
		 * Otherwise, reset is not allowed.
		 */
		if (iommufd_ctx) {
-			int devid = vfio_iommufd_get_dev_id(&cur_vma->vdev,
+			int devid = vfio_iommufd_get_dev_id(&vdev->vdev,
							    iommufd_ctx);
 
			owned = (devid > 0 || devid == -ENOENT);
		} else {
-			owned = vfio_dev_in_groups(&cur_vma->vdev, groups);
+			owned = vfio_dev_in_groups(&vdev->vdev, groups);
		}
 
		if (!owned) {
			ret = -EINVAL;
-			goto err_undo;
+			break;
		}
 
		/*
-		 * Locking multiple devices is prone to deadlock, runaway and
-		 * unwind if we hit contention.
+		 * Take the memory write lock for each device and zap BAR
+		 * mappings to prevent the user accessing the device while in
+		 * reset.  Locking multiple devices is prone to deadlock,
+		 * runaway and unwind if we hit contention.
		 */
-		if (!vfio_pci_zap_and_vma_lock(cur_vma, true)) {
+		if (!down_write_trylock(&vdev->memory_lock)) {
			ret = -EBUSY;
-			goto err_undo;
+			break;
		}
+
+		vfio_pci_zap_bars(vdev);
	}
-	cur_vma = NULL;
 
-	list_for_each_entry(cur_mem, &dev_set->device_list, vdev.dev_set_list) {
-		if (!down_write_trylock(&cur_mem->memory_lock)) {
-			ret = -EBUSY;
-			goto err_undo;
-		}
-		mutex_unlock(&cur_mem->vma_lock);
+	if (!list_entry_is_head(vdev,
+				&dev_set->device_list, vdev.dev_set_list)) {
+		vdev = list_prev_entry(vdev, vdev.dev_set_list);
+		goto err_undo;
	}
-	cur_mem = NULL;
 
	/*
	 * The pci_reset_bus() will reset all the devices in the bus.
@@ -2578,25 +2427,22 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
	 * cause the PCI config space reset without restoring the original
	 * state (saved locally in 'vdev->pm_save').
	 */
-	list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list)
-		vfio_pci_set_power_state(cur, PCI_D0);
+	list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list)
+		vfio_pci_set_power_state(vdev, PCI_D0);
 
	ret = pci_reset_bus(pdev);
 
+	vdev = list_last_entry(&dev_set->device_list,
			       struct vfio_pci_core_device, vdev.dev_set_list);
+
 err_undo:
-	list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list) {
-		if (cur == cur_mem)
-			is_mem = false;
-		if (cur == cur_vma)
-			break;
-		if (is_mem)
-			up_write(&cur->memory_lock);
-		else
-			mutex_unlock(&cur->vma_lock);
-	}
+	list_for_each_entry_from_reverse(vdev, &dev_set->device_list,
					 vdev.dev_set_list)
+		up_write(&vdev->memory_lock);
+
+	list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list)
+		pm_runtime_put(&vdev->pdev->dev);
 
-	list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list)
-		pm_runtime_put(&cur->pdev->dev);
 err_unlock:
	mutex_unlock(&dev_set->lock);
	return ret;
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index 562e8754869d..4f283514a1ed 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -93,8 +93,6 @@ struct vfio_pci_core_device {
	struct list_head	sriov_pfs_item;
	struct vfio_pci_core_device	*sriov_pf_core_dev;
	struct notifier_block	nb;
-	struct mutex		vma_lock;
-	struct list_head	vma_list;
	struct rw_semaphore	memory_lock;
 };
 
-- 
2.45.2.993.g49e7a77208-goog
From nobody Tue Dec 16 19:40:59 2025
Date: Wed, 17 Jul 2024 15:24:29 -0700
In-Reply-To: <20240717222429.2011540-1-axelrasmussen@google.com>
References: <20240717222429.2011540-1-axelrasmussen@google.com>
Mime-Version: 1.0
Message-ID: <20240717222429.2011540-4-axelrasmussen@google.com>
Subject: [PATCH 6.6 3/3] vfio/pci: Insert full vma on mmap'd MMIO fault
From: Axel Rasmussen
To: stable@vger.kernel.org
Cc: Alex Williamson, Ankit Agrawal, Eric Auger, Jason Gunthorpe,
	Kevin Tian, Kunwu Chan, Leah Rumancik, Miaohe Lin,
	Stefan Hajnoczi, Yi Liu, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Yan Zhao, Axel Rasmussen
Content-Type: text/plain; charset="utf-8"

From: Alex Williamson

commit d71a989cf5d961989c273093cdff2550acdde314 upstream.

In order to improve performance of typical scenarios we can try to
insert the entire vma on fault.  This accelerates typical cases, such
as when the MMIO region is DMA mapped by QEMU.  The vfio_iommu_type1
driver will fault in the entire DMA mapped range through
fixup_user_fault().

In synthetic testing, this improves the time required to walk a PCI
BAR mapping from userspace by roughly 1/3rd.

This is likely an interim solution until vmf_insert_pfn_{pmd,pud}()
gain support for pfnmaps.
Suggested-by: Yan Zhao
Link: https://lore.kernel.org/all/Zl6XdUkt%2FzMMGOLF@yzhao56-desk.sh.intel.com/
Reviewed-by: Yan Zhao
Link: https://lore.kernel.org/r/20240607035213.2054226-1-alex.williamson@redhat.com
Signed-off-by: Alex Williamson
Signed-off-by: Axel Rasmussen
---
 drivers/vfio/pci/vfio_pci_core.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 7b74c71a3169..b16678d186d3 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1659,6 +1659,7 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
 	unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
+	unsigned long addr = vma->vm_start;
 	vm_fault_t ret = VM_FAULT_SIGBUS;
 
 	pfn = vma_to_pfn(vma);
@@ -1666,11 +1667,25 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
 	down_read(&vdev->memory_lock);
 
 	if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev))
-		goto out_disabled;
+		goto out_unlock;
 
 	ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff);
+	if (ret & VM_FAULT_ERROR)
+		goto out_unlock;
 
-out_disabled:
+	/*
+	 * Pre-fault the remainder of the vma, abort further insertions and
+	 * suppress error if fault is encountered during pre-fault.
+	 */
+	for (; addr < vma->vm_end; addr += PAGE_SIZE, pfn++) {
+		if (addr == vmf->address)
+			continue;
+
+		if (vmf_insert_pfn(vma, addr, pfn) & VM_FAULT_ERROR)
+			break;
+	}
+
+out_unlock:
 	up_read(&vdev->memory_lock);
 
 	return ret;
-- 
2.45.2.993.g49e7a77208-goog