From nobody Sun May 5 06:45:37 2024
From: Kirti Wankhede
Subject: [PATCH v15 Kernel 1/7] vfio: KABI for migration interface for device state
Date: Fri, 20 Mar 2020 01:46:38 +0530
Message-ID: <1584649004-8285-2-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>

- Defined MIGRATION region type and sub-type.
- Defined vfio_device_migration_info structure which will be placed at the
  0th offset of the migration region to get/set VFIO device related
  information. Defined members of the structure and their usage on
  read/write access.
- Defined device states and state transition details.
- Defined the sequence to be followed while saving and resuming a VFIO
  device.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 227 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 9e843a147ead..d0021467af53 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
 #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
 #define VFIO_REGION_TYPE_GFX			(1)
 #define VFIO_REGION_TYPE_CCW			(2)
+#define VFIO_REGION_TYPE_MIGRATION		(3)
 
 /* sub-types for VFIO_REGION_TYPE_PCI_* */
 
@@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
 /* sub-types for VFIO_REGION_TYPE_CCW */
 #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
 
+/* sub-types for VFIO_REGION_TYPE_MIGRATION */
+#define VFIO_REGION_SUBTYPE_MIGRATION		(1)
+
+/*
+ * The structure vfio_device_migration_info is placed at the 0th offset of
+ * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device
+ * related migration information. Field accesses from this structure are
+ * only supported at their native width and alignment. Otherwise, the
+ * result is undefined and vendor drivers should return an error.
+ *
+ * device_state: (read/write)
+ *      - The user application writes to this field to inform the vendor
+ *        driver about the device state to be transitioned to.
+ *      - The vendor driver should take the necessary actions to change
+ *        the device state. After a successful transition to a given
+ *        state, the vendor driver should return success on the
+ *        write(device_state, state) system call. If the device state
+ *        transition fails, the vendor driver should return an
+ *        appropriate -errno for the fault condition.
+ *      - On the user application side, if the device state transition
+ *        fails, that is, if write(device_state, state) returns an error,
+ *        read device_state again to determine the current state of the
+ *        device from the vendor driver.
+ *      - The vendor driver should return the previous state of the
+ *        device unless the vendor driver has encountered an internal
+ *        error, in which case the vendor driver may report the
+ *        device_state VFIO_DEVICE_STATE_ERROR.
+ *      - The user application must use the device reset ioctl to recover
+ *        the device from the VFIO_DEVICE_STATE_ERROR state. If the
+ *        device is indicated to be in a valid device state by reading
+ *        device_state, the user application may attempt to transition
+ *        the device to any valid state reachable from the current state
+ *        or terminate itself.
+ *
+ * device_state consists of 3 bits:
+ * - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
+ *   it indicates the _STOP state. When the device state is changed to
+ *   _STOP, the driver should stop the device before write() returns.
+ * - If bit 1 is set, it indicates the _SAVING state, which means that
+ *   the driver should start gathering device state information that will
+ *   be provided to the VFIO user application to save the device's state.
+ * - If bit 2 is set, it indicates the _RESUMING state, which means that
+ *   the driver should prepare to resume the device. Data provided
+ *   through the migration region should be used to resume the device.
+ * Bits 3 - 31 are reserved for future use. To preserve them, the user
+ * application should perform a read-modify-write operation on this
+ * field when modifying the specified bits.
+ *
+ *  +------- _RESUMING
+ *  |+------ _SAVING
+ *  ||+----- _RUNNING
+ *  |||
+ *  000b => Device stopped, not saving or resuming
+ *  001b => Device running, which is the default state
+ *  010b => Stop the device & save the device state, stop-and-copy state
+ *  011b => Device running and save the device state, pre-copy state
+ *  100b => Device stopped and the device state is resuming
+ *  101b => Invalid state
+ *  110b => Error state
+ *  111b => Invalid state
+ *
+ * State transitions:
+ *
+ *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
+ *                (100b)     (001b)      (011b)        (010b)      (000b)
+ * 0. Running or default state
+ *                             |
+ *
+ * 1. Normal Shutdown (optional)
+ *                             |------------------------------------->|
+ *
+ * 2. Save the state or suspend
+ *                             |------------------------->|---------->|
+ *
+ * 3. Save the state during live migration
+ *                             |----------->|------------>|---------->|
+ *
+ * 4. Resuming
+ *                  |<---------|
+ *
+ * 5. Resumed
+ *                  |--------->|
+ *
+ * 0. The default state of a VFIO device is _RUNNING when the user
+ *    application starts.
+ * 1. During normal shutdown of the user application, the user
+ *    application may optionally change the VFIO device state from
+ *    _RUNNING to _STOP. The vendor driver must support this transition
+ *    but must not require it.
+ * 2. When the user application saves state or suspends the application,
+ *    the device state transitions from _RUNNING to stop-and-copy and
+ *    then to _STOP. On the state transition from _RUNNING to
+ *    stop-and-copy, the driver must stop the device, save the device
+ *    state and send it to the application through the migration region.
+ *    The sequence to be followed for such a transition is given below.
+ * 3. In live migration of the user application, the state transitions
+ *    from _RUNNING to pre-copy, to stop-and-copy, and to _STOP.
+ *    On the state transition from _RUNNING to pre-copy, the driver
+ *    should start gathering the device state while the application is
+ *    still running and send the device state data to the application
+ *    through the migration region.
+ *    On the state transition from pre-copy to stop-and-copy, the driver
+ *    must stop the device, save the device state and send it to the user
+ *    application through the migration region.
+ *    Vendor drivers must support the pre-copy state even for
+ *    implementations where no data is provided to the user before the
+ *    stop-and-copy state. The user must not be required to consume all
+ *    migration data before the device transitions to a new state,
+ *    including the stop-and-copy state.
+ *    The sequence to be followed for the above two transitions is given
+ *    below.
+ * 4. To start the resuming phase, the device state should be
+ *    transitioned from the _RUNNING to the _RESUMING state.
+ *    In the _RESUMING state, the driver should use the device state data
+ *    received through the migration region to resume the device.
+ * 5. After providing saved device data to the driver, the application
+ *    should change the state from _RESUMING to _RUNNING.
+ *
+ * reserved:
+ *      Reads on this field return zero and writes are ignored.
+ *
+ * pending_bytes: (read only)
+ *      The number of pending bytes still to be migrated from the vendor
+ *      driver.
+ *
+ * data_offset: (read only)
+ *      The user application should read data_offset in the migration
+ *      region from where the user application should read the device
+ *      data during the _SAVING state or write the device data during the
+ *      _RESUMING state. See below for details of the sequence to be
+ *      followed.
+ *
+ * data_size: (read/write)
+ *      The user application should read data_size to get the size in
+ *      bytes of the data copied in the migration region during the
+ *      _SAVING state and write the size in bytes of the data copied in
+ *      the migration region during the _RESUMING state.
+ *
+ * The format of the migration region is as follows:
+ *  ------------------------------------------------------------------
+ *  |vfio_device_migration_info|    data section                      |
+ *  |                          |     ///////////////////////////////  |
+ *  ------------------------------------------------------------------
+ *   ^                               ^
+ *  offset 0-trapped part           data_offset
+ *
+ * The structure vfio_device_migration_info is always followed by the
+ * data section in the region, so data_offset will always be nonzero.
+ * The offset from where the data is copied is decided by the kernel
+ * driver. The data section can be trapped, mapped, or partitioned,
+ * depending on how the kernel driver defines the data section. The data
+ * section partition can be defined as mapped by the sparse mmap
+ * capability. If mmapped, data_offset should be page aligned, whereas
+ * the initial section, which contains the vfio_device_migration_info
+ * structure, might not end at a page-aligned offset. The user is not
+ * required to access the data through mmap regardless of the mmap
+ * capabilities of the region.
+ * The vendor driver should determine whether and how to partition the
+ * data section, and should return data_offset accordingly.
+ *
+ * The sequence to be followed for the _SAVING|_RUNNING device state or
+ * pre-copy phase and for the _SAVING device state or stop-and-copy phase
+ * is as follows:
+ * a. Read pending_bytes, indicating the start of a new iteration to get
+ *    device data. Repeated reads on pending_bytes at this stage should
+ *    have no side effects.
+ *    If pending_bytes == 0, the user application should not iterate to
+ *    get data for that device.
+ *    If pending_bytes > 0, perform the following steps.
+ * b. Read data_offset, indicating that the vendor driver should make
+ *    data available through the data section. The vendor driver should
+ *    return this read operation only after data is available from
+ *    (region + data_offset) to (region + data_offset + data_size).
+ * c. Read data_size, which is the amount of data in bytes available
+ *    through the migration region.
+ *    Reads on data_offset and data_size should return the offset and
+ *    size of the current buffer if the user application reads
+ *    data_offset and data_size more than once here.
+ * d. Read data_size bytes of data from (region + data_offset) of the
+ *    migration region.
+ * e. Process the data.
+ * f. Read pending_bytes, which indicates that the data from the previous
+ *    iteration has been read. If pending_bytes > 0, go to step b.
+ *
+ * If an error occurs during the above sequence, the vendor driver can
+ * return an error code for the next read() or write() operation, which
+ * will terminate the loop. The user application should then take the
+ * next necessary action, for example, failing migration or terminating
+ * the user application.
+ *
+ * The user application can transition from the _SAVING|_RUNNING
+ * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of
+ * the number of pending bytes. The user application should iterate in
+ * _SAVING (stop-and-copy) until pending_bytes is 0.
+ *
+ * The sequence to be followed while in the _RESUMING device state is as
+ * follows:
+ * While data for this device is available, repeat the following steps:
+ * a. Read data_offset, from where the user application should write
+ *    data.
+ * b. Write migration data starting at (migration region + data_offset)
+ *    for the length determined by data_size from the migration source.
+ * c. Write data_size, which indicates to the vendor driver that data is
+ *    written in the migration region. The vendor driver should apply the
+ *    user-provided migration region data to the device resume state.
+ *
+ * For the user application, data is opaque. The user application should
+ * write data in the same order as the data is received and the data
+ * should be of the same transaction size as at the source.
+ */
+
+struct vfio_device_migration_info {
+	__u32 device_state;         /* VFIO device state */
+#define VFIO_DEVICE_STATE_STOP      (0)
+#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
+#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
+#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
+#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
+				     VFIO_DEVICE_STATE_SAVING |  \
+				     VFIO_DEVICE_STATE_RESUMING)
+
+#define VFIO_DEVICE_STATE_VALID(state) \
+	(state & VFIO_DEVICE_STATE_RESUMING ? \
+	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
+
+#define VFIO_DEVICE_STATE_IS_ERROR(state) \
+	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
+					      VFIO_DEVICE_STATE_RESUMING))
+
+#define VFIO_DEVICE_STATE_SET_ERROR(state) \
+	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_STATE_SAVING | \
+					     VFIO_DEVICE_STATE_RESUMING)
+
+	__u32 reserved;
+	__u64 pending_bytes;
+	__u64 data_offset;
+	__u64 data_size;
+} __attribute__((packed));
+
 /*
  * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
  * which allows direct access to non-MSIX registers which happened to be within
-- 
2.7.0
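[Editor's illustration: to make the read/write protocol above concrete, here is a minimal userspace sketch of the device_state read-modify-write, the pre-copy/stop-and-copy save loop (steps a-f), and the resume loop (steps a-c). It assumes `device_fd` is an open VFIO device file descriptor and `region_off` is the file offset of the migration region as reported by region info; the helper names and the simplified error handling are illustrative only, not part of the proposed KABI.]

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Mirrors the proposed vfio_device_migration_info layout. */
struct mig_info {
	uint32_t device_state;
	uint32_t reserved;
	uint64_t pending_bytes;
	uint64_t data_offset;
	uint64_t data_size;
} __attribute__((packed));

#define MIG_FIELD(f) ((off_t)offsetof(struct mig_info, f))

/* Read-modify-write device_state so reserved bits 3-31 are preserved. */
static int set_device_state(int device_fd, off_t region_off,
			    uint32_t mask, uint32_t value)
{
	uint32_t state;

	if (pread(device_fd, &state, sizeof(state),
		  region_off + MIG_FIELD(device_state)) != sizeof(state))
		return -1;
	state = (state & ~mask) | value;
	if (pwrite(device_fd, &state, sizeof(state),
		   region_off + MIG_FIELD(device_state)) != sizeof(state))
		return -1;
	return 0;
}

/* One pre-copy or stop-and-copy pass: steps a-f from the comment above. */
static int save_device_data(int device_fd, off_t region_off, FILE *out)
{
	uint64_t pending, data_offset, data_size;

	/* a. Read pending_bytes to start an iteration. */
	pread(device_fd, &pending, sizeof(pending),
	      region_off + MIG_FIELD(pending_bytes));
	while (pending > 0) {
		/* b. Read data_offset: the driver stages data behind it. */
		pread(device_fd, &data_offset, sizeof(data_offset),
		      region_off + MIG_FIELD(data_offset));
		/* c. Read data_size: bytes available in this buffer. */
		pread(device_fd, &data_size, sizeof(data_size),
		      region_off + MIG_FIELD(data_size));

		/* d./e. Read and process data_size bytes of opaque data. */
		char *buf = malloc(data_size);
		pread(device_fd, buf, data_size, region_off + data_offset);
		fwrite(&data_size, sizeof(data_size), 1, out);
		fwrite(buf, 1, data_size, out);
		free(buf);

		/* f. Re-read pending_bytes; loop back to b while > 0. */
		pread(device_fd, &pending, sizeof(pending),
		      region_off + MIG_FIELD(pending_bytes));
	}
	return 0;
}

/* Resume side: steps a-c, repeated while source data remains. */
static int resume_device_data(int device_fd, off_t region_off, FILE *in)
{
	uint64_t data_offset, data_size;

	while (fread(&data_size, sizeof(data_size), 1, in) == 1) {
		/* a. Read data_offset, the staging area for this chunk. */
		pread(device_fd, &data_offset, sizeof(data_offset),
		      region_off + MIG_FIELD(data_offset));

		/* b. Write the chunk, keeping the source transaction size. */
		char *buf = malloc(data_size);
		fread(buf, 1, data_size, in);
		pwrite(device_fd, buf, data_size, region_off + data_offset);
		free(buf);

		/* c. Write data_size to tell the driver to consume it. */
		pwrite(device_fd, &data_size, sizeof(data_size),
		       region_off + MIG_FIELD(data_size));
	}
	return 0;
}

[A migration source would call set_device_state() with VFIO_DEVICE_STATE_MASK to enter pre-copy (_SAVING|_RUNNING), run save_device_data() repeatedly, drop _RUNNING for stop-and-copy, and drain until pending_bytes reads 0.]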
From nobody Sun May 5 06:45:37 2024
From: Kirti Wankhede
Subject: [PATCH v15 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages
Date: Fri, 20 Mar 2020 01:46:39 +0530
Message-ID: <1584649004-8285-3-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>

vfio_pfn.ref_count is always updated while holding iommu->lock; using an
atomic variable for it is overkill.
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
Reviewed-by: Eric Auger
---
 drivers/vfio/vfio_iommu_type1.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 9fdfae1cb17a..70aeab921d0f 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -112,7 +112,7 @@ struct vfio_pfn {
 	struct rb_node		node;
 	dma_addr_t		iova;		/* Device address */
 	unsigned long		pfn;		/* Host pfn */
-	atomic_t		ref_count;
+	unsigned int		ref_count;
 };
 
 struct vfio_regions {
@@ -233,7 +233,7 @@ static int vfio_add_to_pfn_list(struct vfio_dma *dma, dma_addr_t iova,
 
 	vpfn->iova = iova;
 	vpfn->pfn = pfn;
-	atomic_set(&vpfn->ref_count, 1);
+	vpfn->ref_count = 1;
 	vfio_link_pfn(dma, vpfn);
 	return 0;
 }
@@ -251,7 +251,7 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
 	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
 
 	if (vpfn)
-		atomic_inc(&vpfn->ref_count);
+		vpfn->ref_count++;
 	return vpfn;
 }
 
@@ -259,7 +259,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
 {
 	int ret = 0;
 
-	if (atomic_dec_and_test(&vpfn->ref_count)) {
+	vpfn->ref_count--;
+	if (!vpfn->ref_count) {
 		ret = put_pfn(vpfn->pfn, dma->prot);
 		vfio_remove_from_pfn_list(dma, vpfn);
 	}
-- 
2.7.0
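[Editor's illustration: the pattern this patch relies on can be shown in isolation. A plain integer reference count is safe, with no atomics, as long as every increment, decrement, and zero-test happens under the same lock. A minimal userspace sketch with hypothetical names, not taken from the patch:]

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical object whose ref_count is only touched under *lock,
 * mirroring how vfio_pfn.ref_count is only updated under iommu->lock. */
struct obj {
	pthread_mutex_t *lock;	/* external lock protecting ref_count */
	unsigned int ref_count;	/* plain int: the lock serializes access */
};

static void obj_get(struct obj *o)
{
	pthread_mutex_lock(o->lock);
	o->ref_count++;
	pthread_mutex_unlock(o->lock);
}

static int obj_put(struct obj *o)	/* returns 1 when freed */
{
	int freed = 0;

	pthread_mutex_lock(o->lock);
	if (--o->ref_count == 0)
		freed = 1;	/* release owned resources here */
	pthread_mutex_unlock(o->lock);
	if (freed)
		free(o);	/* safe: the lock is not embedded in o */
	return freed;
}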
From nobody Sun May 5 06:45:37 2024
From: Kirti Wankhede
Subject: [PATCH v15 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking.
Date: Fri, 20 Mar 2020 01:46:40 +0530
Message-ID: <1584649004-8285-4-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>

The IOMMU container maintains a list of all pages pinned by the
vfio_pin_pages API. All pages pinned by a vendor driver through this API
should be considered dirty during migration. When the container consists
of an IOMMU-capable device and all pages are pinned and mapped, then all
pages are marked dirty.
Added support to start/stop dirty pages tracking and to get the bitmap of
all dirtied pages for a requested IO virtual address range.
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 include/uapi/linux/vfio.h | 55 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index d0021467af53..8138f94cac15 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -995,6 +995,12 @@ struct vfio_iommu_type1_dma_map {
 
 #define VFIO_IOMMU_MAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 13)
 
+struct vfio_bitmap {
+	__u64        pgsize;	/* page size for bitmap */
+	__u64        size;	/* in bytes */
+	__u64 __user *data;	/* one bit per page */
+};
+
 /**
  * VFIO_IOMMU_UNMAP_DMA - _IOWR(VFIO_TYPE, VFIO_BASE + 14,
  *				struct vfio_dma_unmap)
@@ -1021,6 +1027,55 @@ struct vfio_iommu_type1_dma_unmap {
 #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
 #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
 
+/**
+ * VFIO_IOMMU_DIRTY_PAGES - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *                                struct vfio_iommu_type1_dirty_bitmap)
+ * This IOCTL is used for dirty pages tracking. The caller sets argsz,
+ * which is the size of struct vfio_iommu_type1_dirty_bitmap, and sets a
+ * flag depending on which operation to perform, details as below:
+ *
+ * When the IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_START set,
+ * it indicates that migration is active and the IOMMU module should
+ * track pages which are dirtied or potentially dirtied by the device.
+ * Dirty pages are tracked until tracking is stopped by the user
+ * application by setting the VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag.
+ *
+ * When the IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP set, it
+ * indicates that the IOMMU should stop tracking dirtied pages.
+ *
+ * When the IOCTL is called with the
+ * VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP flag set, the IOCTL returns the
+ * dirty pages bitmap for the IOMMU container during migration for a
+ * given IOVA range. The user must provide data[] as the structure
+ * vfio_iommu_type1_dirty_bitmap_get, through which the user provides the
+ * IOVA range and pgsize. This interface supports getting the bitmap of
+ * the smallest supported pgsize only and can be modified in the future
+ * to get the bitmap of a specified pgsize. The user must allocate memory
+ * for the bitmap, zero the bitmap memory and set the size of the
+ * allocated memory in the vfio_bitmap.size field. One bit is used to
+ * represent one page, consecutively starting from the iova offset. The
+ * user should provide the page size in 'pgsize'. A bit set in the bitmap
+ * indicates that the page at that offset from iova is dirty. The caller
+ * must set argsz to include the size of structure
+ * vfio_iommu_type1_dirty_bitmap_get.
+ *
+ * Only one of the flags _START, _STOP and _GET_BITMAP may be specified
+ * at a time.
+ *
+ */
+struct vfio_iommu_type1_dirty_bitmap {
+	__u32        argsz;
+	__u32        flags;
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
+	__u8         data[];
+};
+
+struct vfio_iommu_type1_dirty_bitmap_get {
+	__u64              iova;	/* IO virtual address */
+	__u64              size;	/* Size of iova range */
+	struct vfio_bitmap bitmap;
+};
+
+#define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.7.0
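[Editor's illustration: a userspace sketch of querying the dirty bitmap with this ioctl. It assumes `container_fd` is an open VFIO container with v2 IOMMU, that the structure and flag definitions above are available (e.g. via an updated linux/vfio.h), and that the smallest supported page size is 4 KiB; the helper name is illustrative, not part of the patch.]

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Fetch the dirty bitmap for [iova, iova + size) at 4 KiB granularity.
 * Returns a caller-freed buffer with one bit per page, or NULL. */
static uint64_t *get_dirty_bitmap(int container_fd, uint64_t iova,
				  uint64_t size)
{
	const uint64_t pgsize = 4096;
	uint64_t npages = size / pgsize;
	/* One bit per page, rounded up to u64 granularity, matching the
	 * kernel side's DIRTY_BITMAP_BYTES() in the next patch. */
	uint64_t bitmap_bytes = ((npages + 63) / 64) * 8;
	struct vfio_iommu_type1_dirty_bitmap *dbitmap;
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	uint64_t *bitmap = calloc(1, bitmap_bytes);	/* must be zeroed */

	dbitmap = malloc(sizeof(*dbitmap) + sizeof(*range));
	dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
	dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

	/* The _get structure travels in the flexible data[] member. */
	range = (void *)dbitmap->data;
	range->iova = iova;
	range->size = size;
	range->bitmap.pgsize = pgsize;
	range->bitmap.size = bitmap_bytes;
	range->bitmap.data = bitmap;

	if (ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap)) {
		free(bitmap);
		bitmap = NULL;
	}
	free(dbitmap);
	return bitmap;
}

[Starting and stopping tracking are analogous but simpler: argsz is just the base structure size and flags is _START or _STOP, with no data[] payload.]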
From nobody Sun May 5 06:45:37 2024
From: Kirti Wankhede
Subject: [PATCH v15 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
Date: Fri, 20 Mar 2020 01:46:41 +0530
Message-ID: <1584649004-8285-5-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>

The VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
- Start dirty pages tracking while migration is active.
- Stop dirty pages tracking.
- Get the dirty pages bitmap. It is the user space application's
  responsibility to copy the content of dirty pages from source to
  destination during migration.

To prevent a DoS attack, memory for the bitmap is allocated per vfio_dma
structure. The bitmap size is calculated considering the smallest
supported page size. The bitmap is allocated for all vfio_dmas when dirty
logging is enabled.

The bitmap is populated for already pinned pages when the bitmap is
allocated for a vfio_dma, with the smallest supported page size. The
bitmap is updated from the pinning functions when tracking is enabled.
When the user application queries the bitmap, check whether the requested
page size is the same as the page size used to populate the bitmap. If it
is equal, copy the bitmap; if not, return an error.
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio_iommu_type1.c | 242 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 236 insertions(+), 6 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 70aeab921d0f..239f61764d03 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -71,6 +71,7 @@ struct vfio_iommu {
 	unsigned int		dma_avail;
 	bool			v2;
 	bool			nesting;
+	bool			dirty_page_tracking;
 };
 
 struct vfio_domain {
@@ -91,6 +92,7 @@ struct vfio_dma {
 	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
 	struct task_struct	*task;
 	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
+	unsigned long		*bitmap;
};
 
 struct vfio_group {
@@ -125,7 +127,21 @@ struct vfio_regions {
 #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
 					(!list_empty(&iommu->domain_list))
 
+#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
+
+/*
+ * The input argument of number of bits to bitmap_set() is an unsigned
+ * integer, which further casts to a signed integer for the unaligned
+ * multi-bit operation, __bitmap_set().
+ * Then the maximum bitmap size supported is 2^31 bits divided by 2^3
+ * bits/byte, that is 2^28 (256 MB), which maps to 2^31 * 2^12 = 2^43
+ * (8 TB) on a 4K page system.
+ */
+#define DIRTY_BITMAP_PAGES_MAX	((1UL << 31) - 1)
+#define DIRTY_BITMAP_SIZE_MAX	DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)
+
 static int put_pfn(unsigned long pfn, int prot);
+static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
 
 /*
  * This code handles mapping and unmapping of user data buffers
@@ -175,6 +191,67 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 	rb_erase(&old->node, &iommu->dma_list);
 }
 
+
+static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, uint64_t pgsize)
+{
+	uint64_t npages = dma->size / pgsize;
+
+	dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
+	if (!dma->bitmap)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, uint64_t pgsize)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+		struct rb_node *p;
+		int ret;
+
+		ret = vfio_dma_bitmap_alloc(dma, pgsize);
+		if (ret) {
+			/* unwind: free the bitmaps allocated so far */
+			for (p = rb_prev(n); p; p = rb_prev(p)) {
+				struct vfio_dma *dma = rb_entry(p,
+							struct vfio_dma, node);
+
+				kvfree(dma->bitmap);
+				dma->bitmap = NULL;
+			}
+			return ret;
+		}
+
+		if (RB_EMPTY_ROOT(&dma->pfn_list))
+			continue;
+
+		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
+			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
+							 node);
+
+			bitmap_set(dma->bitmap,
+				   (vpfn->iova - dma->iova) / pgsize, 1);
+		}
+	}
+	return 0;
+}
+
+static void vfio_dma_bitmap_free_all(struct vfio_iommu *iommu)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		kvfree(dma->bitmap);
+		dma->bitmap = NULL;
+	}
+}
+
 /*
  * Helper Functions for host iova-pfn list
  */
@@ -567,6 +644,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 			vfio_unpin_page_external(dma, iova, do_accounting);
 			goto pin_unwind;
 		}
+
+		if (iommu->dirty_page_tracking) {
+			unsigned long pgshift =
+					 __ffs(vfio_pgsize_bitmap(iommu));
+
+			bitmap_set(dma->bitmap,
+				   (vpfn->iova - dma->iova) >> pgshift, 1);
+		}
 	}
 
 	ret = i;
@@ -801,6 +886,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 	vfio_unmap_unpin(iommu, dma, true);
 	vfio_unlink_dma(iommu, dma);
 	put_task_struct(dma->task);
+	kvfree(dma->bitmap);
 	kfree(dma);
 	iommu->dma_avail++;
 }
@@ -831,6 +917,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	return bitmap;
 }
 
+static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
+				  size_t size, uint64_t pgsize,
+				  u64 __user *bitmap)
+{
+	struct vfio_dma *dma;
+	unsigned long pgshift = __ffs(pgsize);
+	unsigned int npages, bitmap_size;
+
+	dma = vfio_find_dma(iommu, iova, 1);
+
+	if (!dma)
+		return -EINVAL;
+
+	if (dma->iova != iova || dma->size != size)
+		return -EINVAL;
+
+	npages = dma->size >> pgshift;
+	bitmap_size = DIRTY_BITMAP_BYTES(npages);
+
+	/* mark all pages dirty if all pages are pinned and mapped. */
+	if (dma->iommu_mapped)
+		bitmap_set(dma->bitmap, 0, npages);
+
+	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
+{
+	uint64_t bsize;
+
+	if (!npages || !bitmap_size || (bitmap_size > DIRTY_BITMAP_SIZE_MAX))
+		return -EINVAL;
+
+	bsize = DIRTY_BITMAP_BYTES(npages);
+
+	if (bitmap_size < bsize)
+		return -EINVAL;
+
+	return 0;
+}
+
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			     struct vfio_iommu_type1_dma_unmap *unmap)
 {
@@ -1038,16 +1168,16 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	unsigned long vaddr = map->vaddr;
 	size_t size = map->size;
 	int ret = 0, prot = 0;
-	uint64_t mask;
+	uint64_t pgsize;
 	struct vfio_dma *dma;
 
 	/* Verify that none of our __u64 fields overflow */
 	if (map->size != size || map->vaddr != vaddr || map->iova != iova)
 		return -EINVAL;
 
-	mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
+	pgsize = (uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu));
 
-	WARN_ON(mask & PAGE_MASK);
+	WARN_ON((pgsize - 1) & PAGE_MASK);
 
 	/* READ/WRITE from device perspective */
 	if (map->flags & VFIO_DMA_MAP_FLAG_WRITE)
@@ -1055,7 +1185,7 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	if (map->flags & VFIO_DMA_MAP_FLAG_READ)
 		prot |= IOMMU_READ;
 
-	if (!prot || !size || (size | iova | vaddr) & mask)
+	if (!prot || !size || (size | iova | vaddr) & (pgsize - 1))
 		return -EINVAL;
 
 	/* Don't allow IOVA or virtual address wrap */
@@ -1130,6 +1260,12 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	else
 		ret = vfio_pin_map_dma(iommu, dma, size);
 
+	if (!ret && iommu->dirty_page_tracking) {
+		ret = vfio_dma_bitmap_alloc(dma, pgsize);
+		if (ret)
+			vfio_remove_dma(iommu, dma);
+	}
+
 out_unlock:
 	mutex_unlock(&iommu->lock);
 	return ret;
@@ -2278,6 +2414,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		return copy_to_user((void __user *)arg, &unmap, minsz) ?
			-EFAULT : 0;
+	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
+		struct vfio_iommu_type1_dirty_bitmap dirty;
+		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
+		int ret = 0;
+
+		if (!iommu->v2)
+			return -EACCES;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
+				    flags);
+
+		if (copy_from_user(&dirty, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (dirty.argsz < minsz || dirty.flags & ~mask)
+			return -EINVAL;
+
+		/* only one flag should be set at a time */
+		if (__ffs(dirty.flags) != __fls(dirty.flags))
+			return -EINVAL;
+
+		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
+			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
+
+			mutex_lock(&iommu->lock);
+			if (!iommu->dirty_page_tracking) {
+				ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
+				if (!ret)
+					iommu->dirty_page_tracking = true;
+			}
+			mutex_unlock(&iommu->lock);
+			return ret;
+		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
+			mutex_lock(&iommu->lock);
+			if (iommu->dirty_page_tracking) {
+				iommu->dirty_page_tracking = false;
+				vfio_dma_bitmap_free_all(iommu);
+			}
+			mutex_unlock(&iommu->lock);
+			return 0;
+		} else if (dirty.flags &
+				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
+			struct vfio_iommu_type1_dirty_bitmap_get range;
+			unsigned long pgshift;
+			size_t data_size = dirty.argsz - minsz;
+			uint64_t iommu_pgsize =
+					 1 << __ffs(vfio_pgsize_bitmap(iommu));
+
+			if (!data_size || data_size < sizeof(range))
+				return -EINVAL;
+
+			if (copy_from_user(&range, (void __user *)(arg + minsz),
+					   sizeof(range)))
+				return -EFAULT;
+
+			/* allow only min supported pgsize */
+			if (range.bitmap.pgsize != iommu_pgsize)
+				return -EINVAL;
+			if (range.iova & (iommu_pgsize - 1))
+				return -EINVAL;
+			if (!range.size || range.size & (iommu_pgsize - 1))
+				return -EINVAL;
+			if (range.iova + range.size < range.iova)
+				return -EINVAL;
+			if (!access_ok((void __user *)range.bitmap.data,
+				       range.bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(range.bitmap.pgsize);
+			ret = verify_bitmap_size(range.size >> pgshift,
+						 range.bitmap.size);
+			if (ret)
+				return ret;
+
+			mutex_lock(&iommu->lock);
+			if (iommu->dirty_page_tracking)
+				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
+					   range.size, range.bitmap.pgsize,
+					   range.bitmap.data);
+			else
+				ret = -EINVAL;
+			mutex_unlock(&iommu->lock);
+
+			return ret;
+		}
 	}
 
 	return -ENOTTY;
@@ -2345,10 +2568,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
 
 	vaddr = dma->vaddr + offset;
 
-	if (write)
+	if (write) {
 		*copied = __copy_to_user((void __user *)vaddr, data,
 					 count) ? 0 : count;
-	else
+		if (*copied && iommu->dirty_page_tracking) {
+			unsigned long pgshift =
+				__ffs(vfio_pgsize_bitmap(iommu));
+
+			bitmap_set(dma->bitmap, offset >> pgshift,
+				   *copied >> pgshift);
+		}
+	} else
 		*copied = __copy_from_user(data, (void __user *)vaddr,
					   count) ? 0 : count;
 	if (kthread)
-- 
2.7.0
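[Editor's illustration: the sizing comment above DIRTY_BITMAP_PAGES_MAX is worth checking by hand. With one bit per page, 2^31 - 1 bits occupy just under 2^28 bytes (256 MB) of bitmap, and on a 4 KiB page system they cover nearly 2^31 * 2^12 = 2^43 bytes (8 TB) of memory. A small self-contained check of the same arithmetic, using a userspace mirror of DIRTY_BITMAP_BYTES() rather than the kernel macros:]

#include <stdint.h>
#include <stdio.h>

/* Userspace mirror of the kernel's DIRTY_BITMAP_BYTES(): bits rounded
 * up to u64 granularity, then divided by 8 bits per byte. */
static uint64_t dirty_bitmap_bytes(uint64_t npages)
{
	return ((npages + 63) / 64) * 8;
}

int main(void)
{
	uint64_t max_pages = (1ULL << 31) - 1;	/* DIRTY_BITMAP_PAGES_MAX */
	uint64_t pgsize = 4096;			/* 4 KiB pages */

	printf("bitmap: %llu bytes (~256 MB)\n",
	       (unsigned long long)dirty_bitmap_bytes(max_pages));
	printf("covers: %llu bytes (~8 TB)\n",
	       (unsigned long long)(max_pages * pgsize));
	return 0;
}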
From nobody Sun May 5 06:45:37 2024
From: Kirti Wankhede
Subject: [PATCH v15 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
Date: Fri, 20 Mar 2020 01:46:42 +0530
Message-ID: <1584649004-8285-6-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>

DMA mapped pages, including those pinned by mdev vendor drivers, might
get unpinned and unmapped while migration is active and the device is
still running. For example, in the pre-copy phase, while the guest driver
could access those pages, the host device or vendor driver can dirty
these mapped pages. Such pages should be marked dirty so as to maintain
memory consistency for a user making use of dirty page tracking.

To get the bitmap during unmap, the user should allocate memory for the
bitmap, set the size of the allocated memory, set the page size to be
considered for the bitmap and set the flag
VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio_iommu_type1.c | 54 +++++++++++++++++++++++++++++++++++++---
 include/uapi/linux/vfio.h       | 10 ++++++++
 2 files changed, 60 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 239f61764d03..e79c1ff6fb41 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -962,7 +962,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
-			     struct vfio_iommu_type1_dma_unmap *unmap)
+			     struct vfio_iommu_type1_dma_unmap *unmap,
+			     struct vfio_bitmap *bitmap)
 {
 	uint64_t mask;
 	struct vfio_dma *dma, *dma_last = NULL;
@@ -1013,6 +1014,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 	 * will be returned if these conditions are not met.  The v2 interface
 	 * will only return success and a size of zero if there were no
 	 * mappings within the range.
+	 *
+	 * When the VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, the
+	 * unmap request must be for a single mapping. Multiple mappings
+	 * with this flag set are not supported.
 	 */
 	if (iommu->v2) {
 		dma = vfio_find_dma(iommu, unmap->iova, 1);
@@ -1020,6 +1025,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			ret = -EINVAL;
 			goto unlock;
 		}
+
+		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+		    (dma->iova != unmap->iova || dma->size != unmap->size)) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+
 		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
 		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
 			ret = -EINVAL;
 			goto unlock;
@@ -1037,6 +1049,11 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		if (dma->task->mm != current->mm)
 			break;
 
+		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+		    iommu->dirty_page_tracking)
+			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
+					       bitmap->pgsize, bitmap->data);
+
 		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
 			struct vfio_iommu_type1_dma_unmap nb_unmap;
 
@@ -2398,17 +2415,46 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
 		struct vfio_iommu_type1_dma_unmap unmap;
-		long ret;
+		struct vfio_bitmap bitmap = { 0 };
+		int ret;
 
 		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
 
 		if (copy_from_user(&unmap, (void __user *)arg, minsz))
 			return -EFAULT;
 
-		if (unmap.argsz < minsz || unmap.flags)
+		if (unmap.argsz < minsz ||
+		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
 			return -EINVAL;
 
-		ret = vfio_dma_do_unmap(iommu, &unmap);
+		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+			unsigned long pgshift;
+			uint64_t iommu_pgsize =
+					 1 << __ffs(vfio_pgsize_bitmap(iommu));
+
+			if (unmap.argsz < (minsz + sizeof(bitmap)))
+				return -EINVAL;
+
+			if (copy_from_user(&bitmap,
+					   (void __user *)(arg + minsz),
+					   sizeof(bitmap)))
+				return -EFAULT;
+
+			/* allow only min supported pgsize */
+			if (bitmap.pgsize != iommu_pgsize)
+				return -EINVAL;
+			if (!access_ok((void __user *)bitmap.data, bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(bitmap.pgsize);
+			ret = verify_bitmap_size(unmap.size >> pgshift,
+						 bitmap.size);
+			if (ret)
+				return ret;
+		}
+
+		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
 		if (ret)
 			return ret;
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 8138f94cac15..2780a5742c04 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1010,12 +1010,22 @@ struct vfio_bitmap {
  * field.  No guarantee is made to the user that arbitrary unmaps of iova
  * or size different from those used in the original mapping call will
  * succeed.
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty
+ * bitmap before unmapping IO virtual addresses. When this flag is set,
+ * the user must provide data[] as structure vfio_bitmap. The user must
+ * allocate memory to get the bitmap and must set the size of the
+ * allocated memory in the vfio_bitmap.size field. A bit in the bitmap
+ * represents one page of the user-provided page size in 'pgsize',
+ * consecutively starting from the iova offset. A set bit indicates that
+ * the page at that offset from iova is dirty. The bitmap of pages in the
+ * range of the unmapped size is returned in vfio_bitmap.data.
Bitmap of pages in the range of unmapped siz= e is + * returned in vfio_bitmap.data */ struct vfio_iommu_type1_dma_unmap { __u32 argsz; __u32 flags; +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0) __u64 iova; /* IO virtual address */ __u64 size; /* Size of mapping (bytes) */ + __u8 data[]; }; =20 #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14) --=20 2.7.0 From nobody Sun May 5 06:45:37 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=nvidia.com ARC-Seal: i=1; a=rsa-sha256; t=1584651495; cv=none; d=zohomail.com; s=zohoarc; b=jjmDYk1RAbqTm+BTUS1JnRmelyLotdimyQLnkZzFz03cGtIvy/Uwf+em5HH8sc61QU6Jw6+8FPwdIMcXlUQxI+VMWFObIxj1iao98cOcwbw2Ai+uLT0me4N6tPyt+23e13xFUISS2MafmyGsw42QtUfns0yNzYGTpcRn4JXlHAA= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1584651495; h=Content-Type:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=ikzeUcbiOEuDuPiuNIgs+eC1zqHyjc1APM+hJNJr8v4=; b=CTBUjk9Rk83t48cKdWC/yEEvBRRThLxv8GMMLm6S9yWdPqeitoaB3hkjviBK6rkWGAjH4QnVAqi0rp6FuJw2RWq5rMCjZ/lkGT6cKFHom3svmNKiFRzS/fh6K5k9MnA8y56tWQ+zRjlvs6YsjLZEsJT9OHLlWqfRrYadGP8zFLw= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1584651495304428.5941096466197; Thu, 19 Mar 2020 13:58:15 -0700 (PDT) Received: from localhost ([::1]:43184 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jF2F3-0002hD-WD for importer@patchew.org; Thu, 19 Mar 2020 16:58:14 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:42686) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jF281-0000iX-7I for qemu-devel@nongnu.org; Thu, 19 Mar 2020 16:50:58 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1jF280-0003dg-7B for qemu-devel@nongnu.org; Thu, 19 Mar 2020 16:50:57 -0400 Received: from hqnvemgate24.nvidia.com ([216.228.121.143]:7957) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1jF280-0003dO-14 for qemu-devel@nongnu.org; Thu, 19 Mar 2020 16:50:56 -0400 Received: from hqpgpgate102.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate24.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Thu, 19 Mar 2020 13:49:16 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate102.nvidia.com (PGP Universal service); Thu, 19 Mar 2020 13:50:54 -0700 Received: from HQMAIL105.nvidia.com (172.20.187.12) by HQMAIL111.nvidia.com (172.20.187.18) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Thu, 19 Mar 2020 20:50:54 +0000 Received: from kwankhede-dev.nvidia.com (10.124.1.5) by HQMAIL105.nvidia.com (172.20.187.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Thu, 19 
From nobody Sun May 5 06:45:37 2024
From: Kirti Wankhede
Subject: [PATCH v15 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support
Date: Fri, 20 Mar 2020 01:46:43 +0530
Message-ID: <1584649004-8285-7-git-send-email-kwankhede@nvidia.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>
References: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The VFIO_IOMMU_INFO_DIRTY_PGS flag in VFIO_IOMMU_GET_INFO indicates that
the driver supports dirty page tracking.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio_iommu_type1.c | 3 ++-
 include/uapi/linux/vfio.h       | 5 +++--
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index e79c1ff6fb41..dce0a3e1e8b7 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2368,7 +2368,8 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 			info.cap_offset = 0; /* output, no-recopy necessary */
 		}
 
-		info.flags = VFIO_IOMMU_INFO_PGSIZES;
+		info.flags = VFIO_IOMMU_INFO_PGSIZES |
+			     VFIO_IOMMU_INFO_DIRTY_PGS;
 
 		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 2780a5742c04..4a886ff84c92 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -947,8 +947,9 @@ struct vfio_device_ioeventfd {
 struct vfio_iommu_type1_info {
 	__u32	argsz;
 	__u32	flags;
-#define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
-#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
+#define VFIO_IOMMU_INFO_PGSIZES   (1 << 0) /* supported page sizes info */
+#define VFIO_IOMMU_INFO_CAPS      (1 << 1) /* Info supports caps */
+#define VFIO_IOMMU_INFO_DIRTY_PGS (1 << 2) /* supports dirty page tracking */
 	__u64	iova_pgsizes;	/* Bitmap of supported page sizes */
 	__u32	cap_offset;	/* Offset within info struct of first cap */
 };
-- 
2.7.0
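User space can probe for this capability before attempting dirty page
tracking. A minimal sketch, assuming "container" is an initialized VFIO
container fd; the helper name is hypothetical:

/*
 * Sketch only: returns non-zero if the type1 IOMMU advertises dirty
 * page tracking via the flag added by this patch.
 */
#include <linux/vfio.h>
#include <string.h>
#include <sys/ioctl.h>

static int supports_dirty_tracking(int container)
{
	struct vfio_iommu_type1_info info;

	memset(&info, 0, sizeof(info));
	info.argsz = sizeof(info);

	if (ioctl(container, VFIO_IOMMU_GET_INFO, &info))
		return 0;

	return !!(info.flags & VFIO_IOMMU_INFO_DIRTY_PGS);
}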
From nobody Sun May 5 06:45:37 2024
From: Kirti Wankhede
Subject: [PATCH v15 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
Date: Fri, 20 Mar 2020 01:46:44 +0530
Message-ID: <1584649004-8285-8-git-send-email-kwankhede@nvidia.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>
References: <1584649004-8285-1-git-send-email-kwankhede@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add a check such that only singleton IOMMU groups can pin pages. From the
point when the vendor driver pins any pages, consider the IOMMU group's
dirty page scope to be limited to the pinned pages.

To avoid walking the list too often, add a pinned_page_dirty_scope flag
indicating whether the dirty page scope of every vfio_group, for each
vfio_domain in the domain_list, is limited to pinned pages. This flag is
updated on the first pin-pages request for an IOMMU group and on
attaching/detaching a group.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio.c             | 13 ++++--
 drivers/vfio/vfio_iommu_type1.c | 94 +++++++++++++++++++++++++++++++++++++++--
 include/linux/vfio.h            |  4 +-
 3 files changed, 104 insertions(+), 7 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 210fcf426643..311b5e4e111e 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -85,6 +85,7 @@ struct vfio_group {
 	atomic_t			opened;
 	wait_queue_head_t		container_q;
 	bool				noiommu;
+	unsigned int			dev_counter;
 	struct kvm			*kvm;
 	struct blocking_notifier_head	notifier;
 };
@@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
 
 	mutex_lock(&group->device_lock);
 	list_add(&device->group_next, &group->device_list);
+	group->dev_counter++;
 	mutex_unlock(&group->device_lock);
 
 	return device;
@@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
 	struct vfio_group *group = device->group;
 
 	list_del(&device->group_next);
+	group->dev_counter--;
 	mutex_unlock(&group->device_lock);
 
 	dev_set_drvdata(device->dev, NULL);
@@ -1933,6 +1936,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	if (!group)
 		return -ENODEV;
 
+	if (group->dev_counter > 1)
+		return -EINVAL;
+
 	ret = vfio_group_add_container_user(group);
 	if (ret)
 		goto err_pin_pages;
@@ -1940,7 +1946,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
-		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
+		ret = driver->ops->pin_pages(container->iommu_data,
+					     group->iommu_group, user_pfn,
 					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
@@ -2038,8 +2045,8 @@ int vfio_group_pin_pages(struct vfio_group *group,
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
 		ret = driver->ops->pin_pages(container->iommu_data,
-					     user_iova_pfn, npage,
-					     prot, phys_pfn);
+					     group->iommu_group, user_iova_pfn,
+					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
 
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index dce0a3e1e8b7..881abfc04f0a 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -72,6 +72,7 @@ struct vfio_iommu {
 	bool			v2;
 	bool			nesting;
 	bool			dirty_page_tracking;
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_domain {
@@ -99,6 +100,7 @@ struct vfio_group {
 	struct iommu_group	*iommu_group;
 	struct list_head	next;
 	bool			mdev_group;	/* An mdev group */
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_iova {
@@ -143,6 +145,10 @@ struct vfio_regions {
 static int put_pfn(unsigned long pfn, int prot);
 static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
					       struct iommu_group *iommu_group);
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -579,11 +585,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
 }
 
 static int vfio_iommu_type1_pin_pages(void *iommu_data,
+				      struct iommu_group *iommu_group,
 				      unsigned long *user_pfn,
 				      int npage, int prot,
 				      unsigned long *phys_pfn)
 {
 	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *group;
 	int i, j, ret;
 	unsigned long remote_vaddr;
 	struct vfio_dma *dma;
@@ -653,8 +661,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 				 (vpfn->iova - dma->iova) >> pgshift, 1);
 		}
 	}
-
 	ret = i;
+
+	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
+	if (!group->pinned_page_dirty_scope) {
+		group->pinned_page_dirty_scope = true;
+		update_pinned_page_dirty_scope(iommu);
+	}
+
 	goto pin_done;
 
 pin_unwind:
@@ -936,8 +950,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
 	npages = dma->size >> pgshift;
 	bitmap_size = DIRTY_BITMAP_BYTES(npages);
 
-	/* mark all pages dirty if all pages are pinned and mapped. */
-	if (dma->iommu_mapped)
+	/*
+	 * mark all pages dirty if any IOMMU capable device is not able
+	 * to report dirty pages and all pages are pinned and mapped.
+	 */
+	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
 		bitmap_set(dma->bitmap, 0, npages);
 
 	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
@@ -1421,6 +1438,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
 	return NULL;
 }
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group = NULL;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		group = find_iommu_group(domain, iommu_group);
+		if (group)
+			return group;
+	}
+
+	if (iommu->external_domain)
+		group = find_iommu_group(iommu->external_domain, iommu_group);
+
+	return group;
+}
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->pinned_page_dirty_scope) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	if (iommu->external_domain) {
+		domain = iommu->external_domain;
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->pinned_page_dirty_scope) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	iommu->pinned_page_dirty_scope = true;
+}
+
 static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
 				  phys_addr_t *base)
 {
@@ -1827,6 +1889,16 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 
 			list_add(&group->next,
 				 &iommu->external_domain->group_list);
+			/*
+			 * Non-iommu backed group cannot dirty memory directly,
+			 * it can only use interfaces that provide dirty
+			 * tracking.
+			 * The iommu scope can only be promoted with the
+			 * addition of a dirty tracking group.
+			 */
+			group->pinned_page_dirty_scope = true;
+			if (!iommu->pinned_page_dirty_scope)
+				update_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -1949,6 +2021,13 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
+
+	/*
+	 * An iommu backed group can dirty memory directly and therefore
+	 * demotes the iommu scope until it declares itself dirty tracking
+	 * capable via the page pinning interface.
+	 */
+	iommu->pinned_page_dirty_scope = false;
 	mutex_unlock(&iommu->lock);
 	vfio_iommu_resv_free(&group_resv_regions);
 
@@ -2101,6 +2180,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
+	bool update_dirty_scope = false;
 	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
@@ -2108,6 +2188,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	if (iommu->external_domain) {
 		group = find_iommu_group(iommu->external_domain, iommu_group);
 		if (group) {
+			update_dirty_scope = !group->pinned_page_dirty_scope;
 			list_del(&group->next);
 			kfree(group);
 
@@ -2137,6 +2218,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		vfio_iommu_detach_group(domain, group);
+		update_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
 		/*
@@ -2167,6 +2249,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	vfio_iommu_iova_free(&iova_copy);
 
 detach_group_done:
+	/*
+	 * Removal of a group without dirty tracking may allow the iommu scope
+	 * to be promoted.
+	 */
+	if (update_dirty_scope)
+		update_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 }
 
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index be2bd358b952..702e1d7b6e8b 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
 					struct iommu_group *group);
 	void		(*detach_group)(void *iommu_data,
 					struct iommu_group *group);
-	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
+	int		(*pin_pages)(void *iommu_data,
+				     struct iommu_group *group,
+				     unsigned long *user_pfn,
 				     int npage, int prot,
 				     unsigned long *phys_pfn);
 	int		(*unpin_pages)(void *iommu_data,
-- 
2.7.0
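For context, a hypothetical mdev vendor-driver fragment (the names
my_mdev_pin_one, mdev_dev, and gfn are illustrative, not from this series)
showing the pinning call whose first success now promotes the caller's
group to pinned-page dirty scope:

/*
 * Hypothetical vendor-driver fragment, for illustration only. With this
 * patch, the first successful vfio_pin_pages() call marks the caller's
 * IOMMU group as having pinned-page dirty scope, so pinned pages are
 * reported dirty instead of the whole mapping; the call now also fails
 * with -EINVAL if the group contains more than one device.
 */
#include <linux/iommu.h>
#include <linux/vfio.h>

static int my_mdev_pin_one(struct device *mdev_dev, unsigned long gfn,
			   unsigned long *pfn)
{
	unsigned long user_pfn = gfn;

	/* returns the number of pages pinned (here 1) or -errno */
	return vfio_pin_pages(mdev_dev, &user_pfn, 1,
			      IOMMU_READ | IOMMU_WRITE, pfn);
}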