From: Kirti Wankhede
Subject: [PATCH v9 Kernel 1/5] vfio: KABI for migration interface for device state
Date: Tue, 12 Nov 2019 22:33:36 +0530
Message-ID: <1573578220-7530-2-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1573578220-7530-1-git-send-email-kwankhede@nvidia.com>

- Defined MIGRATION region type and sub-type.

- Used 3 bits to define VFIO device states:
    Bit 0 => _RUNNING
    Bit 1 => _SAVING
    Bit 2 => _RESUMING
  Combinations of these bits define the VFIO device's state during migration:
    _RUNNING              => normal VFIO device running state. When this bit
                             is cleared the device is in the _STOPPED state;
                             the driver should stop the device before the
                             write() that clears it returns.
    _SAVING | _RUNNING    => vCPUs are running and the VFIO device is running,
                             but the driver starts saving device state, i.e.
                             the pre-copy phase.
    _SAVING               => vCPUs are stopped, the VFIO device should be
                             stopped and its state saved, i.e. the
                             stop-and-copy phase.
    _RESUMING             => VFIO device resuming state.
    _SAVING | _RESUMING,
    _RUNNING | _RESUMING  => invalid states.
  Bits 3 - 31 are reserved for future use. The user should perform a
  read-modify-write operation on this field.

- Defined the vfio_device_migration_info structure, which is placed at offset 0
  of the migration region to get/set VFIO device related information.
  Members of the structure and their usage on read/write access:

  * device_state: (read/write)
        Conveys the state the VFIO device should be transitioned to. Only 3
        bits are used for now; bits 3 - 31 are reserved for future use.

  * pending_bytes: (read only)
        Number of bytes still to be migrated for the VFIO device.

  * data_offset: (read only)
        Offset in the migration region from which data can be read during
        _SAVING, and to which data should be written by the user space
        application during _RESUMING.

  * data_size: (read/write)
        Size in bytes of the data copied into the migration region during
        _SAVING, and of the data written by user space during _RESUMING.
Migration region looks like:
 ------------------------------------------------------------------
|vfio_device_migration_info|          data section                 |
|                          |   ///////////////////////////////     |
 ------------------------------------------------------------------
 ^                              ^
 offset 0-trapped part          data_offset

The vfio_device_migration_info structure is always followed by the data
section in the region, so data_offset will always be non-zero. The offset
from which data is copied is decided by the kernel driver; the data section
can be trapped or mapped depending on how the kernel driver defines it.
A data section partition can be exposed as mapped via the sparse mmap
capability. If mmapped, data_offset should be page aligned, whereas the
initial section containing the vfio_device_migration_info structure might
not end at a page-aligned offset. The vendor driver decides whether and how
to partition the data section, and returns data_offset accordingly.

For the user application the data is opaque. The user should write data in
the same order as it was received.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 include/uapi/linux/vfio.h | 108 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 9e843a147ead..35b09427ad9f 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
 #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
 #define VFIO_REGION_TYPE_GFX			(1)
 #define VFIO_REGION_TYPE_CCW			(2)
+#define VFIO_REGION_TYPE_MIGRATION		(3)
 
 /* sub-types for VFIO_REGION_TYPE_PCI_* */
 
@@ -379,6 +380,113 @@ struct vfio_region_gfx_edid {
 /* sub-types for VFIO_REGION_TYPE_CCW */
 #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
 
+/* sub-types for VFIO_REGION_TYPE_MIGRATION */
+#define VFIO_REGION_SUBTYPE_MIGRATION	(1)
+
+/*
+ * Structure vfio_device_migration_info is placed at 0th offset of
+ * VFIO_REGION_SUBTYPE_MIGRATION region to get/set VFIO device related migration
+ * information. Field accesses from this structure are only supported at their
+ * native width and alignment, otherwise the result is undefined and vendor
+ * drivers should return an error.
+ *
+ * device_state: (read/write)
+ *      To indicate vendor driver the state VFIO device should be transitioned
+ *      to. If device state transition fails, write on this field return error.
+ *      It consists of 3 bits:
+ *      - If bit 0 set, indicates _RUNNING state. When its reset, that indicates
+ *        _STOPPED state. When device is changed to _STOPPED, driver should stop
+ *        device before write() returns.
+ *      - If bit 1 set, indicates _SAVING state. When set, that indicates driver
+ *        should start gathering device state information which will be provided
+ *        to VFIO user space application to save device's state.
+ *      - If bit 2 set, indicates _RESUMING state. When set, that indicates
+ *        prepare to resume device, data provided through migration region
+ *        should be used to resume device.
+ *      Bits 3 - 31 are reserved for future use. User should perform
+ *      read-modify-write operation on this field.
+ *      _SAVING and _RESUMING bits set at the same time is invalid state.
+ *      Similarly _RUNNING and _RESUMING bits set is invalid state.
+ *
+ * pending bytes: (read only)
+ *      Number of pending bytes yet to be migrated from vendor driver
+ *
+ * data_offset: (read only)
+ *      User application should read data_offset in migration region from where
+ *      user application should read device data during _SAVING state or write
+ *      device data during _RESUMING state. See below for detail of sequence to
+ *      be followed.
+ *
+ * data_size: (read/write)
+ *      User application should read data_size to get size of data copied in
+ *      bytes in migration region during _SAVING state and write size of data
+ *      copied in bytes in migration region during _RESUMING state.
+ *
+ * Migration region looks like:
+ *  ------------------------------------------------------------------
+ * |vfio_device_migration_info|          data section                 |
+ * |                          |   ///////////////////////////////     |
+ *  ------------------------------------------------------------------
+ *   ^                              ^
+ *  offset 0-trapped part           data_offset
+ *
+ * Structure vfio_device_migration_info is always followed by data section in
+ * the region, so data_offset will always be non-0. Offset from where data is
+ * copied is decided by kernel driver, data section can be trapped or mapped
+ * or partitioned, depending on how kernel driver defines data section.
+ * Data section partition can be defined as mapped by sparse mmap capability.
+ * If mmapped, then data_offset should be page aligned, where as initial section
+ * which contain vfio_device_migration_info structure might not end at offset
+ * which is page aligned.
+ * Vendor driver should decide whether to partition data section and how to
+ * partition the data section. Vendor driver should return data_offset
+ * accordingly.
+ *
+ * Sequence to be followed for _SAVING|_RUNNING device state or pre-copy phase
+ * and for _SAVING device state or stop-and-copy phase:
+ * a. read pending_bytes. If pending_bytes > 0, go through below steps.
+ * b. read data_offset, indicates kernel driver to write data to staging buffer.
+ *    Kernel driver should return this read operation only after writing data to
+ *    staging buffer is done.
+ * c. read data_size, amount of data in bytes written by vendor driver in
+ *    migration region.
+ * d. read data_size bytes of data from data_offset in the migration region.
+ * e. process data.
+ * f. Loop through a to e. Next read on pending_bytes indicates that read data
+ *    operation from migration region for previous iteration is done.
+ *
+ * Sequence to be followed while _RESUMING device state:
+ * While data for this device is available, repeat below steps:
+ * a. read data_offset from where user application should write data.
+ * b. write data of data_size to migration region from data_offset.
+ * c. write data_size which indicates vendor driver that data is written in
+ *    staging buffer. Vendor driver should read this data from migration
+ *    region and resume device's state.
+ *
+ * For user application, data is opaque. User should write data in the same
+ * order as received.
+ */
+
+struct vfio_device_migration_info {
+	__u32 device_state;         /* VFIO device state */
+#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
+#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
+#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
+#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
+				     VFIO_DEVICE_STATE_SAVING |  \
+				     VFIO_DEVICE_STATE_RESUMING)
+
+#define VFIO_DEVICE_STATE_INVALID_CASE1    (VFIO_DEVICE_STATE_SAVING | \
+					    VFIO_DEVICE_STATE_RESUMING)
+
+#define VFIO_DEVICE_STATE_INVALID_CASE2    (VFIO_DEVICE_STATE_RUNNING | \
+					    VFIO_DEVICE_STATE_RESUMING)
+	__u32 reserved;
+	__u64 pending_bytes;
+	__u64 data_offset;
+	__u64 data_size;
+} __attribute__((packed));
+
 /*
  * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
  * which allows direct access to non-MSIX registers which happened to be within
-- 
2.7.0
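
[Editor's note] To make the register layout and the pre-copy sequence above concrete, here is a minimal user-space sketch of the _SAVING loop (steps a-f). It is an illustration, not part of the patch: device_fd and region_off are assumed to have been obtained via VFIO_GROUP_GET_DEVICE_FD and VFIO_DEVICE_GET_REGION_INFO, save_chunk() is a hypothetical callback that forwards the opaque data to the destination, and the code is compiled against a vfio.h that already contains this patch.

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <linux/vfio.h>

/* Assumes device_state already has _SAVING (and optionally _RUNNING) set. */
static int save_device_state(int device_fd, off_t region_off,
			     int (*save_chunk)(const void *buf, uint64_t len))
{
	uint64_t pending, data_offset, data_size;

	for (;;) {
		/* a. read pending_bytes; 0 means no more data in this phase */
		if (pread(device_fd, &pending, sizeof(pending), region_off +
			  offsetof(struct vfio_device_migration_info,
				   pending_bytes)) != sizeof(pending))
			return -1;
		if (!pending)
			break;

		/* b. read data_offset: the driver stages data before returning */
		pread(device_fd, &data_offset, sizeof(data_offset), region_off +
		      offsetof(struct vfio_device_migration_info, data_offset));

		/* c. read data_size: amount of staged data in bytes */
		pread(device_fd, &data_size, sizeof(data_size), region_off +
		      offsetof(struct vfio_device_migration_info, data_size));

		/* d. read data_size bytes from data_offset, e. process them */
		void *buf = malloc(data_size);
		if (!buf)
			return -1;
		pread(device_fd, buf, data_size, region_off + data_offset);
		save_chunk(buf, data_size);
		free(buf);
		/* f. loop; the next pending_bytes read ends this iteration */
	}
	return 0;
}

The _RESUMING direction is the mirror image: read data_offset, pwrite() the received chunk at that offset, then pwrite() its length to data_size.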

From: Kirti Wankhede
Subject: [PATCH v9 Kernel 2/5] vfio iommu: Add ioctl definition to get dirty pages bitmap.
Date: Tue, 12 Nov 2019 22:33:37 +0530
Message-ID: <1573578220-7530-3-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1573578220-7530-1-git-send-email-kwankhede@nvidia.com>

All pages pinned by a vendor driver through the vfio_pin_pages API should be
considered dirty during migration. The IOMMU container maintains a list of
all such pinned pages. Added an ioctl definition to get a bitmap of those
pinned pages for a requested IO virtual address range.
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 include/uapi/linux/vfio.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 35b09427ad9f..6fd3822aa610 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -902,6 +902,29 @@ struct vfio_iommu_type1_dma_unmap {
 #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
 #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
 
+/**
+ * VFIO_IOMMU_GET_DIRTY_BITMAP - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *                                     struct vfio_iommu_type1_dirty_bitmap)
+ *
+ * IOCTL to get dirty pages bitmap for IOMMU container during migration.
+ * Get dirty pages bitmap of given IO virtual addresses range using
+ * struct vfio_iommu_type1_dirty_bitmap. Caller sets argsz, which is size of
+ * struct vfio_iommu_type1_dirty_bitmap. User should allocate memory to get
+ * bitmap and should set size of allocated memory in bitmap_size field.
+ * One bit is used to represent per page consecutively starting from iova
+ * offset. Bit set indicates page at that offset from iova is dirty.
+ */
+struct vfio_iommu_type1_dirty_bitmap {
+	__u32        argsz;
+	__u32        flags;
+	__u64        iova;		/* IO virtual address */
+	__u64        size;		/* Size of iova range */
+	__u64        bitmap_size;	/* in bytes */
+	void __user *bitmap;		/* one bit per page */
+};
+
+#define VFIO_IOMMU_GET_DIRTY_BITMAP _IO(VFIO_TYPE, VFIO_BASE + 17)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.7.0
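
[Editor's note] For illustration only (not part of the patch), a hypothetical user-space caller of the new ioctl could look like the sketch below. container_fd is assumed to be an open VFIO type1 (v2) container with the iova range already mapped; the bitmap sizing mirrors the one-bit-per-page rule above, rounded up to 64-bit words, and a 4 KiB page size is assumed.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

#define PAGE_SZ 4096ULL		/* assumption: 4 KiB pages */

/* Returns a caller-owned bitmap; bit N set => page at iova + N * PAGE_SZ is dirty. */
static unsigned long *get_dirty_bitmap(int container_fd, uint64_t iova,
				       uint64_t size)
{
	struct vfio_iommu_type1_dirty_bitmap range;
	uint64_t npages = size / PAGE_SZ;
	uint64_t bitmap_size = ((npages + 63) / 64) * 8;  /* bytes, 64-bit aligned */
	unsigned long *bitmap = calloc(1, bitmap_size);

	if (!bitmap)
		return NULL;

	memset(&range, 0, sizeof(range));
	range.argsz = sizeof(range);
	range.iova = iova;
	range.size = size;
	range.bitmap_size = bitmap_size;
	range.bitmap = bitmap;

	if (ioctl(container_fd, VFIO_IOMMU_GET_DIRTY_BITMAP, &range)) {
		free(bitmap);
		return NULL;
	}
	return bitmap;
}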

From: Kirti Wankhede
Subject: [PATCH v9 Kernel 3/5] vfio iommu: Add ioctl definition to unmap IOVA and return dirty bitmap
Date: Tue, 12 Nov 2019 22:33:38 +0530
Message-ID: <1573578220-7530-4-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1573578220-7530-1-git-send-email-kwankhede@nvidia.com>

With a vIOMMU, during the pre-copy phase of migration, while CPUs are still
running, an IO virtual address range can be unmapped while the device still
holds references to the guest pfns backing it.
Those pages should be reported as dirty before the unmap, so that the VFIO
user space application can copy their content from source to destination.

The ioctl definition added here adds a bitmap pointer, a bitmap size and a
flag. If the VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, bitmap memory
is allocated and bitmap_size is set, the ioctl first creates a bitmap of the
pinned pages and then unmaps them.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 include/uapi/linux/vfio.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 6fd3822aa610..72fd297baf52 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -925,6 +925,39 @@ struct vfio_iommu_type1_dirty_bitmap {
 
 #define VFIO_IOMMU_GET_DIRTY_BITMAP _IO(VFIO_TYPE, VFIO_BASE + 17)
 
+/**
+ * VFIO_IOMMU_UNMAP_DMA_GET_BITMAP - _IOWR(VFIO_TYPE, VFIO_BASE + 18,
+ *				struct vfio_iommu_type1_dma_unmap_bitmap)
+ *
+ * Unmap IO virtual addresses using the provided struct
+ * vfio_iommu_type1_dma_unmap_bitmap. Caller sets argsz.
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
+ * before unmapping IO virtual addresses. If this flag is not set, only IO
+ * virtual address are unmapped without creating pinned pages bitmap, that
+ * is, behave same as VFIO_IOMMU_UNMAP_DMA ioctl.
+ * User should allocate memory to get bitmap and should set size of allocated
+ * memory in bitmap_size field. One bit in bitmap is used to represent per page
+ * consecutively starting from iova offset. Bit set indicates page at that
+ * offset from iova is dirty.
+ * The actual unmapped size is returned in the size field and bitmap of pages
+ * in the range of unmapped size is returned in bitmap if flag
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP is set.
+ *
+ * No guarantee is made to the user that arbitrary unmaps of iova or size
+ * different from those used in the original mapping call will succeed.
+ */
+struct vfio_iommu_type1_dma_unmap_bitmap {
+	__u32        argsz;
+	__u32        flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
+	__u64        iova;		/* IO virtual address */
+	__u64        size;		/* Size of mapping (bytes) */
+	__u64        bitmap_size;	/* in bytes */
+	void __user *bitmap;		/* one bit per page */
+};
+
+#define VFIO_IOMMU_UNMAP_DMA_GET_BITMAP _IO(VFIO_TYPE, VFIO_BASE + 18)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.7.0
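
[Editor's note] As an illustration of the intended call flow (not part of the patch), unmapping with dirty-bitmap collection could look roughly like this; container_fd, the bitmap allocation and the page-size handling follow the same assumptions as the previous sketch.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int unmap_and_get_dirty(int container_fd, uint64_t iova, uint64_t size,
			       unsigned long *bitmap, uint64_t bitmap_size)
{
	struct vfio_iommu_type1_dma_unmap_bitmap unmap = {
		.argsz = sizeof(unmap),
		.flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP,
		.iova = iova,
		.size = size,
		.bitmap_size = bitmap_size,
		.bitmap = bitmap,
	};

	if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA_GET_BITMAP, &unmap))
		return -1;

	/* on success unmap.size holds the actual unmapped size */
	return unmap.size == size ? 0 : -1;
}

Without the flag (flags = 0 and bitmap left NULL), the call degenerates to the existing VFIO_IOMMU_UNMAP_DMA behaviour, as the comment above describes.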

From: Kirti Wankhede
Subject: [PATCH v9 Kernel 4/5] vfio iommu: Implementation of ioctl to get dirty pages bitmap.
Date: Tue, 12 Nov 2019 22:33:39 +0530
Message-ID: <1573578220-7530-5-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1573578220-7530-1-git-send-email-kwankhede@nvidia.com>

The IOMMU container maintains a list of externally pinned pages. A bitmap of
the pinned pages in the requested IO virtual address range is created and
returned. The IO virtual address range must come from a single mapping
created by a map request. The input bitmap_size is validated against the
size calculated from the requested range.

This ioctl returns a bitmap of dirty pages; it is the user space
application's responsibility to copy the content of the dirty pages from
source to destination during migration.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio_iommu_type1.c | 92 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 92 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2ada8e6cdb88..ac176e672857 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -850,6 +850,81 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	return bitmap;
 }
 
+/*
+ * start_iova is the reference from where bitmapping started.
+ * This is called from DMA_UNMAP, where start_iova can be different from iova.
+ */
+
+static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
+				  size_t size, dma_addr_t start_iova,
+				  unsigned long *bitmap)
+{
+	struct vfio_dma *dma;
+	dma_addr_t temp_iova = iova;
+
+	dma = vfio_find_dma(iommu, iova, size);
+	if (!dma)
+		return -EINVAL;
+
+	/*
+	 * Range should be from a single mapping created by map request.
+	 */
+
+	if ((iova < dma->iova) ||
+	    ((dma->iova + dma->size) < (iova + size)))
+		return -EINVAL;
+
+	while (temp_iova < iova + size) {
+		struct vfio_pfn *vpfn = NULL;
+
+		vpfn = vfio_find_vpfn(dma, temp_iova);
+		if (vpfn)
+			__bitmap_set(bitmap, (vpfn->iova - start_iova) >> PAGE_SHIFT, 1);
+
+		temp_iova += PAGE_SIZE;
+	}
+
+	return 0;
+}
+
+static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
+{
+	unsigned long bsize = ALIGN(npages, BITS_PER_LONG) / 8;
+
+	if ((bitmap_size == 0) || (bitmap_size < bsize))
+		return -EINVAL;
+	return 0;
+}
+
+static int vfio_iova_get_dirty_bitmap(struct vfio_iommu *iommu,
+			struct vfio_iommu_type1_dirty_bitmap *range)
+{
+	unsigned long *bitmap;
+	int ret;
+
+	ret = verify_bitmap_size(range->size >> PAGE_SHIFT, range->bitmap_size);
+	if (ret)
+		return ret;
+
+	/* one bit per page */
+	bitmap = bitmap_zalloc(range->size >> PAGE_SHIFT, GFP_KERNEL);
+	if (!bitmap)
+		return -ENOMEM;
+
+	mutex_lock(&iommu->lock);
+	ret = vfio_iova_dirty_bitmap(iommu, range->iova, range->size,
+				     range->iova, bitmap);
+	mutex_unlock(&iommu->lock);
+
+	if (!ret) {
+		if (copy_to_user(range->bitmap, bitmap, range->bitmap_size))
+			ret = -EFAULT;
+	}
+
+	bitmap_free(bitmap);
+	return ret;
+}
+
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			     struct vfio_iommu_type1_dma_unmap *unmap)
 {
@@ -2297,6 +2372,23 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		return copy_to_user((void __user *)arg, &unmap, minsz) ?
 			-EFAULT : 0;
+	} else if (cmd == VFIO_IOMMU_GET_DIRTY_BITMAP) {
+		struct vfio_iommu_type1_dirty_bitmap range;
+
+		/* Supported for v2 version only */
+		if (!iommu->v2)
+			return -EACCES;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
+				    bitmap);
+
+		if (copy_from_user(&range, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (range.argsz < minsz)
+			return -EINVAL;
+
+		return vfio_iova_get_dirty_bitmap(iommu, &range);
 	}
 
 	return -ENOTTY;
-- 
2.7.0
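
[Editor's note] To show how user space would consume the result (illustration only, not part of the patch): with the one-bit-per-page layout described in the uapi comment, bit N of the returned bitmap corresponds to the page at iova + N * page_size.

#include <stdint.h>
#include <stdio.h>

static void dump_dirty_pages(const unsigned long *bitmap, uint64_t iova,
			     uint64_t npages, uint64_t page_size)
{
	const unsigned int bits_per_word = 8 * sizeof(unsigned long);

	for (uint64_t n = 0; n < npages; n++) {
		if (bitmap[n / bits_per_word] & (1UL << (n % bits_per_word)))
			printf("dirty page at iova 0x%llx\n",
			       (unsigned long long)(iova + n * page_size));
	}
}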

From: Kirti Wankhede
Subject: [PATCH v9 Kernel 5/5] vfio iommu: Implementation of ioctl to get dirty bitmap before unmap
Date: Tue, 12 Nov 2019 22:33:40 +0530
Message-ID: <1573578220-7530-6-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1573578220-7530-1-git-send-email-kwankhede@nvidia.com>

If pages in the requested IO virtual address range are pinned by an external
interface, a bitmap of those pages is created and then the range is
unmapped. To get the bitmap during unmap, the user should set the
VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag, allocate the bitmap memory and
set bitmap_size. If the flag is not set, the ioctl behaves the same as
VFIO_IOMMU_UNMAP_DMA.
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 drivers/vfio/vfio_iommu_type1.c | 71 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 69 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index ac176e672857..d6b988452ba6 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -926,7 +926,8 @@ static int vfio_iova_get_dirty_bitmap(struct vfio_iommu *iommu,
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
-			     struct vfio_iommu_type1_dma_unmap *unmap)
+			     struct vfio_iommu_type1_dma_unmap *unmap,
+			     unsigned long *bitmap)
 {
 	uint64_t mask;
 	struct vfio_dma *dma, *dma_last = NULL;
@@ -1026,6 +1027,12 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 						    &nb_unmap);
 			goto again;
 		}
+
+		if (bitmap) {
+			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
+					       unmap->iova, bitmap);
+		}
+
 		unmapped += dma->size;
 		vfio_remove_dma(iommu, dma);
 	}
@@ -1039,6 +1046,43 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 	return ret;
 }
 
+static int vfio_dma_do_unmap_bitmap(struct vfio_iommu *iommu,
+		    struct vfio_iommu_type1_dma_unmap_bitmap *unmap_bitmap)
+{
+	struct vfio_iommu_type1_dma_unmap unmap;
+	unsigned long *bitmap = NULL;
+	int ret;
+
+	/* check bitmap size */
+	if (unmap_bitmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+		ret = verify_bitmap_size(unmap_bitmap->size >> PAGE_SHIFT,
+					 unmap_bitmap->bitmap_size);
+		if (ret)
+			return ret;
+
+		/* one bit per page */
+		bitmap = bitmap_zalloc(unmap_bitmap->size >> PAGE_SHIFT,
+				       GFP_KERNEL);
+		if (!bitmap)
+			return -ENOMEM;
+	}
+
+	unmap.iova = unmap_bitmap->iova;
+	unmap.size = unmap_bitmap->size;
+	ret = vfio_dma_do_unmap(iommu, &unmap, bitmap);
+	if (!ret)
+		unmap_bitmap->size = unmap.size;
+
+	if (bitmap) {
+		if (!ret && copy_to_user(unmap_bitmap->bitmap, bitmap,
+					 unmap_bitmap->bitmap_size))
+			ret = -EFAULT;
+		bitmap_free(bitmap);
+	}
+
+	return ret;
+}
+
 static int vfio_iommu_map(struct vfio_iommu *iommu, dma_addr_t iova,
 			  unsigned long pfn, long npage, int prot)
 {
@@ -2366,7 +2410,7 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 		if (unmap.argsz < minsz || unmap.flags)
 			return -EINVAL;
 
-		ret = vfio_dma_do_unmap(iommu, &unmap);
+		ret = vfio_dma_do_unmap(iommu, &unmap, NULL);
 		if (ret)
 			return ret;
 
@@ -2389,6 +2433,29 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 			return -EINVAL;
 
 		return vfio_iova_get_dirty_bitmap(iommu, &range);
+	} else if (cmd == VFIO_IOMMU_UNMAP_DMA_GET_BITMAP) {
+		struct vfio_iommu_type1_dma_unmap_bitmap unmap_bitmap;
+		long ret;
+
+		/* Supported for v2 version only */
+		if (!iommu->v2)
+			return -EACCES;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap_bitmap,
+				    bitmap);
+
+		if (copy_from_user(&unmap_bitmap, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (unmap_bitmap.argsz < minsz)
+			return -EINVAL;
+
+		ret = vfio_dma_do_unmap_bitmap(iommu, &unmap_bitmap);
+		if (ret)
+			return ret;
+
+		return copy_to_user((void __user *)arg, &unmap_bitmap, minsz) ?
+			-EFAULT : 0;
 	}
 
 	return -ENOTTY;
-- 
2.7.0