From nobody Sat May 18 23:55:09 2024
From: Kunkun Jiang <jiangkunkun@huawei.com>
To: Alex Williamson, Kirti Wankhede, Liu Yi L,
    "open list:All patches CC here" <qemu-devel@nongnu.org>, Eric Auger
Subject: [PATCH] vfio: Support host translation granule size
Date: Thu, 4 Mar 2021 21:34:46 +0800
Message-ID: <20210304133446.1521-1-jiangkunkun@huawei.com>
Cc: Zenghui Yu, wanghaibin.wang@huawei.com, Keqian Zhu,
    shameerali.kolothum.thodi@huawei.com

cpu_physical_memory_set_dirty_lebitmap() can quickly handle the dirty
pages of memory by bitmap traversal, regardless of whether the bitmap
is aligned correctly. It expects the pages in the bitmap to be of the
host page size. So it is better to set bitmap_pgsize to the host page
size rather than the target page size, which supports more translation
granule sizes.

Fixes: 87ea529c502 (vfio: Get migration capability flags for container)
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
(Note for reviewers: a standalone sketch of the bitmap-size arithmetic,
not part of the patch, is appended after the diff.)

 hw/vfio/common.c | 44 ++++++++++++++++++++++----------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 6ff1daa763..69fb5083a4 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -378,7 +378,7 @@ static int vfio_dma_unmap_bitmap(VFIOContainer *container,
 {
     struct vfio_iommu_type1_dma_unmap *unmap;
     struct vfio_bitmap *bitmap;
-    uint64_t pages = TARGET_PAGE_ALIGN(size) >> TARGET_PAGE_BITS;
+    uint64_t pages = REAL_HOST_PAGE_ALIGN(size) / qemu_real_host_page_size;
     int ret;
 
     unmap = g_malloc0(sizeof(*unmap) + sizeof(*bitmap));
@@ -390,12 +390,12 @@ static int vfio_dma_unmap_bitmap(VFIOContainer *container,
     bitmap = (struct vfio_bitmap *)&unmap->data;
 
     /*
-     * cpu_physical_memory_set_dirty_lebitmap() expects pages in bitmap of
-     * TARGET_PAGE_SIZE to mark those dirty. Hence set bitmap_pgsize to
-     * TARGET_PAGE_SIZE.
+     * cpu_physical_memory_set_dirty_lebitmap() supports pages in bitmap of
+     * qemu_real_host_page_size to mark those dirty. Hence set bitmap_pgsize
+     * to qemu_real_host_page_size.
      */
 
-    bitmap->pgsize = TARGET_PAGE_SIZE;
+    bitmap->pgsize = qemu_real_host_page_size;
     bitmap->size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
                                                          BITS_PER_BYTE;
 
@@ -674,16 +674,16 @@ static void vfio_listener_region_add(MemoryListener *listener,
         return;
     }
 
-    if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
-                 (section->offset_within_region & ~TARGET_PAGE_MASK))) {
+    if (unlikely((section->offset_within_address_space & ~qemu_real_host_page_mask) !=
+                 (section->offset_within_region & ~qemu_real_host_page_mask))) {
         error_report("%s received unaligned region", __func__);
         return;
     }
 
-    iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
+    iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);
     llend = int128_make64(section->offset_within_address_space);
     llend = int128_add(llend, section->size);
-    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
+    llend = int128_and(llend, int128_exts64(qemu_real_host_page_mask));
 
     if (int128_ge(int128_make64(iova), llend)) {
         return;
@@ -892,8 +892,8 @@ static void vfio_listener_region_del(MemoryListener *listener,
         return;
     }
 
-    if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
-                 (section->offset_within_region & ~TARGET_PAGE_MASK))) {
+    if (unlikely((section->offset_within_address_space & ~qemu_real_host_page_mask) !=
+                 (section->offset_within_region & ~qemu_real_host_page_mask))) {
         error_report("%s received unaligned region", __func__);
         return;
     }
@@ -921,10 +921,10 @@ static void vfio_listener_region_del(MemoryListener *listener,
          */
     }
 
-    iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
+    iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);
     llend = int128_make64(section->offset_within_address_space);
     llend = int128_add(llend, section->size);
-    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
+    llend = int128_and(llend, int128_exts64(qemu_real_host_page_mask));
 
     if (int128_ge(int128_make64(iova), llend)) {
         return;
@@ -1004,13 +1004,13 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
     range->size = size;
 
     /*
-     * cpu_physical_memory_set_dirty_lebitmap() expects pages in bitmap of
-     * TARGET_PAGE_SIZE to mark those dirty. Hence set bitmap's pgsize to
-     * TARGET_PAGE_SIZE.
+     * cpu_physical_memory_set_dirty_lebitmap() supports pages in bitmap of
+     * qemu_real_host_page_size to mark those dirty. Hence set bitmap's pgsize
+     * to qemu_real_host_page_size.
      */
-    range->bitmap.pgsize = TARGET_PAGE_SIZE;
+    range->bitmap.pgsize = qemu_real_host_page_size;
 
-    pages = TARGET_PAGE_ALIGN(range->size) >> TARGET_PAGE_BITS;
+    pages = REAL_HOST_PAGE_ALIGN(range->size) / qemu_real_host_page_size;
     range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
                                                          BITS_PER_BYTE;
     range->bitmap.data = g_try_malloc0(range->bitmap.size);
@@ -1114,7 +1114,7 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *container,
                section->offset_within_region;
 
     return vfio_get_dirty_bitmap(container,
-                   TARGET_PAGE_ALIGN(section->offset_within_address_space),
+                   REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
                    int128_get64(section->size), ram_addr);
 }
 
@@ -1655,10 +1655,10 @@ static void vfio_get_iommu_info_migration(VFIOContainer *container,
                            header);
 
     /*
-     * cpu_physical_memory_set_dirty_lebitmap() expects pages in bitmap of
-     * TARGET_PAGE_SIZE to mark those dirty.
+     * cpu_physical_memory_set_dirty_lebitmap() supports pages in bitmap of
+     * qemu_real_host_page_size to mark those dirty.
      */
-    if (cap_mig->pgsize_bitmap & TARGET_PAGE_SIZE) {
+    if (cap_mig->pgsize_bitmap & qemu_real_host_page_size) {
         container->dirty_pages_supported = true;
         container->max_dirty_bitmap_size = cap_mig->max_dirty_bitmap_size;
         container->dirty_pgsizes = cap_mig->pgsize_bitmap;
-- 
2.23.0
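
As a quick illustration of the arithmetic this patch switches to, below is
a minimal standalone sketch (not part of the patch). host_page_size stands
in for QEMU's qemu_real_host_page_size, ROUND_UP here is a division-based
equivalent of QEMU's macro, and the region size is an arbitrary example:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define BITS_PER_BYTE 8
/* Division-based stand-in for QEMU's ROUND_UP() macro. */
#define ROUND_UP(n, d) ((((n) + (d) - 1) / (d)) * (d))

int main(void)
{
    /* Stand-in for qemu_real_host_page_size. */
    uint64_t host_page_size = (uint64_t)sysconf(_SC_PAGESIZE);
    uint64_t size = 0x201000;       /* arbitrary example region size */

    /* REAL_HOST_PAGE_ALIGN(size) / qemu_real_host_page_size:
     * round the region up to whole host pages, then count them. */
    uint64_t pages = ROUND_UP(size, host_page_size) / host_page_size;

    /* One dirty bit per host page, padded to whole 64-bit words, mirroring
     * ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) / BITS_PER_BYTE. */
    uint64_t bitmap_bytes =
        ROUND_UP(pages, sizeof(uint64_t) * BITS_PER_BYTE) / BITS_PER_BYTE;

    printf("host page size: %" PRIu64 "\n", host_page_size);
    printf("pages: %" PRIu64 ", bitmap bytes: %" PRIu64 "\n",
           pages, bitmap_bytes);
    return 0;
}

On a 4 KiB-page host the example region covers 513 pages (72 bitmap bytes);
on a 64 KiB-page host it covers 33 pages, so the bitmap shrinks to 8 bytes.
Sizing by TARGET_PAGE_SIZE on such a host would disagree with the
host-page-granular bitmap the kernel fills in, which is why the granule
must follow the host page size.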