From: Peter Lieven
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, Peter Lieven, ct@flyingcircus.io, qemu-block@nongnu.org, mreitz@redhat.com
Date: Tue, 14 Feb 2017 14:39:25 +0100
Message-Id: <1487079565-3548-1-git-send-email-pl@kamp.de>
Subject: [Qemu-devel] [RFC PATCH V3] qemu-img: make convert async

This is something I have been thinking about for almost 2 years now. We
heavily rely on the following two use cases of qemu-img convert:

 a) reading from NFS and writing to iSCSI for deploying templates
 b) reading from iSCSI and writing to NFS for backups

In both cases we use libiscsi and libnfs, so there is no kernel pagecache
involved.
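For reference, the two use cases correspond to invocations along the
following lines; the server names, exports, target IQN and LUN are
placeholders, not values from our setup:

# a) deploy a template: read via libnfs, write via libiscsi
qemu-img convert -p -O raw \
    nfs://nfs.example.com/export/templates/template.qcow2 \
    iscsi://portal.example.com/iqn.2017-02.com.example:target/0

# b) back up a volume: read via libiscsi, write via libnfs
qemu-img convert -p -O qcow2 \
    iscsi://portal.example.com/iqn.2017-02.com.example:target/0 \
    nfs://nfs.example.com/export/backups/backup.qcow2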
As qemu-img convert is implemented with synchronous operations, we read one
buffer and then write it out; there is no parallelism, and each request
blocks until it has completed.

This is version 3 of the approach using coroutine worker "threads".
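Since the mechanics are spread over several hunks, here is a minimal,
self-contained sketch of the same ordering scheme, written with POSIX
threads and a condition variable purely for illustration; the patch itself
uses QEMU coroutines and wakes only the coroutine waiting for the
just-finished write, while this sketch simply broadcasts. Workers pull
chunks from a shared counter, "read" them in parallel, and commit their
"writes" strictly in ascending order:

/* Illustration only: names and the pthread mechanism are not part of the
 * patch. N workers process chunks in parallel but write them in order. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_WORKERS 4
#define NUM_CHUNKS  16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int next_chunk;   /* next chunk to hand out (plays the request queue) */
static int wr_offs;      /* next chunk that is allowed to be written */

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_chunk >= NUM_CHUNKS) {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        int chunk = next_chunk++;
        pthread_mutex_unlock(&lock);

        usleep(1000 * (chunk % 3));        /* the "read" runs in parallel */

        pthread_mutex_lock(&lock);
        while (wr_offs != chunk) {         /* keep writes in order */
            pthread_cond_wait(&cond, &lock);
        }
        printf("write chunk %d\n", chunk); /* the "write" happens in order */
        wr_offs = chunk + 1;
        pthread_cond_broadcast(&cond);     /* wake whoever waits for wr_offs */
        pthread_mutex_unlock(&lock);
    }
}

int main(void)
{
    pthread_t th[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_create(&th[i], NULL, worker, NULL);
    }
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_join(th[i], NULL);
    }
    return 0;
}

With 4 workers the chunks are still written as 0..15 in order, which is the
property the wr_offs/wait_sector_num logic in the patch preserves while the
reads overlap.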
So far I have the following runtimes when reading an uncompressed QCOW2 from
NFS and writing it to iSCSI (raw):

qemu-img (master)
 nfs -> iscsi 22.8 secs
 nfs -> ram   11.7 secs
 ram -> iscsi 12.3 secs

qemu-img-async
 nfs -> iscsi 12.3 secs
 nfs -> ram   10.5 secs
 ram -> iscsi 11.7 secs

Comments appreciated.

Thank you,
Peter

Signed-off-by: Peter Lieven
---
v2->v3: - updated stats in the commit msg from a host with a better network card
        - only wake up the coroutine that is actually waiting for a write to
          complete. This was not only overhead, but also broke at least
          Linux AIO.
        - fix coding style complaints
        - rename some variables and structs

v1->v2: - using coroutines as worker "threads" [Max]
        - keeping the request queue, as otherwise we would end up waiting on
          BLK_ZERO chunks while preserving the write order. It also avoids
          redundant calls to get_block_status and helps to skip some
          conditions for fully allocated images (!s->min_sparse)

 qemu-img.c | 213 +++++++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 145 insertions(+), 68 deletions(-)

diff --git a/qemu-img.c b/qemu-img.c
index cff22e3..970863f 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -1448,6 +1448,16 @@ enum ImgConvertBlockStatus {
     BLK_BACKING_FILE,
 };
 
+typedef struct ImgConvertRequest {
+    int64_t sector_num;
+    enum ImgConvertBlockStatus status;
+    int nb_sectors;
+    QSIMPLEQ_ENTRY(ImgConvertRequest) next;
+} ImgConvertRequest;
+
+/* XXX: this should be a cmdline parameter */
+#define NUM_COROUTINES 8
+
 typedef struct ImgConvertState {
     BlockBackend **src;
     int64_t *src_sectors;
@@ -1455,6 +1465,8 @@ typedef struct ImgConvertState {
     int64_t src_cur_offset;
     int64_t total_sectors;
     int64_t allocated_sectors;
+    int64_t allocated_done;
+    int64_t wr_offs;
     enum ImgConvertBlockStatus status;
     int64_t sector_next_status;
     BlockBackend *target;
@@ -1464,11 +1476,16 @@ typedef struct ImgConvertState {
     int min_sparse;
     size_t cluster_sectors;
     size_t buf_sectors;
+    Coroutine *co[NUM_COROUTINES];
+    int64_t wait_sector_num[NUM_COROUTINES];
+    QSIMPLEQ_HEAD(, ImgConvertRequest) queue;
+    int ret;
 } ImgConvertState;
 
 static void convert_select_part(ImgConvertState *s, int64_t sector_num)
 {
-    assert(sector_num >= s->src_cur_offset);
+    s->src_cur_offset = 0;
+    s->src_cur = 0;
     while (sector_num - s->src_cur_offset >= s->src_sectors[s->src_cur]) {
         s->src_cur_offset += s->src_sectors[s->src_cur];
         s->src_cur++;
@@ -1544,11 +1561,13 @@ static int convert_iteration_sectors(ImgConvertState *s, int64_t sector_num)
     return n;
 }
 
-static int convert_read(ImgConvertState *s, int64_t sector_num, int nb_sectors,
-                        uint8_t *buf)
+static int convert_co_read(ImgConvertState *s, ImgConvertRequest *req,
+                           uint8_t *buf, QEMUIOVector *qiov)
 {
     int n;
     int ret;
+    int64_t sector_num = req->sector_num;
+    int nb_sectors = req->nb_sectors;
 
     assert(nb_sectors <= s->buf_sectors);
     while (nb_sectors > 0) {
@@ -1562,10 +1581,13 @@ static int convert_read(ImgConvertState *s, int64_t sector_num, int nb_sectors,
         blk = s->src[s->src_cur];
         bs_sectors = s->src_sectors[s->src_cur];
 
+        qemu_iovec_reset(qiov);
         n = MIN(nb_sectors, bs_sectors - (sector_num - s->src_cur_offset));
-        ret = blk_pread(blk,
-                        (sector_num - s->src_cur_offset) << BDRV_SECTOR_BITS,
-                        buf, n << BDRV_SECTOR_BITS);
+        qemu_iovec_add(qiov, buf, n << BDRV_SECTOR_BITS);
+
+        ret = blk_co_preadv(
+                blk, (sector_num - s->src_cur_offset) << BDRV_SECTOR_BITS,
+                n << BDRV_SECTOR_BITS, qiov, 0);
         if (ret < 0) {
             return ret;
         }
@@ -1578,15 +1600,18 @@ static int convert_read(ImgConvertState *s, int64_t sector_num, int nb_sectors,
     return 0;
 }
 
-static int convert_write(ImgConvertState *s, int64_t sector_num, int nb_sectors,
-                         const uint8_t *buf)
+
+static int convert_co_write(ImgConvertState *s, ImgConvertRequest *req,
+                            uint8_t *buf, QEMUIOVector *qiov)
 {
     int ret;
+    int64_t sector_num = req->sector_num;
+    int nb_sectors = req->nb_sectors;
 
     while (nb_sectors > 0) {
         int n = nb_sectors;
-
-        switch (s->status) {
+        qemu_iovec_reset(qiov);
+        switch (req->status) {
         case BLK_BACKING_FILE:
             /* If we have a backing file, leave clusters unallocated that are
              * unallocated in the source image, so that the backing file is
@@ -1607,9 +1632,10 @@ static int convert_write(ImgConvertState *s, int64_t sector_num, int nb_sectors,
                 break;
             }
 
-            ret = blk_pwrite_compressed(s->target,
-                                        sector_num << BDRV_SECTOR_BITS,
-                                        buf, n << BDRV_SECTOR_BITS);
+            qemu_iovec_add(qiov, buf, n << BDRV_SECTOR_BITS);
+            ret = blk_co_pwritev(s->target, sector_num << BDRV_SECTOR_BITS,
+                                 n << BDRV_SECTOR_BITS, qiov,
+                                 BDRV_REQ_WRITE_COMPRESSED);
             if (ret < 0) {
                 return ret;
             }
@@ -1622,8 +1648,9 @@ static int convert_write(ImgConvertState *s, int64_t sector_num, int nb_sectors,
             if (!s->min_sparse ||
                 is_allocated_sectors_min(buf, n, &n, s->min_sparse)) {
-                ret = blk_pwrite(s->target, sector_num << BDRV_SECTOR_BITS,
-                                 buf, n << BDRV_SECTOR_BITS, 0);
+                qemu_iovec_add(qiov, buf, n << BDRV_SECTOR_BITS);
+                ret = blk_co_pwritev(s->target, sector_num << BDRV_SECTOR_BITS,
+                                     n << BDRV_SECTOR_BITS, qiov, 0);
                 if (ret < 0) {
                     return ret;
                 }
@@ -1635,8 +1662,9 @@ static int convert_write(ImgConvertState *s, int64_t sector_num, int nb_sectors,
             if (s->has_zero_init) {
                 break;
             }
-            ret = blk_pwrite_zeroes(s->target, sector_num << BDRV_SECTOR_BITS,
-                                    n << BDRV_SECTOR_BITS, 0);
+            ret = blk_co_pwrite_zeroes(s->target,
+                                       sector_num << BDRV_SECTOR_BITS,
+                                       n << BDRV_SECTOR_BITS, 0);
             if (ret < 0) {
                 return ret;
             }
@@ -1651,12 +1679,92 @@ static int convert_write(ImgConvertState *s, int64_t sector_num, int nb_sectors,
     return 0;
 }
 
-static int convert_do_copy(ImgConvertState *s)
+static void convert_co_do_copy(void *opaque)
 {
+    ImgConvertState *s = opaque;
     uint8_t *buf = NULL;
-    int64_t sector_num, allocated_done;
+    int ret, i;
+    ImgConvertRequest *req, *next_req;
+    QEMUIOVector qiov;
+    int index = -1;
+
+    for (i = 0; i < NUM_COROUTINES; i++) {
+        if (s->co[i] == qemu_coroutine_self()) {
+            index = i;
+            break;
+        }
+    }
+    assert(index >= 0);
+
+    qemu_iovec_init(&qiov, 1);
+    buf = blk_blockalign(s->target, s->buf_sectors * BDRV_SECTOR_SIZE);
+
+    while (s->ret == -EINPROGRESS && (req = QSIMPLEQ_FIRST(&s->queue))) {
+        QSIMPLEQ_REMOVE_HEAD(&s->queue, next);
+        next_req = QSIMPLEQ_FIRST(&s->queue);
+
+        s->allocated_done += req->nb_sectors;
+        qemu_progress_print(100.0 * s->allocated_done / s->allocated_sectors,
+                            0);
+
+        if (req->status == BLK_DATA) {
+            ret = convert_co_read(s, req, buf, &qiov);
+            if (ret < 0) {
+                error_report("error while reading sector %" PRId64
+                             ": %s", req->sector_num, strerror(-ret));
+                s->ret = ret;
+                goto out;
+            }
+        }
+
+        /* keep writes in order */
+        while (s->wr_offs != req->sector_num) {
+            if (s->ret != -EINPROGRESS) {
+                goto out;
+            }
+            s->wait_sector_num[index] = req->sector_num;
+            qemu_coroutine_yield();
+        }
+        s->wait_sector_num[index] = -1;
+
+        ret = convert_co_write(s, req, buf, &qiov);
+        if (ret < 0) {
+            error_report("error while writing sector %" PRId64
+                         ": %s", req->sector_num, strerror(-ret));
+            s->ret = ret;
+            goto out;
+        }
+
+        if (!next_req) {
+            /* the convert job is completed */
+            s->ret = 0;
+            s->wr_offs = s->total_sectors;
+        } else {
+            s->wr_offs = next_req->sector_num;
+            /* reenter the coroutine that might have waited
+             * for this write completion */
+            for (i = 0; i < NUM_COROUTINES; i++) {
+                if (s->co[i] && s->wait_sector_num[i] == s->wr_offs) {
+                    qemu_coroutine_enter(s->co[i]);
+                    break;
+                }
+            }
+        }
+
+        g_free(req);
+    }
+
+out:
+    qemu_iovec_destroy(&qiov);
+    qemu_vfree(buf);
+    s->co[index] = NULL;
+}
+
+static int convert_do_copy(ImgConvertState *s)
+{
     int ret;
-    int n;
+    int i, n;
+    int64_t sector_num = 0;
 
     /* Check whether we have zero initialisation or can get it efficiently */
     s->has_zero_init = s->min_sparse && !s->target_has_backing
@@ -1682,69 +1790,39 @@ static int convert_do_copy(ImgConvertState *s)
         }
         s->buf_sectors = s->cluster_sectors;
     }
-    buf = blk_blockalign(s->target, s->buf_sectors * BDRV_SECTOR_SIZE);
 
-    /* Calculate allocated sectors for progress */
-    s->allocated_sectors = 0;
-    sector_num = 0;
+    QSIMPLEQ_INIT(&s->queue);
     while (sector_num < s->total_sectors) {
         n = convert_iteration_sectors(s, sector_num);
         if (n < 0) {
             ret = n;
             goto fail;
         }
+        if (s->status == BLK_DATA || (!s->min_sparse && s->status == BLK_ZERO)) {
+            ImgConvertRequest *elt = g_malloc(sizeof(ImgConvertRequest));
+            elt->sector_num = sector_num;
+            elt->status = s->status;
+            elt->nb_sectors = n;
             s->allocated_sectors += n;
+            QSIMPLEQ_INSERT_TAIL(&s->queue, elt, next);
         }
         sector_num += n;
     }
 
-    /* Do the copy */
-    s->src_cur = 0;
-    s->src_cur_offset = 0;
-    s->sector_next_status = 0;
-
-    sector_num = 0;
-    allocated_done = 0;
-
-    while (sector_num < s->total_sectors) {
-        n = convert_iteration_sectors(s, sector_num);
-        if (n < 0) {
-            ret = n;
-            goto fail;
-        }
-        if (s->status == BLK_DATA || (!s->min_sparse && s->status == BLK_ZERO))
-        {
-            allocated_done += n;
-            qemu_progress_print(100.0 * allocated_done / s->allocated_sectors,
-                                0);
-        }
-
-        if (s->status == BLK_DATA) {
-            ret = convert_read(s, sector_num, n, buf);
-            if (ret < 0) {
-                error_report("error while reading sector %" PRId64
-                             ": %s", sector_num, strerror(-ret));
-                goto fail;
-            }
-        } else if (!s->min_sparse && s->status == BLK_ZERO) {
-            n = MIN(n, s->buf_sectors);
-            memset(buf, 0, n * BDRV_SECTOR_SIZE);
-            s->status = BLK_DATA;
-        }
-
-        ret = convert_write(s, sector_num, n, buf);
-        if (ret < 0) {
-            error_report("error while writing sector %" PRId64
-                         ": %s", sector_num, strerror(-ret));
-            goto fail;
-        }
+    s->ret = -EINPROGRESS;
+    for (i = 0; i < NUM_COROUTINES; i++) {
+        s->co[i] = qemu_coroutine_create(convert_co_do_copy, s);
+        s->wait_sector_num[i] = -1;
+        qemu_coroutine_enter(s->co[i]);
+    }
 
-        sector_num += n;
+    while (s->ret == -EINPROGRESS) {
+        main_loop_wait(false);
     }
 
-    if (s->compressed) {
+    if (s->compressed && !s->ret) {
         /* signal EOF to align */
         ret = blk_pwrite_compressed(s->target, 0, NULL, 0);
         if (ret < 0) {
@@ -1752,9 +1830,8 @@ static int convert_do_copy(ImgConvertState *s)
         }
     }
 
-    ret = 0;
+    ret = s->ret;
 fail:
-    qemu_vfree(buf);
     return ret;
 }
 
-- 
1.9.1