[Qemu-devel] [PATCH v4] blockjob: drain all job nodes in block_job_drain

Vladimir Sementsov-Ogievskiy posted 1 patch 4 years, 8 months ago
Test docker-clang@ubuntu passed
Test s390x passed
Test asan passed
Test docker-mingw@fedora passed
Test FreeBSD passed
Test checkpatch passed
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/20190802095239.31975-1-vsementsov@virtuozzo.com
Maintainers: John Snow <jsnow@redhat.com>, Max Reitz <mreitz@redhat.com>, Kevin Wolf <kwolf@redhat.com>
[Qemu-devel] [PATCH v4] blockjob: drain all job nodes in block_job_drain
Posted by Vladimir Sementsov-Ogievskiy 4 years, 8 months ago
Instead of draining additional nodes in each job's own code, do it in the
common block_job_drain by draining all of the job's children.
BlockJobDriver.drain becomes unused, so drop it altogether.

It's also a first step to finally get rid of blockjob->blk.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---

v4: keep ref/unref around draining of the job's nodes [John, Max]

v3: just a resend, as I got some auto-returned mails and am not sure that
    v2 reached its recipients.

v2: apply Max's suggestions:
 - drop BlockJobDriver.drain
 - first do a loop of bdrv_drained_begin and then a separate loop
   of bdrv_drained_end.

   Hmm, a question here: should I call bdrv_drained_end in reverse
   order? Or is it OK as is?
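
The two-loop structure described above (take references, enter all drained
sections, then leave them and drop the references) can be sketched outside of
QEMU with a toy refcounted node. Note this is only an illustration: Node,
node_ref/node_unref, drained_begin/drained_end and drain_all are hypothetical
stand-ins for BlockDriverState, bdrv_ref/bdrv_unref,
bdrv_drained_begin/bdrv_drained_end and block_job_drain, not the real API.

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Toy stand-in for BlockDriverState: a refcount plus a drained-section
 * nesting counter (similar in spirit to bs->quiesce_counter).
 */
typedef struct Node {
    int refcnt;
    int quiesce;
} Node;

static Node *node_new(void)
{
    Node *n = calloc(1, sizeof(*n));
    n->refcnt = 1;
    return n;
}

static void node_ref(Node *n)   { n->refcnt++; }
static void node_unref(Node *n) { if (--n->refcnt == 0) { free(n); } }

static void drained_begin(Node *n) { n->quiesce++; }
static void drained_end(Node *n)   { n->quiesce--; }

/*
 * Mirrors the shape of the patched block_job_drain(): reference every
 * node first, then enter all drained sections, and only afterwards
 * leave them and drop the references.  Holding the extra reference
 * keeps a node alive even if draining completes the job and the job
 * drops its own reference in the meantime.
 */
static void drain_all(Node **nodes, int count)
{
    int i;

    for (i = 0; i < count; i++) {
        node_ref(nodes[i]);
    }
    for (i = 0; i < count; i++) {
        drained_begin(nodes[i]);
    }
    /* all nodes are quiescent at this point */
    for (i = 0; i < count; i++) {
        drained_end(nodes[i]);
        node_unref(nodes[i]);
    }
}
```

Whether the drained_end loop should run in reverse order is exactly the open
question raised above; the sketch keeps forward order, as the patch does.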

 include/block/blockjob_int.h | 11 -----------
 block/backup.c               | 18 +-----------------
 block/mirror.c               | 26 +++-----------------------
 blockjob.c                   | 22 +++++++++++++++++-----
 4 files changed, 21 insertions(+), 56 deletions(-)

diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index e4a318dd15..e1abf4ee85 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -52,17 +52,6 @@ struct BlockJobDriver {
      * besides job->blk to the new AioContext.
      */
     void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
-
-    /*
-     * If the callback is not NULL, it will be invoked when the job has to be
-     * synchronously cancelled or completed; it should drain BlockDriverStates
-     * as required to ensure progress.
-     *
-     * Block jobs must use the default implementation for job_driver.drain,
-     * which will in turn call this callback after doing generic block job
-     * stuff.
-     */
-    void (*drain)(BlockJob *job);
 };
 
 /**
diff --git a/block/backup.c b/block/backup.c
index 715e1d3be8..7930004bbd 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -320,21 +320,6 @@ void backup_do_checkpoint(BlockJob *job, Error **errp)
     hbitmap_set(backup_job->copy_bitmap, 0, backup_job->len);
 }
 
-static void backup_drain(BlockJob *job)
-{
-    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
-
-    /* Need to keep a reference in case blk_drain triggers execution
-     * of backup_complete...
-     */
-    if (s->target) {
-        BlockBackend *target = s->target;
-        blk_ref(target);
-        blk_drain(target);
-        blk_unref(target);
-    }
-}
-
 static BlockErrorAction backup_error_action(BackupBlockJob *job,
                                             bool read, int error)
 {
@@ -493,8 +478,7 @@ static const BlockJobDriver backup_job_driver = {
         .commit                 = backup_commit,
         .abort                  = backup_abort,
         .clean                  = backup_clean,
-    },
-    .drain                  = backup_drain,
+    }
 };
 
 static int64_t backup_calculate_cluster_size(BlockDriverState *target,
diff --git a/block/mirror.c b/block/mirror.c
index 8cb75fb409..8456ccd89d 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -644,14 +644,11 @@ static int mirror_exit_common(Job *job)
     bdrv_ref(mirror_top_bs);
     bdrv_ref(target_bs);
 
-    /* Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
+    /*
+     * Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
      * inserting target_bs at s->to_replace, where we might not be able to get
      * these permissions.
-     *
-     * Note that blk_unref() alone doesn't necessarily drop permissions because
-     * we might be running nested inside mirror_drain(), which takes an extra
-     * reference, so use an explicit blk_set_perm() first. */
-    blk_set_perm(s->target, 0, BLK_PERM_ALL, &error_abort);
+     */
     blk_unref(s->target);
     s->target = NULL;
 
@@ -1143,21 +1140,6 @@ static bool mirror_drained_poll(BlockJob *job)
     return !!s->in_flight;
 }
 
-static void mirror_drain(BlockJob *job)
-{
-    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
-
-    /* Need to keep a reference in case blk_drain triggers execution
-     * of mirror_complete...
-     */
-    if (s->target) {
-        BlockBackend *target = s->target;
-        blk_ref(target);
-        blk_drain(target);
-        blk_unref(target);
-    }
-}
-
 static const BlockJobDriver mirror_job_driver = {
     .job_driver = {
         .instance_size          = sizeof(MirrorBlockJob),
@@ -1172,7 +1154,6 @@ static const BlockJobDriver mirror_job_driver = {
         .complete               = mirror_complete,
     },
     .drained_poll           = mirror_drained_poll,
-    .drain                  = mirror_drain,
 };
 
 static const BlockJobDriver commit_active_job_driver = {
@@ -1189,7 +1170,6 @@ static const BlockJobDriver commit_active_job_driver = {
         .complete               = mirror_complete,
     },
     .drained_poll           = mirror_drained_poll,
-    .drain                  = mirror_drain,
 };
 
 static void coroutine_fn
diff --git a/blockjob.c b/blockjob.c
index 20b7f557da..f64ee3197b 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -92,13 +92,25 @@ void block_job_free(Job *job)
 void block_job_drain(Job *job)
 {
     BlockJob *bjob = container_of(job, BlockJob, job);
-    const JobDriver *drv = job->driver;
-    BlockJobDriver *bjdrv = container_of(drv, BlockJobDriver, job_driver);
+    GSList *nodes = NULL, *el;
 
-    blk_drain(bjob->blk);
-    if (bjdrv->drain) {
-        bjdrv->drain(bjob);
+    for (el = bjob->nodes; el; el = el->next) {
+        BdrvChild *c = el->data;
+        bdrv_ref(c->bs);
+        nodes = g_slist_prepend(nodes, c->bs);
+    }
+
+    for (el = nodes; el; el = el->next) {
+        BlockDriverState *bs = el->data;
+        bdrv_drained_begin(bs);
     }
+    for (el = nodes; el; el = el->next) {
+        BlockDriverState *bs = el->data;
+        bdrv_drained_end(bs);
+        bdrv_unref(bs);
+    }
+
+    g_slist_free(nodes);
 }
 
 static char *child_job_get_parent_desc(BdrvChild *c)
-- 
2.18.0


Re: [Qemu-devel] [PATCH v4] blockjob: drain all job nodes in block_job_drain
Posted by Max Reitz 4 years, 8 months ago
On 02.08.19 11:52, Vladimir Sementsov-Ogievskiy wrote:
> Instead of draining additional nodes in each job's own code, do it in the
> common block_job_drain by draining all of the job's children.
> BlockJobDriver.drain becomes unused, so drop it altogether.
> 
> It's also a first step to finally get rid of blockjob->blk.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---

What do you think of Kevin’s comment that draining the block nodes may
actually be entirely unnecessary?

Max

Re: [Qemu-devel] [PATCH v4] blockjob: drain all job nodes in block_job_drain
Posted by Vladimir Sementsov-Ogievskiy 4 years, 8 months ago
15.08.2019 16:15, Max Reitz wrote:
> On 02.08.19 11:52, Vladimir Sementsov-Ogievskiy wrote:
>> Instead of draining additional nodes in each job's own code, do it in the
>> common block_job_drain by draining all of the job's children.
>> BlockJobDriver.drain becomes unused, so drop it altogether.
>>
>> It's also a first step to finally get rid of blockjob->blk.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
> 
> What do you think of Kevin’s comment that draining the block nodes may
> actually be entirely unnecessary?
> 

Hmm, I'm afraid there is nothing more I can do than try it, and if the
iotests show no real problems I'll send a patch.


-- 
Best regards,
Vladimir