[PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Eric Blake 2 years, 8 months ago
According to the NBD spec, a server advertising
NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
not see any cache inconsistencies: when properly separated by a single
flush, actions performed by one client will be visible to another
client, regardless of which client did the flush.  We satisfy these
conditions in qemu because our block layer serializes any overlapping
operations (see bdrv_find_conflicting_request and friends): no matter
which client performs a flush, parallel requests coming from distinct
NBD clients will still be well-ordered by the time they are passed on
to the underlying device, with no caching in qemu proper to allow
stale results to leak after a flush.

We don't want to advertise MULTI_CONN when we know that a second
client can connect (which is the default for qemu-nbd, but not for QMP
nbd-server-add), so it does require a QAPI addition.  But other than
that, the actual change to advertise the bit for writable servers is
fairly small.  The harder part of this patch is setting up an iotest
to demonstrate behavior of multiple NBD clients to a single server.
It might be possible with parallel qemu-io processes, but concisely
managing that in shell is painful.  I found it easier to do by relying
on the libnbd project's nbdsh, which means this test will be skipped
on platforms where that is not available.

Signed-off-by: Eric Blake <eblake@redhat.com>
---
 docs/interop/nbd.txt                       |  1 +
 docs/tools/qemu-nbd.rst                    |  3 +-
 qapi/block-export.json                     |  6 +-
 blockdev-nbd.c                             |  4 +
 nbd/server.c                               | 10 +--
 qemu-nbd.c                                 |  2 +
 MAINTAINERS                                |  1 +
 tests/qemu-iotests/tests/nbd-multiconn     | 91 ++++++++++++++++++++++
 tests/qemu-iotests/tests/nbd-multiconn.out | 14 ++++
 9 files changed, 124 insertions(+), 8 deletions(-)
 create mode 100755 tests/qemu-iotests/tests/nbd-multiconn
 create mode 100644 tests/qemu-iotests/tests/nbd-multiconn.out

diff --git a/docs/interop/nbd.txt b/docs/interop/nbd.txt
index 10ce098a29bf..d03910f1e9eb 100644
--- a/docs/interop/nbd.txt
+++ b/docs/interop/nbd.txt
@@ -68,3 +68,4 @@ NBD_CMD_BLOCK_STATUS for "qemu:dirty-bitmap:", NBD_CMD_CACHE
 * 4.2: NBD_FLAG_CAN_MULTI_CONN for shareable read-only exports,
 NBD_CMD_FLAG_FAST_ZERO
 * 5.2: NBD_CMD_BLOCK_STATUS for "qemu:allocation-depth"
+* 6.2: NBD_FLAG_CAN_MULTI_CONN for shareable writable exports
diff --git a/docs/tools/qemu-nbd.rst b/docs/tools/qemu-nbd.rst
index 5643da26e982..81be32164a55 100644
--- a/docs/tools/qemu-nbd.rst
+++ b/docs/tools/qemu-nbd.rst
@@ -138,8 +138,7 @@ driver options if ``--image-opts`` is specified.
 .. option:: -e, --shared=NUM

   Allow up to *NUM* clients to share the device (default
-  ``1``), 0 for unlimited. Safe for readers, but for now,
-  consistency is not guaranteed between multiple writers.
+  ``1``), 0 for unlimited.

 .. option:: -t, --persistent

diff --git a/qapi/block-export.json b/qapi/block-export.json
index 0ed63442a819..b2085a9fdd4c 100644
--- a/qapi/block-export.json
+++ b/qapi/block-export.json
@@ -95,11 +95,15 @@
 #                    the metadata context name "qemu:allocation-depth" to
 #                    inspect allocation details. (since 5.2)
 #
+# @shared: True if the server should advertise that multiple clients may
+#          connect, default false. (since 6.2)
+#
 # Since: 5.2
 ##
 { 'struct': 'BlockExportOptionsNbd',
   'base': 'BlockExportOptionsNbdBase',
-  'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool' } }
+  'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool',
+             '*shared': 'bool' } }

 ##
 # @BlockExportOptionsVhostUserBlk:
diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index bdfa7ed3a5a9..258feaa76e02 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -221,6 +221,10 @@ void qmp_nbd_server_add(NbdServerAddOptions *arg, Error **errp)
         QAPI_LIST_PREPEND(export_opts->u.nbd.bitmaps, g_strdup(arg->bitmap));
     }

+    /* nbd-server-add always permits parallel clients */
+    export_opts->u.nbd.has_shared = true;
+    export_opts->u.nbd.shared = true;
+
     /*
      * nbd-server-add doesn't complain when a read-only device should be
      * exported as writable, but simply downgrades it. This is an error with
diff --git a/nbd/server.c b/nbd/server.c
index 3927f7789dcf..1646796a4798 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1,5 +1,5 @@
 /*
- *  Copyright (C) 2016-2020 Red Hat, Inc.
+ *  Copyright (C) 2016-2021 Red Hat, Inc.
  *  Copyright (C) 2005  Anthony Liguori <anthony@codemonkey.ws>
  *
  *  Network Block Device Server Side
@@ -1634,7 +1634,7 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
     int64_t size;
     uint64_t perm, shared_perm;
     bool readonly = !exp_args->writable;
-    bool shared = !exp_args->writable;
+    bool shared = arg->shared;
     strList *bitmaps;
     size_t i;
     int ret;
@@ -1685,11 +1685,11 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
     exp->description = g_strdup(arg->description);
     exp->nbdflags = (NBD_FLAG_HAS_FLAGS | NBD_FLAG_SEND_FLUSH |
                      NBD_FLAG_SEND_FUA | NBD_FLAG_SEND_CACHE);
+    if (shared) {
+        exp->nbdflags |= NBD_FLAG_CAN_MULTI_CONN;
+    }
     if (readonly) {
         exp->nbdflags |= NBD_FLAG_READ_ONLY;
-        if (shared) {
-            exp->nbdflags |= NBD_FLAG_CAN_MULTI_CONN;
-        }
     } else {
         exp->nbdflags |= (NBD_FLAG_SEND_TRIM | NBD_FLAG_SEND_WRITE_ZEROES |
                           NBD_FLAG_SEND_FAST_ZERO);
diff --git a/qemu-nbd.c b/qemu-nbd.c
index f68fcceadd9e..198b1c55729d 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -1084,6 +1084,8 @@ int main(int argc, char **argv)
             .bitmaps              = bitmaps,
             .has_allocation_depth = alloc_depth,
             .allocation_depth     = alloc_depth,
+            .has_shared           = true,
+            .shared               = shared != 1,
         },
     };
     blk_exp_add(export_opts, &error_fatal);
diff --git a/MAINTAINERS b/MAINTAINERS
index 6b3697962c1b..6eab82e6b982 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3142,6 +3142,7 @@ F: qemu-nbd.*
 F: blockdev-nbd.c
 F: docs/interop/nbd.txt
 F: docs/tools/qemu-nbd.rst
+F: tests/qemu-iotests/tests/*nbd*
 T: git https://repo.or.cz/qemu/ericb.git nbd
 T: git https://src.openvz.org/scm/~vsementsov/qemu.git nbd

diff --git a/tests/qemu-iotests/tests/nbd-multiconn b/tests/qemu-iotests/tests/nbd-multiconn
new file mode 100755
index 000000000000..66b410546e2f
--- /dev/null
+++ b/tests/qemu-iotests/tests/nbd-multiconn
@@ -0,0 +1,91 @@
+#!/usr/bin/env bash
+# group: rw auto quick
+#
+# Test that qemu-nbd MULTI_CONN works
+#
+# Copyright (C) 2018-2021 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+seq="$(basename $0)"
+echo "QA output created by $seq"
+
+status=1 # failure is the default!
+
+_cleanup()
+{
+    _cleanup_test_img
+    nbd_server_stop
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+cd ..
+. ./common.rc
+. ./common.filter
+. ./common.nbd
+
+_supported_fmt qcow2
+_supported_proto file
+_supported_os Linux
+_require_command QEMU_NBD
+
+# Parallel client connections are easier with libnbd than qemu-io:
+if ! (nbdsh --version) >/dev/null 2>&1; then
+    _notrun "nbdsh utility required, skipped this test"
+fi
+
+echo
+echo "=== Initial image setup ==="
+echo
+
+_make_test_img 4M
+$QEMU_IO -c 'w -P 1 0 2M' -c 'w -P 2 2M 2M' "$TEST_IMG" | _filter_qemu_io
+IMG="driver=nbd,server.type=unix,server.path=$nbd_unix_socket"
+nbd_server_start_unix_socket -f qcow2 -e 5 "$TEST_IMG"
+
+echo
+echo "=== Demonstrate parallel clients ==="
+echo
+
+export nbd_unix_socket
+nbdsh -c '
+import os
+sock = os.getenv("nbd_unix_socket")
+h = []
+
+for i in range(3):
+  h.append(nbd.NBD())
+  h[i].connect_unix(sock)
+  assert h[i].can_multi_conn()
+
+buf1 = h[0].pread(1024 * 1024, 0)
+if buf1 != b"\x01" * 1024 * 1024:
+  print("Unexpected initial read")
+buf2 = b"\x03" * 1024 * 1024
+h[1].pwrite(buf2, 0)
+h[2].flush()
+buf1 = h[0].pread(1024 * 1024, 0)
+if buf1 == buf2:
+  print("Flush appears to be consistent across connections")
+
+for i in range(3):
+  h[i].shutdown()
+'
+
+# success, all done
+echo '*** done'
+rm -f $seq.full
+status=0
diff --git a/tests/qemu-iotests/tests/nbd-multiconn.out b/tests/qemu-iotests/tests/nbd-multiconn.out
new file mode 100644
index 000000000000..4554099e4d62
--- /dev/null
+++ b/tests/qemu-iotests/tests/nbd-multiconn.out
@@ -0,0 +1,14 @@
+QA output created by nbd-multiconn
+
+=== Initial image setup ===
+
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=4194304
+wrote 2097152/2097152 bytes at offset 0
+2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 2097152/2097152 bytes at offset 2097152
+2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+=== Demonstrate parallel clients ===
+
+Flush appears to be consistent across connections
+*** done
-- 
2.31.1


Re: [PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Vladimir Sementsov-Ogievskiy 2 years, 8 months ago
27.08.2021 18:09, Eric Blake wrote:
> According to the NBD spec, a server advertising
> NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
> not see any cache inconsistencies: when properly separated by a single
> flush, actions performed by one client will be visible to another
> client, regardless of which client did the flush.  We satisfy these
> conditions in qemu because our block layer serializes any overlapping
> operations (see bdrv_find_conflicting_request and friends)

Not any. We serialize only write operations that are not aligned to the request_alignment of bs (see the bdrv_make_request_serialising() call in bdrv_co_pwritev_part). So most overlapping operations actually remain overlapping. And that's correct: it's not qemu's job to resolve overlapping requests. We resolve them only when we are responsible for the overlap appearing in the first place: when we align requests up.

-- 
Best regards,
Vladimir

Re: [PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Eric Blake 2 years, 8 months ago
On Fri, Aug 27, 2021 at 07:58:10PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 27.08.2021 18:09, Eric Blake wrote:
> > According to the NBD spec, a server advertising
> > NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
> > not see any cache inconsistencies: when properly separated by a single
> > flush, actions performed by one client will be visible to another
> > client, regardless of which client did the flush.  We satisfy these
> > conditions in qemu because our block layer serializes any overlapping
> > operations (see bdrv_find_conflicting_request and friends)
> 
> Not any. We serialize only write operations that are not aligned to the request_alignment of bs (see the bdrv_make_request_serialising() call in bdrv_co_pwritev_part). So most overlapping operations actually remain overlapping. And that's correct: it's not qemu's job to resolve overlapping requests. We resolve them only when we are responsible for the overlap appearing in the first place: when we align requests up.

I welcome improvements on the wording.  Maybe what I should be
emphasizing is that even when there are overlapping requests, qemu
itself is multiplexing all of those requests through a single
interface into the backend, without any caching on qemu's part, and
relying on the consistency of the flush operation into that backend.

From a parallelism perspective, in file-posix.c, we don't distinguish
two pwrite() syscalls made (potentially out-of-order) by a single BDS
client in two coroutines from two pwrite() syscalls made by two
separate BDS clients.  Either way, those two syscalls may both be
asynchronous, but both go through a single interface into the kernel's
view of the underlying filesystem or block device.  And we implement
flush via fdatasync(), for which the kernel already provides pretty
strong cross-thread consistency guarantees.
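
(As a rough user-space analogy - not qemu code, just a sketch of the
shared-cache point, with the path made up - two writers funnelled
through one file descriptor are all covered by a single fdatasync():)

    import os, threading

    fd = os.open("/tmp/scratch.img", os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, 4 * 1024 * 1024)

    def writer(offset, byte):
        # both "clients" write through the same kernel-side view
        os.pwrite(fd, byte * 1024 * 1024, offset)

    t1 = threading.Thread(target=writer, args=(0, b"\x01"))
    t2 = threading.Thread(target=writer, args=(2 * 1024 * 1024, b"\x02"))
    t1.start(); t2.start(); t1.join(); t2.join()

    os.fdatasync(fd)   # one flush, regardless of who issued the writes
    assert os.pread(fd, 1, 0) == b"\x01"
    os.close(fd)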

But I am less certain whether we are guaranteed cross-consistency
like that for all protocol drivers.  Is there any block driver (most
likely a networked one) where, even though we use the same API for all
asynchronous access within the qemu coroutines, under the hood those
calls can end up diverging to different destinations (say, due to
network round-robin effects) and leave us seeing cache-inconsistent
views?  That is, can we ever encounter this:

-> read()
  -> kicks off networked storage call that resolves to host X
    -> host X caches the read
  <- reply
-> write()
  -> kicks off networked storage call that resolves to host Y
    -> host Y updates the file system
  <- reply
-> flush()
  -> kicks off networked storage call that resolves to host Y
    -> host Y starts flushing, but replies early
  <- reply
-> read()
  -> kicks off networked storage call that resolves to host X
    -> host X does not see effects of Y's flush yet, returns stale data

If we can encounter that, then in those situations we must not
advertise MULTI_CONN.  But I'm confident that file-posix.c does not
have that problem, and even if another driver did have that problem
(where our single API access can result in cache-inconsistent views
over the protocol, rather than flush really being effective to all
further API access to that driver), you'd think we'd be aware of it.
However, if we DO know of a place where that is the case, then now is
the time to design our QAPI control over whether to advertise NBD's
MULTI_CONN bit based on whether the block layer can warn us about a
particular block driver NOT being safe.

But unless we come up with such a scenario, maybe all I need here is
better wording to put in the commit message to state why we think we
ARE safe in advertising MULTI_CONN.  Remember, the NBD flag is only
about how strong flush calls are: it is NOT required that overlapping
write requests have any particular behavior (that has always been up
to the client to be careful about, and qemu need not go out of its way
to prevent client stupidity with overlapping writes); rather, actions
whose reply completed prior to FLUSH must then be visible to actions
started after the reply to FLUSH.
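
In libnbd terms the contract amounts to no more than this minimal
sketch (socket path and size made up; synchronous calls so the reply
ordering is explicit):

    import nbd

    sock = "/tmp/nbd.sock"   # assumed: a writable multi-conn export served here
    a, b, c = nbd.NBD(), nbd.NBD(), nbd.NBD()
    for h in (a, b, c):
        h.connect_unix(sock)

    data = b"\x05" * 65536
    a.pwrite(data, 0)    # synchronous: returns only after the server replied
    b.flush()            # started strictly after A's reply was received
    assert c.pread(len(data), 0) == data   # must now observe the write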

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


Re: [PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Vladimir Sementsov-Ogievskiy 2 years, 7 months ago
27.08.2021 21:45, Eric Blake wrote:
> On Fri, Aug 27, 2021 at 07:58:10PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> 27.08.2021 18:09, Eric Blake wrote:
>>> According to the NBD spec, a server advertising
>>> NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
>>> not see any cache inconsistencies: when properly separated by a single
>>> flush, actions performed by one client will be visible to another
>>> client, regardless of which client did the flush.  We satisfy these
>>> conditions in qemu because our block layer serializes any overlapping
>>> operations (see bdrv_find_conflicting_request and friends)
>>
>> Not any. We serialize only write operations that are not aligned to the request_alignment of bs (see the bdrv_make_request_serialising() call in bdrv_co_pwritev_part). So most overlapping operations actually remain overlapping. And that's correct: it's not qemu's job to resolve overlapping requests. We resolve them only when we are responsible for the overlap appearing in the first place: when we align requests up.
> 
> I welcome improvements on the wording.  Maybe what I should be
> emphasizing is that even when there are overlapping requests, qemu
> itself is multiplexing all of those requests through a single
> interface into the backend, without any caching on qemu's part, and
> relying on the consistency of the flush operation into that backend.
> 
>  From a parallelism perspective, in file-posix.c, we don't distinguish
> two pwrite() syscalls made (potentially out-of-order) by a single BDS
> client in two coroutines from two pwrite() syscalls made by two
> separate BDS clients.  Either way, those two syscalls may both be
> asynchronous, but both go through a single interface into the kernel's
> view of the underlying filesystem or block device.  And we implement
> flush via fdatasync(), for which the kernel already provides pretty
> strong cross-thread consistency guarantees.
> 
> But I am less certain whether we are guaranteed cross-consistency
> like that for all protocol drivers.  Is there any block driver (most
> likely a networked one) where, even though we use the same API for all
> asynchronous access within the qemu coroutines, under the hood those
> calls can end up diverging to different destinations (say, due to
> network round-robin effects) and leave us seeing cache-inconsistent
> views?  That is, can we ever encounter this:
> 
> -> read()
>    -> kicks off networked storage call that resolves to host X
>      -> host X caches the read
>    <- reply
> -> write()
>    -> kicks off networked storage call that resolves to host Y
>      -> host Y updates the file system
>    <- reply
> -> flush()
>    -> kicks off networked storage call that resolves to host Y
>      -> host Y starts flushing, but replies early
>    <- reply
> -> read()
>    -> kicks off networked storage call that resolves to host X
>      -> host X does not see effects of Y's flush yet, returns stale data
> 
> If we can encounter that, then in those situations we must not
> advertise MULTI_CONN.  But I'm confident that file-posix.c does not
> have that problem, and even if another driver did have that problem
> (where our single API access can result in cache-inconsistent views
> over the protocol, rather than flush really being effective to all
> further API access to that driver), you'd think we'd be aware of it.
> However, if we DO know of a place where that is the case, then now is
> the time to design our QAPI control over whether to advertise NBD's
> MULTI_CONN bit based on whether the block layer can warn us about a
> particular block driver NOT being safe.
> 
> But unless we come up with such a scenario, maybe all I need here is
> better wording to put in the commit message to state why we think we
> ARE safe in advertising MULTI_CONN.  Remember, the NBD flag is only
> about how strong flush calls are: it is NOT required that overlapping
> write requests have any particular behavior (that has always been up
> to the client to be careful about, and qemu need not go out of its way
> to prevent client stupidity with overlapping writes); rather, actions
> whose reply completed prior to FLUSH must then be visible to actions
> started after the reply to FLUSH.
> 

Reasonable, I agree

-- 
Best regards,
Vladimir

Re: [PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Kevin Wolf 2 years, 6 months ago
On 27.08.2021 at 17:09, Eric Blake wrote:
> According to the NBD spec, a server advertising
> NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
> not see any cache inconsistencies: when properly separated by a single
> flush, actions performed by one client will be visible to another
> client, regardless of which client did the flush.  We satisfy these
> conditions in qemu because our block layer serializes any overlapping
> operations (see bdrv_find_conflicting_request and friends): no matter
> which client performs a flush, parallel requests coming from distinct
> NBD clients will still be well-ordered by the time they are passed on
> to the underlying device, with no caching in qemu proper to allow
> stale results to leak after a flush.
> 
> We don't want to advertise MULTI_CONN when we know that a second
> client can connect (which is the default for qemu-nbd, but not for QMP
> nbd-server-add),

Do you mean when a second client _can't_ connect?

> so it does require a QAPI addition.  But other than
> that, the actual change to advertise the bit for writable servers is
> fairly small.  The harder part of this patch is setting up an iotest
> to demonstrate behavior of multiple NBD clients to a single server.
> It might be possible with parallel qemu-io processes, but concisely
> managing that in shell is painful.

I think it should be fairly straightforward in a Python test case.

Another option is using a single QEMU or QSD instance that has multiple
-blockdev for the same NBD server. For the server these are multiple
clients, even if all connections come from a single process.
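
Untested sketch of that single-process variant, in the style of the
Python iotests (node names, export name and socket path invented; I'm
hand-waving the helper API):

    import iotests

    vm = iotests.VM()
    vm.launch()
    sock = iotests.file_path('nbd.sock')

    # two client blockdevs pointing at one and the same export; the NBD
    # server sees two connections even though both come from this one VM
    for name in ('conn0', 'conn1'):
        vm.qmp('blockdev-add', driver='nbd', node_name=name,
               export='disk0', server={'type': 'unix', 'path': sock})
    vm.shutdown()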

> I found it easier to do by relying
> on the libnbd project's nbdsh, which means this test will be skipped
> on platforms where that is not available.
> 
> Signed-off-by: Eric Blake <eblake@redhat.com>

> diff --git a/docs/interop/nbd.txt b/docs/interop/nbd.txt
> index 10ce098a29bf..d03910f1e9eb 100644
> --- a/docs/interop/nbd.txt
> +++ b/docs/interop/nbd.txt
> @@ -68,3 +68,4 @@ NBD_CMD_BLOCK_STATUS for "qemu:dirty-bitmap:", NBD_CMD_CACHE
>  * 4.2: NBD_FLAG_CAN_MULTI_CONN for shareable read-only exports,
>  NBD_CMD_FLAG_FAST_ZERO
>  * 5.2: NBD_CMD_BLOCK_STATUS for "qemu:allocation-depth"
> +* 6.2: NBD_FLAG_CAN_MULTI_CONN for shareable writable exports
> diff --git a/docs/tools/qemu-nbd.rst b/docs/tools/qemu-nbd.rst
> index 5643da26e982..81be32164a55 100644
> --- a/docs/tools/qemu-nbd.rst
> +++ b/docs/tools/qemu-nbd.rst
> @@ -138,8 +138,7 @@ driver options if ``--image-opts`` is specified.
>  .. option:: -e, --shared=NUM
> 
>    Allow up to *NUM* clients to share the device (default
> -  ``1``), 0 for unlimited. Safe for readers, but for now,
> -  consistency is not guaranteed between multiple writers.
> +  ``1``), 0 for unlimited.
> 
>  .. option:: -t, --persistent

If qemu-nbd supports a maximum number of connections rather than just a
bool...

> diff --git a/qapi/block-export.json b/qapi/block-export.json
> index 0ed63442a819..b2085a9fdd4c 100644
> --- a/qapi/block-export.json
> +++ b/qapi/block-export.json
> @@ -95,11 +95,15 @@
>  #                    the metadata context name "qemu:allocation-depth" to
>  #                    inspect allocation details. (since 5.2)
>  #
> +# @shared: True if the server should advertise that multiple clients may
> +#          connect, default false. (since 6.2)
> +#
>  # Since: 5.2
>  ##
>  { 'struct': 'BlockExportOptionsNbd',
>    'base': 'BlockExportOptionsNbdBase',
> -  'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool' } }
> +  'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool',
> +             '*shared': 'bool' } }

...wouldn't it be better to mirror this in the QAPI interface?

I think eventually we want to add everything missing to the built-in NBD
server and then change qemu-nbd to use it instead of managing the
connections itself. So I'm not sure if diverging here is a good idea.

Kevin


Re: [PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Eric Blake 2 years, 5 months ago
On Thu, Oct 28, 2021 at 04:37:36PM +0200, Kevin Wolf wrote:
> On 27.08.2021 at 17:09, Eric Blake wrote:
> > According to the NBD spec, a server advertising
> > NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
> > not see any cache inconsistencies: when properly separated by a single
> > flush, actions performed by one client will be visible to another
> > client, regardless of which client did the flush.  We satisfy these
> > conditions in qemu because our block layer serializes any overlapping
> > operations (see bdrv_find_conflicting_request and friends): no matter
> > which client performs a flush, parallel requests coming from distinct
> > NBD clients will still be well-ordered by the time they are passed on
> > to the underlying device, with no caching in qemu proper to allow
> > stale results to leak after a flush.
> > 
> > We don't want to advertise MULTI_CONN when we know that a second
> > client can connect (which is the default for qemu-nbd, but not for QMP
> > nbd-server-add),
> 
> Do you mean when a second client _can't_ connect?

Oops, yes. The default for qemu-nbd is a single client (you have to
request -e for more than one), so a second client can't connect; for
nbd-server-add it is unlimited clients [1].

> 
> > so it does require a QAPI addition.  But other than
> > that, the actual change to advertise the bit for writable servers is
> > fairly small.  The harder part of this patch is setting up an iotest
> > to demonstrate behavior of multiple NBD clients to a single server.
> > It might be possible with parallel qemu-io processes, but concisely
> > managing that in shell is painful.
> 
> I think it should be fairly straightforward in a Python test case.

Probably, but my Python is rather weak for writing such a case
off-hand from scratch. Is there an existing test that you are aware of
that might be easy to copy-and-paste from?

> 
> Another option is using a single QEMU or QSD instance that has multiple
> -blockdev for the same NBD server. For the server these are multiple
> clients, even if all connnection come from a single process.
> 
> > I found it easier to do by relying
> > on the libnbd project's nbdsh, which means this test will be skipped
> > on platforms where that is not available.
> > 
> > Signed-off-by: Eric Blake <eblake@redhat.com>
> 
> > diff --git a/docs/interop/nbd.txt b/docs/interop/nbd.txt
> > index 10ce098a29bf..d03910f1e9eb 100644
> > --- a/docs/interop/nbd.txt
> > +++ b/docs/interop/nbd.txt
> > @@ -68,3 +68,4 @@ NBD_CMD_BLOCK_STATUS for "qemu:dirty-bitmap:", NBD_CMD_CACHE
> >  * 4.2: NBD_FLAG_CAN_MULTI_CONN for shareable read-only exports,
> >  NBD_CMD_FLAG_FAST_ZERO
> >  * 5.2: NBD_CMD_BLOCK_STATUS for "qemu:allocation-depth"
> > +* 6.2: NBD_FLAG_CAN_MULTI_CONN for shareable writable exports
> > diff --git a/docs/tools/qemu-nbd.rst b/docs/tools/qemu-nbd.rst
> > index 5643da26e982..81be32164a55 100644
> > --- a/docs/tools/qemu-nbd.rst
> > +++ b/docs/tools/qemu-nbd.rst
> > @@ -138,8 +138,7 @@ driver options if ``--image-opts`` is specified.
> >  .. option:: -e, --shared=NUM
> > 
> >    Allow up to *NUM* clients to share the device (default
> > -  ``1``), 0 for unlimited. Safe for readers, but for now,
> > -  consistency is not guaranteed between multiple writers.
> > +  ``1``), 0 for unlimited.
> > 
> >  .. option:: -t, --persistent
> 
> If qemu-nbd supports a maximum number of connections rather than just a
> bool...
> 
> > diff --git a/qapi/block-export.json b/qapi/block-export.json
> > index 0ed63442a819..b2085a9fdd4c 100644
> > --- a/qapi/block-export.json
> > +++ b/qapi/block-export.json
> > @@ -95,11 +95,15 @@
> >  #                    the metadata context name "qemu:allocation-depth" to
> >  #                    inspect allocation details. (since 5.2)
> >  #
> > +# @shared: True if the server should advertise that multiple clients may
> > +#          connect, default false. (since 6.2)
> > +#
> >  # Since: 5.2
> >  ##
> >  { 'struct': 'BlockExportOptionsNbd',
> >    'base': 'BlockExportOptionsNbdBase',
> > -  'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool' } }
> > +  'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool',
> > +             '*shared': 'bool' } }
> 
> ...wouldn't it be better to mirror this in the QAPI interface?

Yeah, now that you mention it.  (In fact, before I got to this part of
the email, at point [1] above I was trying to look in
block-export.json to see whether there is any way to use QMP to expose
less than unlimited clients, and couldn't find it - because it isn't
there)

> 
> I think eventually we want to add everything missing to the built-in NBD
> server and then change qemu-nbd to use it instead of managing the
> connections itself. So I'm not sure if diverging here is a good idea.

That argument alone makes it sound like it is worth respinning this
series to at least pick up on exposing max-clients through QMP, so
that qemu-nbd and QMP have the same knobs (even if those knobs have
different defaults).
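
Perhaps something along these lines in block-export.json (option name
and type are just a strawman for the respin, not final):

    { 'struct': 'BlockExportOptionsNbd',
      'base': 'BlockExportOptionsNbdBase',
      'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool',
                '*max-connections': 'uint32' } }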

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


Re: [PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Vladimir Sementsov-Ogievskiy 2 years, 7 months ago
27.08.2021 18:09, Eric Blake wrote:
> According to the NBD spec, a server advertising
> NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
> not see any cache inconsistencies: when properly separated by a single
> flush, actions performed by one client will be visible to another
> client, regardless of which client did the flush.  We satisfy these
> conditions in qemu because our block layer serializes any overlapping
> operations (see bdrv_find_conflicting_request and friends): no matter
> which client performs a flush, parallel requests coming from distinct
> NBD clients will still be well-ordered by the time they are passed on
> to the underlying device, with no caching in qemu proper to allow
> stale results to leak after a flush.
> 
> We don't want to advertise MULTI_CONN when we know that a second
> client can connect (which is the default for qemu-nbd, but not for QMP
> nbd-server-add), so it does require a QAPI addition.  But other than
> that, the actual change to advertise the bit for writable servers is
> fairly small.  The harder part of this patch is setting up an iotest
> to demonstrate behavior of multiple NBD clients to a single server.
> It might be possible with parallel qemu-io processes, but concisely
> managing that in shell is painful.  I found it easier to do by relying
> on the libnbd project's nbdsh, which means this test will be skipped
> on platforms where that is not available.
> 
> Signed-off-by: Eric Blake <eblake@redhat.com>
> ---
>   docs/interop/nbd.txt                       |  1 +
>   docs/tools/qemu-nbd.rst                    |  3 +-
>   qapi/block-export.json                     |  6 +-
>   blockdev-nbd.c                             |  4 +
>   nbd/server.c                               | 10 +--
>   qemu-nbd.c                                 |  2 +
>   MAINTAINERS                                |  1 +
>   tests/qemu-iotests/tests/nbd-multiconn     | 91 ++++++++++++++++++++++
>   tests/qemu-iotests/tests/nbd-multiconn.out | 14 ++++
>   9 files changed, 124 insertions(+), 8 deletions(-)
>   create mode 100755 tests/qemu-iotests/tests/nbd-multiconn
>   create mode 100644 tests/qemu-iotests/tests/nbd-multiconn.out
> 
> diff --git a/docs/interop/nbd.txt b/docs/interop/nbd.txt
> index 10ce098a29bf..d03910f1e9eb 100644
> --- a/docs/interop/nbd.txt
> +++ b/docs/interop/nbd.txt
> @@ -68,3 +68,4 @@ NBD_CMD_BLOCK_STATUS for "qemu:dirty-bitmap:", NBD_CMD_CACHE
>   * 4.2: NBD_FLAG_CAN_MULTI_CONN for shareable read-only exports,
>   NBD_CMD_FLAG_FAST_ZERO
>   * 5.2: NBD_CMD_BLOCK_STATUS for "qemu:allocation-depth"
> +* 6.2: NBD_FLAG_CAN_MULTI_CONN for shareable writable exports
> diff --git a/docs/tools/qemu-nbd.rst b/docs/tools/qemu-nbd.rst
> index 5643da26e982..81be32164a55 100644
> --- a/docs/tools/qemu-nbd.rst
> +++ b/docs/tools/qemu-nbd.rst
> @@ -138,8 +138,7 @@ driver options if ``--image-opts`` is specified.
>   .. option:: -e, --shared=NUM
> 
>     Allow up to *NUM* clients to share the device (default
> -  ``1``), 0 for unlimited. Safe for readers, but for now,
> -  consistency is not guaranteed between multiple writers.
> +  ``1``), 0 for unlimited.
> 
>   .. option:: -t, --persistent
> 
> diff --git a/qapi/block-export.json b/qapi/block-export.json
> index 0ed63442a819..b2085a9fdd4c 100644
> --- a/qapi/block-export.json
> +++ b/qapi/block-export.json
> @@ -95,11 +95,15 @@
>   #                    the metadata context name "qemu:allocation-depth" to
>   #                    inspect allocation details. (since 5.2)
>   #
> +# @shared: True if the server should advertise that multiple clients may
> +#          connect, default false. (since 6.2)
> +#
>   # Since: 5.2
>   ##
>   { 'struct': 'BlockExportOptionsNbd',
>     'base': 'BlockExportOptionsNbdBase',
> -  'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool' } }
> +  'data': { '*bitmaps': ['str'], '*allocation-depth': 'bool',
> +             '*shared': 'bool' } }
> 
>   ##
>   # @BlockExportOptionsVhostUserBlk:
> diff --git a/blockdev-nbd.c b/blockdev-nbd.c
> index bdfa7ed3a5a9..258feaa76e02 100644
> --- a/blockdev-nbd.c
> +++ b/blockdev-nbd.c
> @@ -221,6 +221,10 @@ void qmp_nbd_server_add(NbdServerAddOptions *arg, Error **errp)
>           QAPI_LIST_PREPEND(export_opts->u.nbd.bitmaps, g_strdup(arg->bitmap));
>       }
> 
> +    /* nbd-server-add always permits parallel clients */
> +    export_opts->u.nbd.has_shared = true;
> +    export_opts->u.nbd.shared = true;

Hmm, I don't follow. Before the patch we allowed MULTI_CONN only for read-only exports. Now with nbd-server-add we allow it for any kind of export?

> +
>       /*
>        * nbd-server-add doesn't complain when a read-only device should be
>        * exported as writable, but simply downgrades it. This is an error with
> diff --git a/nbd/server.c b/nbd/server.c
> index 3927f7789dcf..1646796a4798 100644
> --- a/nbd/server.c
> +++ b/nbd/server.c
> @@ -1,5 +1,5 @@
>   /*
> - *  Copyright (C) 2016-2020 Red Hat, Inc.
> + *  Copyright (C) 2016-2021 Red Hat, Inc.
>    *  Copyright (C) 2005  Anthony Liguori <anthony@codemonkey.ws>
>    *
>    *  Network Block Device Server Side
> @@ -1634,7 +1634,7 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
>       int64_t size;
>       uint64_t perm, shared_perm;
>       bool readonly = !exp_args->writable;
> -    bool shared = !exp_args->writable;
> +    bool shared = arg->shared;
>       strList *bitmaps;
>       size_t i;
>       int ret;
> @@ -1685,11 +1685,11 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
>       exp->description = g_strdup(arg->description);
>       exp->nbdflags = (NBD_FLAG_HAS_FLAGS | NBD_FLAG_SEND_FLUSH |
>                        NBD_FLAG_SEND_FUA | NBD_FLAG_SEND_CACHE);
> +    if (shared) {
> +        exp->nbdflags |= NBD_FLAG_CAN_MULTI_CONN;
> +    }
>       if (readonly) {
>           exp->nbdflags |= NBD_FLAG_READ_ONLY;
> -        if (shared) {
> -            exp->nbdflags |= NBD_FLAG_CAN_MULTI_CONN;
> -        }

And here, before the patch we allowed MULTI_CONN for any read-only export. Now, in the case of block-export-add, the default behavior for read-only exports changes. Is that OK?

A more compatible way would be for the new option to modify only the behavior of writable exports, keeping read-only exports as they are.

>       } else {
>           exp->nbdflags |= (NBD_FLAG_SEND_TRIM | NBD_FLAG_SEND_WRITE_ZEROES |
>                             NBD_FLAG_SEND_FAST_ZERO);
> diff --git a/qemu-nbd.c b/qemu-nbd.c
> index f68fcceadd9e..198b1c55729d 100644
> --- a/qemu-nbd.c
> +++ b/qemu-nbd.c
> @@ -1084,6 +1084,8 @@ int main(int argc, char **argv)
>               .bitmaps              = bitmaps,
>               .has_allocation_depth = alloc_depth,
>               .allocation_depth     = alloc_depth,
> +            .has_shared           = true,
> +            .shared               = shared != 1,
>           },
>       };
>       blk_exp_add(export_opts, &error_fatal);
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 6b3697962c1b..6eab82e6b982 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3142,6 +3142,7 @@ F: qemu-nbd.*
>   F: blockdev-nbd.c
>   F: docs/interop/nbd.txt
>   F: docs/tools/qemu-nbd.rst
> +F: tests/qemu-iotests/tests/*nbd*
>   T: git https://repo.or.cz/qemu/ericb.git nbd
>   T: git https://src.openvz.org/scm/~vsementsov/qemu.git nbd
> 
> diff --git a/tests/qemu-iotests/tests/nbd-multiconn b/tests/qemu-iotests/tests/nbd-multiconn
> new file mode 100755
> index 000000000000..66b410546e2f
> --- /dev/null
> +++ b/tests/qemu-iotests/tests/nbd-multiconn
> @@ -0,0 +1,91 @@
> +#!/usr/bin/env bash
> +# group: rw auto quick
> +#
> +# Test that qemu-nbd MULTI_CONN works
> +#
> +# Copyright (C) 2018-2021 Red Hat, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +#
> +
> +seq="$(basename $0)"
> +echo "QA output created by $seq"
> +
> +status=1 # failure is the default!
> +
> +_cleanup()
> +{
> +    _cleanup_test_img
> +    nbd_server_stop
> +}
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +# get standard environment, filters and checks
> +cd ..
> +. ./common.rc
> +. ./common.filter
> +. ./common.nbd
> +
> +_supported_fmt qcow2
> +_supported_proto file
> +_supported_os Linux
> +_require_command QEMU_NBD
> +
> +# Parallel client connections are easier with libnbd than qemu-io:
> +if ! (nbdsh --version) >/dev/null 2>&1; then
> +    _notrun "nbdsh utility required, skipped this test"
> +fi
> +
> +echo
> +echo "=== Initial image setup ==="
> +echo
> +
> +_make_test_img 4M
> +$QEMU_IO -c 'w -P 1 0 2M' -c 'w -P 2 2M 2M' "$TEST_IMG" | _filter_qemu_io
> +IMG="driver=nbd,server.type=unix,server.path=$nbd_unix_socket"
> +nbd_server_start_unix_socket -f qcow2 -e 5 "$TEST_IMG"
> +
> +echo
> +echo "=== Demonstrate parallel clients ==="
> +echo
> +
> +export nbd_unix_socket
> +nbdsh -c '
> +import os
> +sock = os.getenv("nbd_unix_socket")
> +h = []
> +
> +for i in range(3):
> +  h.append(nbd.NBD())
> +  h[i].connect_unix(sock)
> +  assert h[i].can_multi_conn()
> +
> +buf1 = h[0].pread(1024 * 1024, 0)
> +if buf1 != b"\x01" * 1024 * 1024:
> +  print("Unexpected initial read")
> +buf2 = b"\x03" * 1024 * 1024
> +h[1].pwrite(buf2, 0)
> +h[2].flush()
> +buf1 = h[0].pread(1024 * 1024, 0)
> +if buf1 == buf2:
> +  print("Flush appears to be consistent across connections")
> +
> +for i in range(3):
> +  h[i].shutdown()
> +'
> +
> +# success, all done
> +echo '*** done'
> +rm -f $seq.full
> +status=0
> diff --git a/tests/qemu-iotests/tests/nbd-multiconn.out b/tests/qemu-iotests/tests/nbd-multiconn.out
> new file mode 100644
> index 000000000000..4554099e4d62
> --- /dev/null
> +++ b/tests/qemu-iotests/tests/nbd-multiconn.out
> @@ -0,0 +1,14 @@
> +QA output created by nbd-multiconn
> +
> +=== Initial image setup ===
> +
> +Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=4194304
> +wrote 2097152/2097152 bytes at offset 0
> +2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +wrote 2097152/2097152 bytes at offset 2097152
> +2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +
> +=== Demonstrate parallel clients ===
> +
> +Flush appears to be consistent across connections
> +*** done
> 


-- 
Best regards,
Vladimir

Re: [PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Richard W.M. Jones 2 years, 8 months ago
On Fri, Aug 27, 2021 at 10:09:16AM -0500, Eric Blake wrote:
> +# Parallel client connections are easier with libnbd than qemu-io:
> +if ! (nbdsh --version) >/dev/null 2>&1; then

I'm curious why the parentheses are needed here?

> +export nbd_unix_socket
> +nbdsh -c '
> +import os
> +sock = os.getenv("nbd_unix_socket")
> +h = []
> +
> +for i in range(3):
> +  h.append(nbd.NBD())
> +  h[i].connect_unix(sock)
> +  assert h[i].can_multi_conn()
> +
> +buf1 = h[0].pread(1024 * 1024, 0)
> +if buf1 != b"\x01" * 1024 * 1024:
> +  print("Unexpected initial read")
> +buf2 = b"\x03" * 1024 * 1024
> +h[1].pwrite(buf2, 0)                   #1
> +h[2].flush()
> +buf1 = h[0].pread(1024 * 1024, 0)
> +if buf1 == buf2:
> +  print("Flush appears to be consistent across connections")

The test is ... simple.

Would it be a better test if the statement at line #1 above were
h[1].aio_pwrite instead, so that the operation is only started?  This
would depend on some assumptions inside libnbd: that the h[1] socket
is immediately writable and that libnbd's statement will start the
write before returning.  Both are likely to be true, but perhaps you
could do something with parsing libnbd debug output to check that the
state machine has started the write.

Another idea would be to make the write at line #1 very large, so
that perhaps the qemu block layer would split it up.  After the flush
you would read the beginning and end bytes of the written range to
approximate a check that the whole write has been flushed and no part
of it is still hanging around in the cache of the first connection.
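
Something like this, say (untested; sizes invented, and the test image
would need to grow to match), reusing the handles from the test above:

    big = b"\x07" * (32 * 1024 * 1024)   # hopefully big enough to get split
    h[1].pwrite(big, 0)
    h[2].flush()
    head = h[0].pread(64 * 1024, 0)
    tail = h[0].pread(64 * 1024, len(big) - 64 * 1024)
    assert head == big[:64 * 1024] and tail == big[-64 * 1024:]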

Anyway the patch as written seems fine to me so:

Reviewed-by: Richard W.M. Jones <rjones@redhat.com>

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html


Re: [PATCH] nbd/server: Advertise MULTI_CONN for shared writable exports
Posted by Eric Blake 2 years, 8 months ago
On Fri, Aug 27, 2021 at 05:48:10PM +0100, Richard W.M. Jones wrote:
> On Fri, Aug 27, 2021 at 10:09:16AM -0500, Eric Blake wrote:
> > +# Parallel client connections are easier with libnbd than qemu-io:
> > +if ! (nbdsh --version) >/dev/null 2>&1; then
> 
> I'm curious why the parentheses are needed here?

Habit - some earlier Bourne shells would leak an error message if
nbdsh was not found unless you guaranteed that stderr was redirected
first by using the subshell (or maybe that's what happens when you do
feature tests of a shell builtin rather than an external app).  But
since this test uses bash, you're right that it's not necessary in
this instance, so I'll simplify.

> 
> > +export nbd_unix_socket
> > +nbdsh -c '
> > +import os
> > +sock = os.getenv("nbd_unix_socket")
> > +h = []
> > +
> > +for i in range(3):
> > +  h.append(nbd.NBD())
> > +  h[i].connect_unix(sock)
> > +  assert h[i].can_multi_conn()
> > +
> > +buf1 = h[0].pread(1024 * 1024, 0)
> > +if buf1 != b"\x01" * 1024 * 1024:
> > +  print("Unexpected initial read")
> > +buf2 = b"\x03" * 1024 * 1024
> > +h[1].pwrite(buf2, 0)                   #1
> > +h[2].flush()
> > +buf1 = h[0].pread(1024 * 1024, 0)
> > +if buf1 == buf2:
> > +  print("Flush appears to be consistent across connections")
> 
> The test is ... simple.
> 
> Would it be a better test if the statement at line #1 above was
> h.aio_pwrite instead, so the operation is only started?  This would
> depend on some assumptions inside libnbd: That the h[1] socket is
> immediately writable and that libnbd's statement will write before
> returning, which are likely to be true, but perhaps you could do
> something with parsing libnbd debug statements to check that the state
> machine has started the write.

The rules on consistent operations are tricky - it is not enough that
the clients started their requests in order; the server must also have
processed them in order.  Even though you can have two clients in one
process, with enough sequencing code in between that you are sure
client A sent NBD_CMD_WRITE prior to client B sending NBD_CMD_FLUSH,
you do NOT have enough power over TCP to prove that the server did not
receive client B's request first, unless you ALSO ensure that client A
waited for the server's response to the NBD_CMD_WRITE.  Similarly for
client C sending NBD_CMD_READ prior to client B getting the server's
response to NBD_CMD_FLUSH.  As soon as out-of-order request processing
can happen (which is more likely when you have parallel sockets), you
can only guarantee cross-client consistency if you can also guarantee
which replies were received prior to sending a given request.  Using
async I/O does not provide those guarantees; the only reliable way to
test cross-client consistency is with synchronous commands.
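
That is, the difference amounts to (sketch only, not proposing to add
this to the test):

    # reliable: pwrite() does not return until the server's reply is in
    # hand, so the flush provably started after the write completed
    h[1].pwrite(buf2, 0)
    h[2].flush()

    # not reliable: aio_pwrite() merely queues the request, so the server
    # may well process h[2]'s flush before h[1]'s write
    cookie = h[1].aio_pwrite(buf2, 0)
    h[2].flush()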

Hmm - I should probably tweak the qemu-nbd command to set an explicit
cache mode (if it accidentally settles on cache=writethrough, that
defeats the point of an independent client's flush impacting all other
clients, since the client doing the write will also be flushing).

> 
> Another idea would be to make the write at line #1 be very large, so
> that perhaps the qemu block layer would split it up.  After the flush
> you would read the beginning and end bytes of the written part to
> approximate a check that the whole write has been flushed so parts of
> it are not hanging around in the cache of the first connection.
> 
> Anyway the patch as written seems fine to me so:
> 
> Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org