[Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support

Peter Xu posted 15 patches 6 years, 7 months ago
Patches applied successfully
git fetch https://github.com/patchew-project/qemu tags/patchew/1505375436-28439-1-git-send-email-peterx@redhat.com
Test checkpatch passed
Test docker failed
Test s390x passed
There is a newer version of this series
[Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 7 months ago
This series was born from this one:

  https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html

The design comes from Markus, along with the many discussions in the
previous thread.  My heartfelt thanks to Markus, Daniel, Dave,
Stefan, and others for discussing the topic (...again!) and providing
shiny ideas and suggestions.  We finally arrived at a solution that
seems to satisfy everyone.

I restarted the versioning since this series is totally different
from the previous one, so now it's version 1.

In case new reviewers come along without having read the previous
discussions, I will try to summarize what this is all about.

What is OOB execution?
======================

It's short for Out-Of-Band execution; the name was coined by Markus.
It's a way to quickly execute a QMP request.  Originally, a QMP
request goes through these steps:

      JSON Parser --> QMP Dispatcher --> Respond
          /|\    (2)                (3)     |
       (1) |                               \|/ (4)
           +---------  main thread  --------+

The requests are executed by the so-called QMP dispatcher after the
JSON is parsed.  If OOB is on, we run the command directly in the
parser and return quickly.
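
For example, a typical exchange today looks like this on the wire
(the reply is only sent after the command has run in the main
thread):

    C: { "execute": "query-status", "id": "req-1" }
    S: { "return": { "status": "running", "singlestep": false,
                     "running": true }, "id": "req-1" }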

Yes, I know that in the current code the parser calls the dispatcher
directly (please see handle_qmp_command()).  However, that is no
longer true after this series (the parser will have its own IO thread,
and the dispatcher will still run in the main thread).  So OOB does
bring something different.

There are more details on why we want OOB and on the differences and
relationships between OOB, async QMP, block/general jobs, etc., but
IMHO that's slightly off topic (and believe me, it's not easy for me
to summarize all that).  For more information, please refer to [1].

Summary ends here.

Some Implementation Details
===========================

Again, as mentioned, the old QMP workflow is this:

      JSON Parser --> QMP Dispatcher --> Respond
          /|\    (2)                (3)     |
       (1) |                               \|/ (4)
           +---------  main thread  --------+

What this series does is, firstly:

      JSON Parser     QMP Dispatcher --> Respond
          /|\ |           /|\       (4)     |
           |  | (2)        | (3)            |  (5)
       (1) |  +----->      |               \|/
           +---------  main thread  <-------+

And further:

               queue/kick
     JSON Parser ======> QMP Dispatcher --> Respond
         /|\ |     (3)       /|\        (4)    |
      (1) |  | (2)            |                |  (5)
          | \|/               |               \|/
        IO thread         main thread  <-------+

The series then introduces the "allow-oob" parameter in the QAPI
schema for defining commands, and a "run-oob" flag to let an
oob-allowed command run directly in the parser.

The last patch enables this for the "migrate-incoming" command.
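
As a sketch of how this looks (see the patches for the exact syntax),
the schema side marks the command:

    { 'command': 'migrate-incoming', 'data': {'uri': 'str'},
      'allow-oob': true }

and the client requests out-of-band execution with something like:

    { "execute": "migrate-incoming",
      "arguments": { "uri": "tcp:0:4444" },
      "id": "income-1",
      "control": { "run-oob": true } }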

Please review.  Thanks.

[1] https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html

Peter Xu (15):
  char-io: fix possible race on IOWatchPoll
  qobject: allow NULL for qstring_get_str()
  qobject: introduce qobject_to_str()
  monitor: move skip_flush into monitor_data_init
  qjson: add "opaque" field to JSONMessageParser
  monitor: move the cur_mon hack deeper for QMP
  monitor: unify global init
  monitor: create IO thread
  monitor: allow to use IO thread for parsing
  monitor: introduce monitor_qmp_respond()
  monitor: separate QMP parser and dispatcher
  monitor: enable IO thread for (qmp & !mux) typed
  qapi: introduce new cmd option "allow-oob"
  qmp: support out-of-band (oob) execution
  qmp: let migrate-incoming allow out-of-band

 chardev/char-io.c                |  15 ++-
 docs/devel/qapi-code-gen.txt     |  51 ++++++-
 include/monitor/monitor.h        |   2 +-
 include/qapi/qmp/dispatch.h      |   2 +
 include/qapi/qmp/json-streamer.h |   8 +-
 include/qapi/qmp/qstring.h       |   1 +
 monitor.c                        | 283 +++++++++++++++++++++++++++++++--------
 qapi/introspect.json             |   6 +-
 qapi/migration.json              |   3 +-
 qapi/qmp-dispatch.c              |  34 +++++
 qga/main.c                       |   5 +-
 qobject/json-streamer.c          |   7 +-
 qobject/qjson.c                  |   5 +-
 qobject/qstring.c                |  13 +-
 scripts/qapi-commands.py         |  19 ++-
 scripts/qapi-introspect.py       |  10 +-
 scripts/qapi.py                  |  15 ++-
 scripts/qapi2texi.py             |   2 +-
 tests/libqtest.c                 |   5 +-
 tests/qapi-schema/test-qapi.py   |   2 +-
 trace-events                     |   2 +
 vl.c                             |   3 +-
 22 files changed, 398 insertions(+), 95 deletions(-)

-- 
2.7.4


Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Marc-André Lureau 6 years, 7 months ago
Hi

On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
> This series was born from this one:
>
>   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
>
> The design comes from Markus, along with the many discussions in the
> previous thread.  My heartfelt thanks to Markus, Daniel, Dave,
> Stefan, and others for discussing the topic (...again!) and providing
> shiny ideas and suggestions.  We finally arrived at a solution that
> seems to satisfy everyone.
>
> I restarted the versioning since this series is totally different
> from the previous one, so now it's version 1.
>
> In case new reviewers come along without having read the previous
> discussions, I will try to summarize what this is all about.
>
> What is OOB execution?
> ======================
>
> It's short for Out-Of-Band execution; the name was coined by Markus.
> It's a way to quickly execute a QMP request.  Originally, a QMP
> request goes through these steps:
>
>       JSON Parser --> QMP Dispatcher --> Respond
>           /|\    (2)                (3)     |
>        (1) |                               \|/ (4)
>            +---------  main thread  --------+
>
> The requests are executed by the so-called QMP dispatcher after the
> JSON is parsed.  If OOB is on, we run the command directly in the
> parser and return quickly.

In this case, the "id" field should be mandatory for all commands;
otherwise the client cannot distinguish the replies to the last/oob
command from those to the previous commands.

This should probably be enforced upfront by client capability checks;
more below.
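
E.g. with two commands in flight (names are placeholders), only the
"id" lets the client match the replies when the oob one overtakes the
other:

    C: { "execute": "some-slow-command", "id": 1 }
    C: { "execute": "some-oob-command", "id": 2 }
    S: { "return": {}, "id": 2 }     <- oob reply arrives first
    S: { "return": {}, "id": 1 }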

> Yes, I know that in the current code the parser calls the dispatcher
> directly (please see handle_qmp_command()).  However, that is no
> longer true after this series (the parser will have its own IO thread,
> and the dispatcher will still run in the main thread).  So OOB does
> bring something different.
>
> There are more details on why we want OOB and on the differences and
> relationships between OOB, async QMP, block/general jobs, etc., but
> IMHO that's slightly off topic (and believe me, it's not easy for me
> to summarize all that).  For more information, please refer to [1].
>
> Summary ends here.
>
> Some Implementation Details
> ===========================
>
> Again, as mentioned, the old QMP workflow is this:
>
>       JSON Parser --> QMP Dispatcher --> Respond
>           /|\    (2)                (3)     |
>        (1) |                               \|/ (4)
>            +---------  main thread  --------+
>
> What this series does is, firstly:
>
>       JSON Parser     QMP Dispatcher --> Respond
>           /|\ |           /|\       (4)     |
>            |  | (2)        | (3)            |  (5)
>        (1) |  +----->      |               \|/
>            +---------  main thread  <-------+
>
> And further:
>
>                queue/kick
>      JSON Parser ======> QMP Dispatcher --> Respond
>          /|\ |     (3)       /|\        (4)    |
>       (1) |  | (2)            |                |  (5)
>           | \|/               |               \|/
>         IO thread         main thread  <-------+

Is the queue per monitor or per client? Will dispatching proceed
even if the client has disconnected, and will new clients receive
the replies to previous clients' commands? I believe there should be
a per-client context, so there won't be "id" request conflicts.

>
> The series then introduces the "allow-oob" parameter in the QAPI
> schema for defining commands, and a "run-oob" flag to let an
> oob-allowed command run directly in the parser.

From a protocol point of view, I find the per-command "run-oob"
distinction a bit pointless. It does help with legacy clients that
wouldn't expect out-of-order replies if qemu were to run oob commands
oob by default, though. Clients shouldn't care about how/where a
command is queued or not. If they send a command, they want it
processed as quickly as possible. However, it can be interesting to
know whether the implementation of a command is able to deliver oob,
so that data in the introspection could be useful.
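
E.g. query-qmp-schema could then report something like this for each
command (sketch):

    { "name": "migrate-incoming", "meta-type": "command",
      "arg-type": "...", "ret-type": "...", "allow-oob": true }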

I would rather propose a client/server capability in qmp_capabilities,
call it "oob":

This capability indicates support for oob commands.

An oob command is a regular client message request with the "id"
member mandatory, but the reply may be delivered out of order by the
server if the client supports it too.

If both the server and the client have the "oob" capability, the
server can handle new client requests while previous requests are being
processed.

If the client doesn't have the "oob" capability, it may still call
an oob command, and make multiple outstanding calls. In this case,
the commands are processed in order, so the replies will also be in
order. The "id" member isn't mandatory then.

The client should match the replies with the "id" member associated
with the requests.

When a client is disconnected, the pending commands are not
necessarily cancelled. But future clients will not get replies to
commands they didn't make (they might, however, receive side-effect
events).

Note that without "oob" support, a client may still receive messages
(or events) from the server between the time a request is handled by
the server and the reply is received. It must thus be prepared to
dispatch both events and the reply after sending a request.
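
Concretely, the negotiation could look like this (sketch):

    S: { "QMP": { "version": { ... }, "capabilities": [ "oob" ] } }
    C: { "execute": "qmp_capabilities",
         "arguments": { "enable": [ "oob" ] } }
    S: { "return": {} }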


(see also https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg03641.html)


> The last patch enables this for the "migrate-incoming" command.
>
> Please review.  Thanks.
>
> [1] https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
>
> Peter Xu (15):
>   char-io: fix possible race on IOWatchPoll
>   qobject: allow NULL for qstring_get_str()
>   qobject: introduce qobject_to_str()
>   monitor: move skip_flush into monitor_data_init
>   qjson: add "opaque" field to JSONMessageParser
>   monitor: move the cur_mon hack deeper for QMP
>   monitor: unify global init
>   monitor: create IO thread
>   monitor: allow to use IO thread for parsing
>   monitor: introduce monitor_qmp_respond()
>   monitor: separate QMP parser and dispatcher

There should be a limit on the number of requests the thread can
queue. Before the patch, the limit was enforced by system socket
buffering, I think. Now, should oob commands still be processed even
if the queue is full? If so, the thread can't be suspended.




-- 
Marc-André Lureau

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Stefan Hajnoczi 6 years, 7 months ago
On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> There should be a limit on the number of requests the thread can
> queue. Before the patch, the limit was enforced by system socket
> buffering, I think. Now, should oob commands still be processed even
> if the queue is full? If so, the thread can't be suspended.

I agree.

Memory usage must be bounded.  The number of requests is less important
than the amount of memory consumed by them.

Existing QMP clients that send multiple QMP commands without waiting for
replies need to rethink their strategy because OOB commands cannot be
processed if queued non-OOB commands consume too much memory.

Stefan

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 7 months ago
On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > There should be a limit on the number of requests the thread can
> > queue. Before the patch, the limit was enforced by system socket
> > buffering, I think. Now, should oob commands still be processed even
> > if the queue is full? If so, the thread can't be suspended.
> 
> I agree.
> 
> Memory usage must be bounded.  The number of requests is less important
> than the amount of memory consumed by them.
> 
> Existing QMP clients that send multiple QMP commands without waiting for
> replies need to rethink their strategy because OOB commands cannot be
> processed if queued non-OOB commands consume too much memory.

Thanks for pointing this out.  Yes, the memory usage problem is valid,
as Markus pointed out as well in previous discussions (in the "Flow
Control" section of that long reply).  Hopefully this series basically
works from a design perspective; then I'll add the flow control in the
next version.

Regarding what we should do if the limit is reached: Markus provided
a few options, but the one I prefer most is that we don't respond, but
send an event showing that a command was dropped.  However, I would
like it not queued, but sent as a direct reply (after all, it's an
event, and we should not need to care much about its ordering).  Then
we can get rid of the babysitting of those "to be failed" requests
asap, and meanwhile we don't lose anything IMHO.
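
Something like this hypothetical event (name and fields to be
decided):

    { "event": "COMMAND_DROPPED",
      "data": { "id": "req-42", "reason": "queue-full" } }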

I think I'm also missing at least a unit test for this new interface.
Again, I'll add it once the whole idea has proven solid.  Thanks,

-- 
Peter Xu

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Stefan Hajnoczi 6 years, 7 months ago
On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > > There should be a limit on the number of requests the thread can
> > > queue. Before the patch, the limit was enforced by system socket
> > > buffering, I think. Now, should oob commands still be processed even
> > > if the queue is full? If so, the thread can't be suspended.
> > 
> > I agree.
> > 
> > Memory usage must be bounded.  The number of requests is less important
> > than the amount of memory consumed by them.
> > 
> > Existing QMP clients that send multiple QMP commands without waiting for
> > replies need to rethink their strategy because OOB commands cannot be
> > processed if queued non-OOB commands consume too much memory.
> 
> Thanks for pointing this out.  Yes, the memory usage problem is valid,
> as Markus pointed out as well in previous discussions (in the "Flow
> Control" section of that long reply).  Hopefully this series basically
> works from a design perspective; then I'll add the flow control in the
> next version.
> 
> Regarding what we should do if the limit is reached: Markus provided
> a few options, but the one I prefer most is that we don't respond, but
> send an event showing that a command was dropped.  However, I would
> like it not queued, but sent as a direct reply (after all, it's an
> event, and we should not need to care much about its ordering).  Then
> we can get rid of the babysitting of those "to be failed" requests
> asap, and meanwhile we don't lose anything IMHO.
> 
> I think I'm also missing at least a unit test for this new interface.
> Again, I'll add it once the whole idea has proven solid.  Thanks,

Another solution: the server reports available receive buffer space to
the client.  The server only guarantees immediate OOB processing when
the client stays within the receive buffer size.

Clients wishing to take advantage of OOB must query the receive buffer
size and make sure to leave enough room.
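
For example (command and field names are hypothetical):

    C: { "execute": "query-qmp-limits" }
    S: { "return": { "receive-buffer-bytes": 65536 } }

A client that keeps its outstanding requests below that size would
then be guaranteed immediate OOB processing.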

The advantage of this approach is that the semantics are backwards
compatible (existing clients may continue to queue as many commands as
they wish) and it requires no new behavior in the client (no new QMP
event code path).

Stefan

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Daniel P. Berrange 6 years, 7 months ago
On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi wrote:
> Another solution: the server reports available receive buffer space to
> the client.  The server only guarantees immediate OOB processing when
> the client stays within the receive buffer size.
> 
> Clients wishing to take advantage of OOB must query the receive buffer
> size and make sure to leave enough room.

I don't think having to query it ahead of time is particularly nice,
and of course it is inherently racy.

I would just have QEMU emit an event when it pauses processing of the
incoming commands due to a full queue.  If the event includes the ID
of the last queued command, the client will know which (if any) of
its outstanding commands are delayed.  Another event can be sent when
it restarts reading.
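
i.e. something like this (event names are hypothetical):

    S: { "event": "QUEUE_FULL", "data": { "last-queued-id": "req-7" } }
       ... client stops sending, QEMU drains the queue ...
    S: { "event": "QUEUE_READY" }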

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Daniel P. Berrange (berrange@redhat.com) wrote:
> I don't think having to query it ahead of time is particularly nice,
> and of course it is inherently racy.
> 
> I would just have QEMU emit an event when it pauses processing of the
> incoming commands due to a full queue.  If the event includes the ID
> of the last queued command, the client will know which (if any) of
> its outstanding commands are delayed.  Another event can be sent when
> it restarts reading.

Hmm, and now we're implementing flow control!

a) What exactly are the current semantics/buffer sizes?
b) When do clients send multiple QMP commands on one channel without
   waiting for the response to the previous command?
c) Would one queue entry for each class of commands/channel work?
   (Where the classes of commands are currently 'normal' and 'oob')

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Daniel P. Berrange 6 years, 7 months ago
On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert wrote:
> Hmm, and now we're implementing flow control!
> 
> a) What exactly are the current semantics/buffer sizes?
> b) When do clients send multiple QMP commands on one channel without
>    waiting for the response to the previous command?
> c) Would one queue entry for each class of commands/channel work?
>    (Where the classes of commands are currently 'normal' and 'oob')

I do wonder if we need to worry about request limiting at all from the
client side.  For non-OOB commands clients will wait for a reply before
sending a 2nd non-OOB command, so you'll never get a deep queue of them.

OOB commands are supposed to be things which can be handled quickly
without blocking, so even if a client sent several commands at once
without waiting for replies, they're going to be processed quickly,
so whether we temporarily block reading off the wire is a minor
detail.

IOW, I think we could just have a fixed 10-command queue, and apps could
just pretend that there's an infinite queue and nothing bad would happen
from the app's POV.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Daniel P. Berrange (berrange@redhat.com) wrote:
> I do wonder if we need to worry about request limiting at all from the
> client side.  For non-OOB commands clients will wait for a reply before
> sending a 2nd non-OOB command, so you'll never get a deep queue of them.
> 
> OOB commands are supposed to be things which can be handled quickly
> without blocking, so even if a client sent several commands at once
> without waiting for replies, they're going to be processed quickly,
> so whether we temporarily block reading off the wire is a minor
> detail.

Let's just define it so that it can't: you send an OOB command and wait
for its response before sending another on that channel.

> IOW, I think we could just have a fixed 10-command queue, and apps could
> just pretend that there's an infinite queue and nothing bad would happen
> from the app's POV.

Can you justify 10 as opposed to 1?

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Daniel P. Berrange 6 years, 7 months ago
On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert wrote:
> Let's just define it so that it can't: you send an OOB command and wait
> for its response before sending another on that channel.
> 
> > IOW, I think we could just have a fixed 10-command queue, and apps could
> > just pretend that there's an infinite queue and nothing bad would happen
> > from the app's POV.
> 
> Can you justify 10 as opposed to 1?

Semantically I don't think it makes a difference if the OOB commands are
being processed sequentially by their thread. A >1 length queue would only
matter for non-OOB commands if an app was filling the pipeline with non-OOB
requests, as then that could block reading of OOB commands. 


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Daniel P. Berrange (berrange@redhat.com) wrote:
> Semantically I don't think it makes a difference if the OOB commands are
> being processed sequentially by their thread. A >1 length queue would only
> matter for non-OOB commands if an app was filling the pipeline with non-OOB
> requests, as then that could block reading of OOB commands. 

But can't we just tell the app not to?

Dave

> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Daniel P. Berrange 6 years, 7 months ago
On Fri, Sep 15, 2017 at 03:29:46PM +0100, Dr. David Alan Gilbert wrote:
> > Semantically I don't think it makes a difference if the OOB commands are
> > being processed sequentially by their thread. A >1 length queue would only
> > matter for non-OOB commands if an app was filling the pipeline with non-OOB
> > requests, as then that could block reading of OOB commands. 
> 
> But can't we just tell the app not to?

Yes, a sensible app would not do that. So this feels like mostly a documentation
problem.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Stefan Hajnoczi 6 years, 7 months ago
On Fri, Sep 15, 2017 at 01:29:13PM +0100, Daniel P. Berrange wrote:
> > IOW, I think we could just have a fixed 10-command queue, and apps could
> > just pretend that there's an infinite queue and nothing bad would happen
> > from the app's POV.
> > 
> > Can you justify 10 as opposed to 1?
> 
> Semantically I don't think it makes a difference if the OOB commands are
> being processed sequentially by their thread. A >1 length queue would only
> matter for non-OOB commands if an app was filling the pipeline with non-OOB
> requests, as then that could block reading of OOB commands. 

To summarize:

The QMP server has a lookahead of 1 command so it can dispatch
out-of-band commands.  If 2 or more non-OOB commands are queued at the
same time then OOB processing will not occur.

Is that right?

Stefan

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Stefan Hajnoczi (stefanha@redhat.com) wrote:
> On Fri, Sep 15, 2017 at 01:29:13PM +0100, Daniel P. Berrange wrote:
> > On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert wrote:
> > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert wrote:
> > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi wrote:
> > > > > > > On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> > > > > > > > On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > > > > > > > > > There should be a limit in the number of requests the thread can
> > > > > > > > > > queue. Before the patch, the limit was enforced by system socket
> > > > > > > > > > buffering I think. Now, should oob commands still be processed even if
> > > > > > > > > > the queue is full? If so, the thread can't be suspended.
> > > > > > > > > 
> > > > > > > > > I agree.
> > > > > > > > > 
> > > > > > > > > Memory usage must be bounded.  The number of requests is less important
> > > > > > > > > than the amount of memory consumed by them.
> > > > > > > > > 
> > > > > > > > > Existing QMP clients that send multiple QMP commands without waiting for
> > > > > > > > > replies need to rethink their strategy because OOB commands cannot be
> > > > > > > > > processed if queued non-OOB commands consume too much memory.
> > > > > > > > 
> > > > > > > > Thanks for pointing this out.  Yes, the memory usage problem is valid,
> > > > > > > > as Markus pointed out as well in previous discussions (in the "Flow
> > > > > > > > Control" section of that long reply).  Hopefully this series basically
> > > > > > > > works from a design perspective; then I'll add this flow control in
> > > > > > > > the next version.
> > > > > > > > 
> > > > > > > > Regarding what we should do if the limit is reached: Markus provided
> > > > > > > > a few options, but the one I prefer most is that we don't respond,
> > > > > > > > but send an event showing that a command is dropped.  However, I
> > > > > > > > would like it not to be queued, but sent as a direct reply (after
> > > > > > > > all, it's an event, and we should not need to care much about its
> > > > > > > > ordering).  Then we can get rid of babysitting those "to be failed"
> > > > > > > > requests asap, and meanwhile we don't lose anything IMHO.
> > > > > > > > 
> > > > > > > > I think I'm also missing at least a unit test for this new interface.
> > > > > > > > Again, I'll add it after the whole idea is proved solid.  Thanks,
> > > > > > > 
> > > > > > > Another solution: the server reports available receive buffer space to
> > > > > > > the client.  The server only guarantees immediate OOB processing when
> > > > > > > the client stays within the receive buffer size.
> > > > > > > 
> > > > > > > Clients wishing to take advantage of OOB must query the receive buffer
> > > > > > > size and make sure to leave enough room.
> > > > > > 
> > > > > > I don't think having to query it ahead of time is particularly nice,
> > > > > > and of course it is inherently racy.
> > > > > > 
> > > > > > I would just have QEMU emit an event when it pauses processing of the
> > > > > > incoming commands due to a full queue.  If the event includes the ID
> > > > > > of the last queued command, the client will know which (if any) of
> > > > > > its outstanding commands are delayed.  Another event can be sent when
> > > > > > it restarts reading.
> > > > > 
> > > > > Hmm and now we're implementing flow control!
> > > > > 
> > > > > a) What exactly are the current semantics/buffer sizes?
> > > > > b) When do clients send multiple QMP commands on one channel without
> > > > > waiting for the response to the previous command?
> > > > > c) Would one queue entry for each class of commands/channel work?
> > > > >   (Where a class of commands is currently 'normal' and 'oob')
> > > > 
> > > > I do wonder if we need to worry about request limiting at all from the
> > > > client side.  For non-OOB commands clients will wait for a reply before
> > > > sending a 2nd non-OOB command, so you'll never get a deep queue of them.
> > > > 
> > > > OOB commands are supposed to be things which can be handled quickly
> > > > without blocking, so even if a client sent several commands at once
> > > > without waiting for replies, they're going to be processed quickly,
> > > > so whether we temporarily block reading off the wire is a minor
> > > > detail.
> > > 
> > > Let's just define it so that it can't - you send an OOB command and wait
> > > for its response before sending another on that channel.
> > > 
> > > > IOW, I think we could just have a fixed 10 command queue and apps just
> > > > pretend that there's an infinite queue and nothing bad would happen from
> > > > the app's POV.
> > > 
> > > Can you justify 10 as opposed to 1?
> > 
> > Semantically I don't think it makes a difference if the OOB commands are
> > being processed sequentially by their thread. A >1 length queue would only
> > matter for non-OOB commands if an app was filling the pipeline with non-OOB
> > requests, as then that could block reading of OOB commands. 
> 
> To summarize:
> 
> The QMP server has a lookahead of 1 command so it can dispatch
> out-of-band commands.  If 2 or more non-OOB commands are queued at the
> same time then OOB processing will not occur.
> 
> Is that right?

I think my view is slightly more complex;
  a) There's a pair of queues for each channel
  b) There's a central pair of queues on the QMP server
    one for OOB commands and one for normal commands.
  c) Each queue is only really guaranteed to be one deep.

  That means that each one of the channels can send a non-OOB
command without getting in the way of a channel that wants
to send one.
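
(As a rough data-structure sketch of that layout - the names are
hypothetical, and the one-deep "queues" are just single slots:)

  typedef struct Request Request;

  typedef struct QueuePair {
      Request *normal;           /* non-OOB slot, guaranteed one deep */
      Request *oob;              /* OOB slot, guaranteed one deep     */
  } QueuePair;

  typedef struct Channel {
      QueuePair queues;          /* (a) per-channel pair              */
      struct Channel *next;
  } Channel;

  typedef struct QMPServer {
      Channel *channels;
      QueuePair central;         /* (b) central pair on the server    */
  } QMPServer;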

Dave

> Stefan
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 7 months ago
On Fri, Sep 15, 2017 at 04:17:07PM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > On Fri, Sep 15, 2017 at 01:29:13PM +0100, Daniel P. Berrange wrote:
> > > On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > > On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi wrote:
> > > > > > > > On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> > > > > > > > > On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > > > > > > > > > > There should be a limit in the number of requests the thread can
> > > > > > > > > > > queue. Before the patch, the limit was enforced by system socket
> > > > > > > > > > > buffering I think. Now, should oob commands still be processed even if
> > > > > > > > > > > the queue is full? If so, the thread can't be suspended.
> > > > > > > > > > 
> > > > > > > > > > I agree.
> > > > > > > > > > 
> > > > > > > > > > Memory usage must be bounded.  The number of requests is less important
> > > > > > > > > > than the amount of memory consumed by them.
> > > > > > > > > > 
> > > > > > > > > > Existing QMP clients that send multiple QMP commands without waiting for
> > > > > > > > > > replies need to rethink their strategy because OOB commands cannot be
> > > > > > > > > > processed if queued non-OOB commands consume too much memory.
> > > > > > > > > 
> > > > > > > > > Thanks for pointing this out.  Yes, the memory usage problem is valid,
> > > > > > > > > as Markus pointed out as well in previous discussions (in the "Flow
> > > > > > > > > Control" section of that long reply).  Hopefully this series basically
> > > > > > > > > works from a design perspective; then I'll add this flow control in
> > > > > > > > > the next version.
> > > > > > > > > 
> > > > > > > > > Regarding what we should do if the limit is reached: Markus provided
> > > > > > > > > a few options, but the one I prefer most is that we don't respond,
> > > > > > > > > but send an event showing that a command is dropped.  However, I
> > > > > > > > > would like it not to be queued, but sent as a direct reply (after
> > > > > > > > > all, it's an event, and we should not need to care much about its
> > > > > > > > > ordering).  Then we can get rid of babysitting those "to be failed"
> > > > > > > > > requests asap, and meanwhile we don't lose anything IMHO.
> > > > > > > > > 
> > > > > > > > > I think I'm also missing at least a unit test for this new interface.
> > > > > > > > > Again, I'll add it after the whole idea is proved solid.  Thanks,
> > > > > > > > 
> > > > > > > > Another solution: the server reports available receive buffer space to
> > > > > > > > the client.  The server only guarantees immediate OOB processing when
> > > > > > > > the client stays within the receive buffer size.
> > > > > > > > 
> > > > > > > > Clients wishing to take advantage of OOB must query the receive buffer
> > > > > > > > size and make sure to leave enough room.
> > > > > > > 
> > > > > > > I don't think having to query it ahead of time is particularly nice,
> > > > > > > and of course it is inherently racy.
> > > > > > > 
> > > > > > > I would just have QEMU emit an event when it pauses processing of the
> > > > > > > incoming commands due to a full queue.  If the event includes the ID
> > > > > > > of the last queued command, the client will know which (if any) of
> > > > > > > its outstanding commands are delayed.  Another event can be sent when
> > > > > > > it restarts reading.
> > > > > > 
> > > > > > Hmm and now we're implementing flow control!
> > > > > > 
> > > > > > a) What exactly are the current semantics/buffer sizes?
> > > > > > b) When do clients send multiple QMP commands on one channel without
> > > > > > waiting for the response to the previous command?
> > > > > > c) Would one queue entry for each class of commands/channel work?
> > > > > >   (Where a class of commands is currently 'normal' and 'oob')
> > > > > 
> > > > > I do wonder if we need to worry about request limiting at all from the
> > > > > client side.  For non-OOB commands clients will wait for a reply before
> > > > > sending a 2nd non-OOB command, so you'll never get a deep queue of them.
> > > > > 
> > > > > OOB commands are supposed to be things which can be handled quickly
> > > > > without blocking, so even if a client sent several commands at once
> > > > > without waiting for replies, they're going to be processed quickly,
> > > > > so whether we temporarily block reading off the wire is a minor
> > > > > detail.
> > > > 
> > > > Let's just define it so that it can't - you send an OOB command and wait
> > > > for its response before sending another on that channel.
> > > > 
> > > > > IOW, I think we could just have a fixed 10 command queue and apps just
> > > > > pretend that there's an infinite queue and nothing bad would happen from
> > > > > the app's POV.
> > > > 
> > > > Can you justify 10 as opposed to 1?
> > > 
> > > Semantically I don't think it makes a difference if the OOB commands are
> > > being processed sequentially by their thread. A >1 length queue would only
> > > matter for non-OOB commands if an app was filling the pipeline with non-OOB
> > > requests, as then that could block reading of OOB commands. 
> > 
> > To summarize:
> > 
> > The QMP server has a lookahead of 1 command so it can dispatch
> > out-of-band commands.  If 2 or more non-OOB commands are queued at the
> > same time then OOB processing will not occur.
> > 
> > Is that right?
> 
> I think my view is slightly more complex;
>   a) There's a pair of queues for each channel
>   b) There's a central pair of queues on the QMP server
>     one for OOB commands and one for normal commands.
>   c) Each queue is only really guaranteed to be one deep.
> 
>   That means that each one of the channels can send a non-OOB
> command without getting in the way of a channel that wants
> to send one.

But the current version should not be that complex:

Firstly, the parser thread will only be enabled for QMP+NO_MIXED monitors.

Then, we only have a single global queue for QMP non-OOB commands, and
we don't have a response queue yet.  We respond just like before, in a
synchronous way (I explained why - for OOB we don't need that
complexity IMHO).

When we parse a command, we execute it directly if it is OOB; otherwise
we put it onto the request queue.  Request queue handling is done by a
main thread QEMUBH.  That's all.

Would this "simple version" suffice to implement this whole OOB idea?

(Again, I really don't think we need to limit the queue length to 1,
 though we can make it small.)
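
(A compilable sketch of that flow - hypothetical names, a mutex-guarded
global array standing in for the queue, and a plain function standing
in for the QEMUBH; the actual threads are elided:)

  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>
  #include <stdbool.h>

  #define QUEUE_MAX 8

  static const char *req_queue[QUEUE_MAX]; /* single global non-OOB queue */
  static int queue_len;
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static bool is_oob(const char *req)
  {
      return strstr(req, "oob") != NULL;   /* stand-in for a schema check */
  }

  /* runs in the parser (IO) thread, once per parsed command */
  static void handle_parsed(const char *req)
  {
      if (is_oob(req)) {
          printf("io thread: execute %s directly\n", req);
          return;                          /* replied synchronously */
      }
      pthread_mutex_lock(&lock);
      if (queue_len < QUEUE_MAX) {
          req_queue[queue_len++] = req;    /* main thread picks it up */
      }
      pthread_mutex_unlock(&lock);
  }

  /* what the main-thread QEMUBH would run */
  static void request_bh(void)
  {
      pthread_mutex_lock(&lock);
      for (int i = 0; i < queue_len; i++) {
          printf("main thread: dispatch %s\n", req_queue[i]);
      }
      queue_len = 0;
      pthread_mutex_unlock(&lock);
  }

  int main(void)
  {
      handle_parsed("device_add");
      handle_parsed("oob-migrate-pause");
      handle_parsed("device_del");
      request_bh();
      return 0;
  }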

-- 
Peter Xu

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Peter Xu (peterx@redhat.com) wrote:
> On Fri, Sep 15, 2017 at 04:17:07PM +0100, Dr. David Alan Gilbert wrote:
> > * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > > On Fri, Sep 15, 2017 at 01:29:13PM +0100, Daniel P. Berrange wrote:
> > > > On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert wrote:
> > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > > > On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> > > > > > > > > > On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > > > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > > > > > > > > > > > There should be a limit in the number of requests the thread can
> > > > > > > > > > > > queue. Before the patch, the limit was enforced by system socket
> > > > > > > > > > > > buffering I think. Now, should oob commands still be processed even if
> > > > > > > > > > > > the queue is full? If so, the thread can't be suspended.
> > > > > > > > > > > 
> > > > > > > > > > > I agree.
> > > > > > > > > > > 
> > > > > > > > > > > Memory usage must be bounded.  The number of requests is less important
> > > > > > > > > > > than the amount of memory consumed by them.
> > > > > > > > > > > 
> > > > > > > > > > > Existing QMP clients that send multiple QMP commands without waiting for
> > > > > > > > > > > replies need to rethink their strategy because OOB commands cannot be
> > > > > > > > > > > processed if queued non-OOB commands consume too much memory.
> > > > > > > > > > 
> > > > > > > > > > Thanks for pointing this out.  Yes, the memory usage problem is valid,
> > > > > > > > > > as Markus pointed out as well in previous discussions (in the "Flow
> > > > > > > > > > Control" section of that long reply).  Hopefully this series basically
> > > > > > > > > > works from a design perspective; then I'll add this flow control in
> > > > > > > > > > the next version.
> > > > > > > > > > 
> > > > > > > > > > Regarding what we should do if the limit is reached: Markus provided
> > > > > > > > > > a few options, but the one I prefer most is that we don't respond,
> > > > > > > > > > but send an event showing that a command is dropped.  However, I
> > > > > > > > > > would like it not to be queued, but sent as a direct reply (after
> > > > > > > > > > all, it's an event, and we should not need to care much about its
> > > > > > > > > > ordering).  Then we can get rid of babysitting those "to be failed"
> > > > > > > > > > requests asap, and meanwhile we don't lose anything IMHO.
> > > > > > > > > > 
> > > > > > > > > > I think I'm also missing at least a unit test for this new interface.
> > > > > > > > > > Again, I'll add it after the whole idea is proved solid.  Thanks,
> > > > > > > > > 
> > > > > > > > > Another solution: the server reports available receive buffer space to
> > > > > > > > > the client.  The server only guarantees immediate OOB processing when
> > > > > > > > > the client stays within the receive buffer size.
> > > > > > > > > 
> > > > > > > > > Clients wishing to take advantage of OOB must query the receive buffer
> > > > > > > > > size and make sure to leave enough room.
> > > > > > > > 
> > > > > > > > I don't think having to query it ahead of time is particularly nice,
> > > > > > > > and of course it is inherently racy.
> > > > > > > > 
> > > > > > > > I would just have QEMU emit an event when it pauses processing of the
> > > > > > > > incoming commands due to a full queue.  If the event includes the ID
> > > > > > > > of the last queued command, the client will know which (if any) of
> > > > > > > > its outstanding commands are delayed.  Another event can be sent when
> > > > > > > > it restarts reading.
> > > > > > > 
> > > > > > > Hmm and now we're implementing flow control!
> > > > > > > 
> > > > > > > a) What exactly are the current semantics/buffer sizes?
> > > > > > > b) When do clients send multiple QMP commands on one channel without
> > > > > > > waiting for the response to the previous command?
> > > > > > > c) Would one queue entry for each class of commands/channel work?
> > > > > > >   (Where a class of commands is currently 'normal' and 'oob')
> > > > > > 
> > > > > > I do wonder if we need to worry about request limiting at all from the
> > > > > > client side.  For non-OOB commands clients will wait for a reply before
> > > > > > sending a 2nd non-OOB command, so you'll never get a deep queue of them.
> > > > > > 
> > > > > > OOB commands are supposed to be things which can be handled quickly
> > > > > > without blocking, so even if a client sent several commands at once
> > > > > > without waiting for replies, they're going to be processed quickly,
> > > > > > so whether we temporarily block reading off the wire is a minor
> > > > > > detail.
> > > > > 
> > > > > Let's just define it so that it can't - you send an OOB command and wait
> > > > > for its response before sending another on that channel.
> > > > > 
> > > > > > IOW, I think we could just have a fixed 10 command queue and apps just
> > > > > > pretend that there's an infinite queue and nothing bad would happen from
> > > > > > the app's POV.
> > > > > 
> > > > > Can you justify 10 as opposed to 1?
> > > > 
> > > > Semantically I don't think it makes a difference if the OOB commands are
> > > > being processed sequentially by their thread. A >1 length queue would only
> > > > matter for non-OOB commands if an app was filling the pipeline with non-OOB
> > > > requests, as then that could block reading of OOB commands. 
> > > 
> > > To summarize:
> > > 
> > > The QMP server has a lookahead of 1 command so it can dispatch
> > > out-of-band commands.  If 2 or more non-OOB commands are queued at the
> > > same time then OOB processing will not occur.
> > > 
> > > Is that right?
> > 
> > I think my view is slightly more complex;
> >   a) There's a pair of queues for each channel
> >   b) There's a central pair of queues on the QMP server
> >     one for OOB commands and one for normal commands.
> >   c) Each queue is only really guaranteed to be one deep.
> > 
> >   That means that each one of the channels can send a non-OOB
> > command without getting in the way of a channel that wants
> > to send one.
> 
> But the current version should not be that complex:
> 
> Firstly, the parser thread will only be enabled for QMP+NO_MIXED monitors.
> 
> Then, we only have a single global queue for QMP non-OOB commands, and
> we don't have a response queue yet.  We respond just like before, in a
> synchronous way (I explained why - for OOB we don't need that
> complexity IMHO).

I think the discussion started because of two related comments:
  Marc-André said:
     'There should be a limit in the number of requests the thread can
queue'
and Stefan said:
     'Memory usage must be bounded.'

Actually, neither of those cases really worried me (because they only
happen if the client keeps pumping commands, and that seems to be its fault).

However, once you start adding a limit, you've got to be careful - if
you just added a limit to the central queue, then what happens if that
queue is filled by non-OOB commands?

Dave

> When we parse a command, we execute it directly if it is OOB; otherwise
> we put it onto the request queue.  Request queue handling is done by a main
> thread QEMUBH.  That's all.
> 
> Would this "simple version" suffice to implement this whole OOB idea?
> 
> (Again, I really don't think we need to limit the queue length to 1,
>  though we can make it small.)
> 
> -- 
> Peter Xu
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 6 months ago
On Mon, Sep 18, 2017 at 11:40:40AM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > On Fri, Sep 15, 2017 at 04:17:07PM +0100, Dr. David Alan Gilbert wrote:
> > > * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > > > On Fri, Sep 15, 2017 at 01:29:13PM +0100, Daniel P. Berrange wrote:
> > > > > On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > > On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > > > > On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > > On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> > > > > > > > > > > On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > > > > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > > > > > > > > > > > > There should be a limit in the number of requests the thread can
> > > > > > > > > > > > > queue. Before the patch, the limit was enforced by system socket
> > > > > > > > > > > > > buffering I think. Now, should oob commands still be processed even if
> > > > > > > > > > > > > the queue is full? If so, the thread can't be suspended.
> > > > > > > > > > > > 
> > > > > > > > > > > > I agree.
> > > > > > > > > > > > 
> > > > > > > > > > > > Memory usage must be bounded.  The number of requests is less important
> > > > > > > > > > > > than the amount of memory consumed by them.
> > > > > > > > > > > > 
> > > > > > > > > > > > Existing QMP clients that send multiple QMP commands without waiting for
> > > > > > > > > > > > replies need to rethink their strategy because OOB commands cannot be
> > > > > > > > > > > > processed if queued non-OOB commands consume too much memory.
> > > > > > > > > > > 
> > > > > > > > > > > Thanks for pointing this out.  Yes, the memory usage problem is valid,
> > > > > > > > > > > as Markus pointed out as well in previous discussions (in the "Flow
> > > > > > > > > > > Control" section of that long reply).  Hopefully this series basically
> > > > > > > > > > > works from a design perspective; then I'll add this flow control in
> > > > > > > > > > > the next version.
> > > > > > > > > > > 
> > > > > > > > > > > Regarding what we should do if the limit is reached: Markus provided
> > > > > > > > > > > a few options, but the one I prefer most is that we don't respond,
> > > > > > > > > > > but send an event showing that a command is dropped.  However, I
> > > > > > > > > > > would like it not to be queued, but sent as a direct reply (after
> > > > > > > > > > > all, it's an event, and we should not need to care much about its
> > > > > > > > > > > ordering).  Then we can get rid of babysitting those "to be failed"
> > > > > > > > > > > requests asap, and meanwhile we don't lose anything IMHO.
> > > > > > > > > > > 
> > > > > > > > > > > I think I'm also missing at least a unit test for this new interface.
> > > > > > > > > > > Again, I'll add it after the whole idea is proved solid.  Thanks,
> > > > > > > > > > 
> > > > > > > > > > Another solution: the server reports available receive buffer space to
> > > > > > > > > > the client.  The server only guarantees immediate OOB processing when
> > > > > > > > > > the client stays within the receive buffer size.
> > > > > > > > > > 
> > > > > > > > > > Clients wishing to take advantage of OOB must query the receive buffer
> > > > > > > > > > size and make sure to leave enough room.
> > > > > > > > > 
> > > > > > > > > I don't think having to query it ahead of time is particularly nice,
> > > > > > > > > and of course it is inherently racy.
> > > > > > > > > 
> > > > > > > > > I would just have QEMU emit an event when it pauses processing of the
> > > > > > > > > incoming commands due to a full queue.  If the event includes the ID
> > > > > > > > > of the last queued command, the client will know which (if any) of
> > > > > > > > > its outstanding commands are delayed.  Another event can be sent when
> > > > > > > > > it restarts reading.
> > > > > > > > 
> > > > > > > > Hmm and now we're implementing flow control!
> > > > > > > > 
> > > > > > > > a) What exactly are the current semantics/buffer sizes?
> > > > > > > > b) When do clients send multiple QMP commands on one channel without
> > > > > > > > waiting for the response to the previous command?
> > > > > > > > c) Would one queue entry for each class of commands/channel work?
> > > > > > > >   (Where a class of commands is currently 'normal' and 'oob')
> > > > > > > 
> > > > > > > I do wonder if we need to worry about request limiting at all from the
> > > > > > > client side.  For non-OOB commands clients will wait for a reply before
> > > > > > > sending a 2nd non-OOB command, so you'll never get a deep queue of them.
> > > > > > > 
> > > > > > > OOB commands are supposed to be things which can be handled quickly
> > > > > > > without blocking, so even if a client sent several commands at once
> > > > > > > without waiting for replies, they're going to be processed quickly,
> > > > > > > so whether we temporarily block reading off the wire is a minor
> > > > > > > detail.
> > > > > > 
> > > > > > Let's just define it so that it can't - you send an OOB command and wait
> > > > > > for its response before sending another on that channel.
> > > > > > 
> > > > > > > IOW, I think we could just have a fixed 10 command queue and apps just
> > > > > > > pretend that there's an infinite queue and nothing bad would happen from
> > > > > > > the app's POV.
> > > > > > 
> > > > > > Can you justify 10 as opposed to 1?
> > > > > 
> > > > > Semantically I don't think it makes a difference if the OOB commands are
> > > > > being processed sequentially by their thread. A >1 length queue would only
> > > > > matter for non-OOB commands if an app was filling the pipeline with non-OOB
> > > > > requests, as then that could block reading of OOB commands. 
> > > > 
> > > > To summarize:
> > > > 
> > > > The QMP server has a lookahead of 1 command so it can dispatch
> > > > out-of-band commands.  If 2 or more non-OOB commands are queued at the
> > > > same time then OOB processing will not occur.
> > > > 
> > > > Is that right?
> > > 
> > > I think my view is slightly more complex;
> > >   a) There's a pair of queues for each channel
> > >   b) There's a central pair of queues on the QMP server
> > >     one for OOB commands and one for normal commands.
> > >   c) Each queue is only really guaranteed to be one deep.
> > > 
> > >   That means that each one of the channels can send a non-OOB
> > > command without getting in the way of a channel that wants
> > > to send one.
> > 
> > But the current version should not be that complex:
> > 
> > Firstly, the parser thread will only be enabled for QMP+NO_MIXED monitors.
> > 
> > Then, we only have a single global queue for QMP non-OOB commands, and
> > we don't have a response queue yet.  We respond just like before, in a
> > synchronous way (I explained why - for OOB we don't need that
> > complexity IMHO).
> 
> I think the discussion started because of two related comments:
>   Marc-André said:
>      'There should be a limit in the number of requests the thread can
> queue'
> and Stefan said:
>      'Memory usage must be bounded.'
> 
> Actually, neither of those cases really worried me (because they only
> happen if the client keeps pumping commands, and that seems to be its fault).
> 
> However, once you start adding a limit, you've got to be careful - if
> you just added a limit to the central queue, then what happens if that
> queue is filled by non-OOB commands?

Ah!  So I misunderstood "a pair of queues for each channel".  I
thought it meant the input and output of a single monitor, while it
actually means "OOB channel" and "non-OOB channel".

My plan (or say, this version) starts from only one global queue for
non-OOB commands.  There is no queue for OOB commands at all.  As
discussed below [1], if we receive an OOB command, we execute it
directly and reply to the client.  And here the "request queue" will
only queue non-OOB commands.  Maybe the name "request queue" sounds
confusing here.

If so, we should not have the above problem, right?  Since even if the
queue is full (of course there will only be non-OOB commands in the
queue), the parsing still works, and we will still be able to
handle OOB ones:

  req = parse(stream);

  /* OOB commands bypass the queue and run immediately */
  if (is_oob(req)) {
    execute(req);
    return;
  }

  /* queue full: drop the request and tell the client via an event */
  if (queue_full(req_queue)) {
    emit_full_event(req);
    return;
  }

  enqueue(req_queue, req);

So again, this version is a simplified version of the previous
discussion (no oob-queue but only a non-oob-queue, no response queue
but only a request queue, etc.), but I hope it can work.
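
(On the wire, the emit_full_event() above could look something like
this - the event name and fields are only a guess at this point:)

  <- { "event": "COMMAND_DROPPED",
       "data": { "id": "libvirt-42", "reason": "queue-full" } }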

Thanks,

> 
> Dave
> 
> > When we parse a command, we execute it directly if it is OOB; otherwise
> > we put it onto the request queue.  Request queue handling is done by a main
> > thread QEMUBH.  That's all.

[1]

> > 
> > Would this "simple version" suffice to implement this whole OOB idea?
> > 
> > (Again, I really don't think we need to limit the queue length to 1,
> >  though we can make it small.)
> > 
> > -- 
> > Peter Xu
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

-- 
Peter Xu

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 6 months ago
* Peter Xu (peterx@redhat.com) wrote:
> On Mon, Sep 18, 2017 at 11:40:40AM +0100, Dr. David Alan Gilbert wrote:
> > * Peter Xu (peterx@redhat.com) wrote:
> > > On Fri, Sep 15, 2017 at 04:17:07PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > > > > On Fri, Sep 15, 2017 at 01:29:13PM +0100, Daniel P. Berrange wrote:
> > > > > > On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > > > On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > > > > > On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > > > On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> > > > > > > > > > > > On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > > > > > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > > > > > > > > > > > > > There should be a limit in the number of requests the thread can
> > > > > > > > > > > > > > queue. Before the patch, the limit was enforced by system socket
> > > > > > > > > > > > > > buffering I think. Now, should oob commands still be processed even if
> > > > > > > > > > > > > > the queue is full? If so, the thread can't be suspended.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > I agree.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Memory usage must be bounded.  The number of requests is less important
> > > > > > > > > > > > > than the amount of memory consumed by them.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Existing QMP clients that send multiple QMP commands without waiting for
> > > > > > > > > > > > > replies need to rethink their strategy because OOB commands cannot be
> > > > > > > > > > > > > processed if queued non-OOB commands consume too much memory.
> > > > > > > > > > > > 
> > > > > > > > > > > > Thanks for pointing this out.  Yes, the memory usage problem is valid,
> > > > > > > > > > > > as Markus pointed out as well in previous discussions (in the "Flow
> > > > > > > > > > > > Control" section of that long reply).  Hopefully this series basically
> > > > > > > > > > > > works from a design perspective; then I'll add this flow control in
> > > > > > > > > > > > the next version.
> > > > > > > > > > > > 
> > > > > > > > > > > > Regarding what we should do if the limit is reached: Markus provided
> > > > > > > > > > > > a few options, but the one I prefer most is that we don't respond,
> > > > > > > > > > > > but send an event showing that a command is dropped.  However, I
> > > > > > > > > > > > would like it not to be queued, but sent as a direct reply (after
> > > > > > > > > > > > all, it's an event, and we should not need to care much about its
> > > > > > > > > > > > ordering).  Then we can get rid of babysitting those "to be failed"
> > > > > > > > > > > > requests asap, and meanwhile we don't lose anything IMHO.
> > > > > > > > > > > > 
> > > > > > > > > > > > I think I'm also missing at least a unit test for this new interface.
> > > > > > > > > > > > Again, I'll add it after the whole idea is proved solid.  Thanks,
> > > > > > > > > > > 
> > > > > > > > > > > Another solution: the server reports available receive buffer space to
> > > > > > > > > > > the client.  The server only guarantees immediate OOB processing when
> > > > > > > > > > > the client stays within the receive buffer size.
> > > > > > > > > > > 
> > > > > > > > > > > Clients wishing to take advantage of OOB must query the receive buffer
> > > > > > > > > > > size and make sure to leave enough room.
> > > > > > > > > > 
> > > > > > > > > > I don't think having to query it ahead of time is particularly nice,
> > > > > > > > > > and of course it is inherently racy.
> > > > > > > > > > 
> > > > > > > > > > I would just have QEMU emit an event when it pauses processing of the
> > > > > > > > > > incoming commands due to a full queue.  If the event includes the ID
> > > > > > > > > > of the last queued command, the client will know which (if any) of
> > > > > > > > > > its outstanding commands are delayed.  Another event can be sent when
> > > > > > > > > > it restarts reading.
> > > > > > > > > 
> > > > > > > > > Hmm and now we're implementing flow control!
> > > > > > > > > 
> > > > > > > > > a) What exactly are the current semantics/buffer sizes?
> > > > > > > > > b) When do clients send multiple QMP commands on one channel without
> > > > > > > > > waiting for the response to the previous command?
> > > > > > > > > c) Would one queue entry for each class of commands/channel work?
> > > > > > > > >   (Where a class of commands is currently 'normal' and 'oob')
> > > > > > > > 
> > > > > > > > I do wonder if we need to worry about request limiting at all from the
> > > > > > > > client side.  For non-OOB commands clients will wait for a reply before
> > > > > > > > sending a 2nd non-OOB command, so you'll never get a deep queue of them.
> > > > > > > > 
> > > > > > > > OOB commands are supposed to be things which can be handled quickly
> > > > > > > > without blocking, so even if a client sent several commands at once
> > > > > > > > without waiting for replies, they're going to be processed quickly,
> > > > > > > > so whether we temporarily block reading off the wire is a minor
> > > > > > > > detail.
> > > > > > > 
> > > > > > > Let's just define it so that it can't - you send an OOB command and wait
> > > > > > > for its response before sending another on that channel.
> > > > > > > 
> > > > > > > > IOW, I think we could just have a fixed 10 command queue and apps just
> > > > > > > > pretend that there's an infinite queue and nothing bad would happen from
> > > > > > > > the app's POV.
> > > > > > > 
> > > > > > > Can you justify 10 as opposed to 1?
> > > > > > 
> > > > > > Semantically I don't think it makes a difference if the OOB commands are
> > > > > > being processed sequentially by their thread. A >1 length queue would only
> > > > > > matter for non-OOB commands if an app was filling the pipeline with non-OOB
> > > > > > requests, as then that could block reading of OOB commands. 
> > > > > 
> > > > > To summarize:
> > > > > 
> > > > > The QMP server has a lookahead of 1 command so it can dispatch
> > > > > out-of-band commands.  If 2 or more non-OOB commands are queued at the
> > > > > same time then OOB processing will not occur.
> > > > > 
> > > > > Is that right?
> > > > 
> > > > I think my view is slightly more complex;
> > > >   a) There's a pair of queues for each channel
> > > >   b) There's a central pair of queues on the QMP server
> > > >     one for OOB commands and one for normal commands.
> > > >   c) Each queue is only really guaranteed to be one deep.
> > > > 
> > > >   That means that each one of the channels can send a non-OOB
> > > > command without getting in the way of a channel that wants
> > > > to send one.
> > > 
> > > But the current version should not be that complex:
> > > 
> > > Firstly, the parser thread will only be enabled for QMP+NO_MIXED monitors.
> > > 
> > > Then, we only have a single global queue for QMP non-OOB commands, and
> > > we don't have a response queue yet.  We respond just like before, in a
> > > synchronous way (I explained why - for OOB we don't need that
> > > complexity IMHO).
> > 
> > I think the discussion started because of two related comments:
> >   Marc-André said:
> >      'There should be a limit in the number of requests the thread can
> > queue'
> > and Stefan said:
> >      'Memory usage must be bounded.'
> > 
> > Actually, neither of those cases really worried me (because they only
> > happen if the client keeps pumping commands, and that seems to be its fault).
> > 
> > However, once you start adding a limit, you've got to be careful - if
> > you just added a limit to the central queue, then what happens if that
> > queue is filled by non-OOB commands?
> 
> Ah!  So I misunderstood "a pair of queues for each channel".  I
> thought it meant the input and output of a single monitor, while it
> actually means "OOB channel" and "non-OOB channel".
> 
> My plan (or say, this version) starts from only one global queue for
> non-OOB commands.  There is no queue for OOB commands at all.  As
> discussed below [1], if we receive an OOB command, we execute it
> directly and reply to the client.  And here the "request queue" will
> only queue non-OOB commands.  Maybe the name "request queue" sounds
> confusing here.
> 
> If so, we should not have the above problem, right?  Since even if the
> queue is full (of course there will only be non-OOB commands in the
> queue), the parsing still works, and we will still be able to
> handle OOB ones:
> 
>   req = parse(stream);
> 
>   /* OOB commands bypass the queue and run immediately */
>   if (is_oob(req)) {
>     execute(req);
>     return;
>   }
> 
>   /* queue full: drop the request and tell the client via an event */
>   if (queue_full(req_queue)) {
>     emit_full_event(req);
>     return;
>   }
> 
>   enqueue(req_queue, req);
> 
> So again, this version is a simplified version of the previous
> discussion (no oob-queue but only a non-oob-queue, no response queue
> but only a request queue, etc.), but I hope it can work.

That might work.  You have to be really careful about allowing
OOB commands to be parsed, even if the non-OOB queue is full.

One problem is that one client could fill up that shared queue,
then another client would be surprised to find the queue is full
when it tries to send just one command - which is why I thought a separate
queue per client would solve that.
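
(I.e. something like this per-monitor state instead of one global
queue - a sketch, not the actual patch:)

  #include <glib.h>

  typedef struct Monitor {
      /* ... existing monitor fields elided ... */
      GQueue *qmp_requests;    /* bounded queue of non-OOB requests   */
      GMutex  qmp_queue_lock;  /* shared by IO thread and main thread */
  } Monitor;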

Dave

> Thanks,
> 
> > 
> > Dave
> > 
> > > When we parse a command, we execute it directly if it is OOB; otherwise
> > > we put it onto the request queue.  Request queue handling is done by a main
> > > thread QEMUBH.  That's all.
> 
> [1]
> 
> > > 
> > > Would this "simple version" suffice to implement this whole OOB idea?
> > > 
> > > (Again, I really don't think we need to limit the queue length to 1,
> > >  though we can make it small.)
> > > 
> > > -- 
> > > Peter Xu
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> -- 
> Peter Xu
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 6 months ago
On Tue, Sep 19, 2017 at 10:13:52AM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > On Mon, Sep 18, 2017 at 11:40:40AM +0100, Dr. David Alan Gilbert wrote:
> > > * Peter Xu (peterx@redhat.com) wrote:
> > > > On Fri, Sep 15, 2017 at 04:17:07PM +0100, Dr. David Alan Gilbert wrote:
> > > > > * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > > > > > On Fri, Sep 15, 2017 at 01:29:13PM +0100, Daniel P. Berrange wrote:
> > > > > > > On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > > > > On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert wrote:
> > > > > > > > > > * Daniel P. Berrange (berrange@redhat.com) wrote:
> > > > > > > > > > > On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > > > > On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> > > > > > > > > > > > > On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi wrote:
> > > > > > > > > > > > > > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André Lureau wrote:
> > > > > > > > > > > > > > > There should be a limit in the number of requests the thread can
> > > > > > > > > > > > > > > queue. Before the patch, the limit was enforced by system socket
> > > > > > > > > > > > > > > buffering I think. Now, should oob commands still be processed even if
> > > > > > > > > > > > > > > the queue is full? If so, the thread can't be suspended.
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > I agree.
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > Memory usage must be bounded.  The number of requests is less important
> > > > > > > > > > > > > > than the amount of memory consumed by them.
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > Existing QMP clients that send multiple QMP commands without waiting for
> > > > > > > > > > > > > > replies need to rethink their strategy because OOB commands cannot be
> > > > > > > > > > > > > > processed if queued non-OOB commands consume too much memory.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Thanks for pointing this out.  Yes, the memory usage problem is valid,
> > > > > > > > > > > > > as Markus pointed out as well in previous discussions (in the "Flow
> > > > > > > > > > > > > Control" section of that long reply).  Hopefully this series basically
> > > > > > > > > > > > > works from a design perspective; then I'll add this flow control in
> > > > > > > > > > > > > the next version.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Regarding what we should do if the limit is reached: Markus provided
> > > > > > > > > > > > > a few options, but the one I prefer most is that we don't respond,
> > > > > > > > > > > > > but send an event showing that a command is dropped.  However, I
> > > > > > > > > > > > > would like it not to be queued, but sent as a direct reply (after
> > > > > > > > > > > > > all, it's an event, and we should not need to care much about its
> > > > > > > > > > > > > ordering).  Then we can get rid of babysitting those "to be failed"
> > > > > > > > > > > > > requests asap, and meanwhile we don't lose anything IMHO.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > I think I'm also missing at least a unit test for this new interface.
> > > > > > > > > > > > > Again, I'll add it after the whole idea is proved solid.  Thanks,
> > > > > > > > > > > > 
> > > > > > > > > > > > Another solution: the server reports available receive buffer space to
> > > > > > > > > > > > the client.  The server only guarantees immediate OOB processing when
> > > > > > > > > > > > the client stays within the receive buffer size.
> > > > > > > > > > > > 
> > > > > > > > > > > > Clients wishing to take advantage of OOB must query the receive buffer
> > > > > > > > > > > > size and make sure to leave enough room.
> > > > > > > > > > > 
> > > > > > > > > > > I don't think having to query it ahead of time is particularly nice,
> > > > > > > > > > > and of course it is inherently racy.
> > > > > > > > > > > 
> > > > > > > > > > > I would just have QEMU emit an event when it pauses processing of the
> > > > > > > > > > > incoming commands due to a full queue.  If the event includes the ID
> > > > > > > > > > > of the last queued command, the client will know which (if any) of
> > > > > > > > > > > its outstanding commands are delayed.  Another event can be sent when
> > > > > > > > > > > it restarts reading.
> > > > > > > > > > 
> > > > > > > > > > Hmm and now we're implementing flow control!
> > > > > > > > > > 
> > > > > > > > > > a) What exactly are the current semantics/buffer sizes?
> > > > > > > > > > b) When do clients send multiple QMP commands on one channel without
> > > > > > > > > > waiting for the response to the previous command?
> > > > > > > > > > c) Would one queue entry for each class of commands/channel work?
> > > > > > > > > >   (Where a class of commands is currently 'normal' and 'oob')
> > > > > > > > > 
> > > > > > > > > I do wonder if we need to worry about request limiting at all from the
> > > > > > > > > client side.  For non-OOB commands clients will wait for a reply before
> > > > > > > > > sending a 2nd non-OOB command, so you'll never get a deep queue of them.
> > > > > > > > > 
> > > > > > > > > OOB commands are supposed to be things which can be handled quickly
> > > > > > > > > without blocking, so even if a client sent several commands at once
> > > > > > > > > without waiting for replies, they're going to be processed quickly,
> > > > > > > > > so whether we temporarily block reading off the wire is a minor
> > > > > > > > > detail.
> > > > > > > > 
> > > > > > > > Let's just define it so that it can't - you send an OOB command and wait
> > > > > > > > for its response before sending another on that channel.
> > > > > > > > 
> > > > > > > > > IOW, I think we could just have a fixed 10 command queue and apps just
> > > > > > > > > pretend that there's an infinite queue and nothing bad would happen from
> > > > > > > > > the app's POV.
> > > > > > > > 
> > > > > > > > Can you justify 10 as opposed to 1?
> > > > > > > 
> > > > > > > Semantically I don't think it makes a difference if the OOB commands are
> > > > > > > being processed sequentially by their thread. A >1 length queue would only
> > > > > > > matter for non-OOB commands if an app was filling the pipeline with non-OOB
> > > > > > > requests, as then that could block reading of OOB commands. 
> > > > > > 
> > > > > > To summarize:
> > > > > > 
> > > > > > The QMP server has a lookahead of 1 command so it can dispatch
> > > > > > out-of-band commands.  If 2 or more non-OOB commands are queued at the
> > > > > > same time then OOB processing will not occur.
> > > > > > 
> > > > > > Is that right?
> > > > > 
> > > > > I think my view is slightly more complex;
> > > > >   a) There's a pair of queues for each channel
> > > > >   b) There's a central pair of queues on the QMP server
> > > > >     one for OOB commands and one for normal commands.
> > > > >   c) Each queue is only really guaranteed to be one deep.
> > > > > 
> > > > >   That means that each one of the channels can send a non-OOB
> > > > > command without getting in the way of a channel that wants
> > > > > to send one.
> > > > 
> > > > But the current version should not be that complex:
> > > > 
> > > > Firstly, the parser thread will only be enabled for QMP+NO_MIXED monitors.
> > > > 
> > > > Then, we only have a single global queue for QMP non-OOB commands, and
> > > > we don't have a response queue yet.  We respond just like before, in a
> > > > synchronous way (I explained why - for OOB we don't need that
> > > > complexity IMHO).
> > > 
> > > I think the discussion started because of two related comments:
> > >   Marc-André said:
> > >      'There should be a limit in the number of requests the thread can
> > > queue'
> > > and Stefan said:
> > >      'Memory usage must be bounded.'
> > > 
> > > Actually, neither of those cases really worried me (because they only
> > > happen if the client keeps pumping commands, and that seems to be its fault).
> > > 
> > > However, once you start adding a limit, you've got to be careful - if
> > > you just added a limit to the central queue, then what happens if that
> > > queue is filled by non-OOB commands?
> > 
> > Ah!  So I misunderstood "a pair of queues for each channel".  I
> > thought it meant the input and output of a single monitor, while it
> > actually means "OOB channel" and "non-OOB channel".
> > 
> > My plan (or say, this version) starts from only one global queue for
> > non-OOB commands.  There is no queue for OOB commands at all.  As
> > discussed below [1], if we receive an OOB command, we execute it
> > directly and reply to the client.  And here the "request queue" will
> > only queue non-OOB commands.  Maybe the name "request queue" sounds
> > confusing here.
> > 
> > If so, we should not have the above problem, right?  Since even if the
> > queue is full (of course there will only be non-OOB commands in the
> > queue), the parsing still works, and we will still be able to
> > handle OOB ones:
> > 
> >   req = parse(stream);
> > 
> >   /* OOB commands bypass the queue and run immediately */
> >   if (is_oob(req)) {
> >     execute(req);
> >     return;
> >   }
> > 
> >   /* queue full: drop the request and tell the client via an event */
> >   if (queue_full(req_queue)) {
> >     emit_full_event(req);
> >     return;
> >   }
> > 
> >   enqueue(req_queue, req);
> > 
> > So again, this version is a simplified version of the previous
> > discussion (no oob-queue but only a non-oob-queue, no response queue
> > but only a request queue, etc.), but I hope it can work.
> 
> That might work.  You have to be really careful about allowing
> OOB commands to be parsed, even if the non-OOB queue is full.
> 
> One problem is that one client could fill up that shared queue,
> then another client would be surprised to find the queue is full
> when it tries to send just one command - which is why I thought a separate
> queue per client would solve that.

Ah yes.  Let me switch to one queue per monitor in my next post.  Thanks,

-- 
Peter Xu

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> Hi
> 
> On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
> > This series was born from this one:
> >
> >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> >
> > The design comes from Markus, and also the whole bunch of discussions
> > in the previous thread.  My heartfelt thanks to Markus, Daniel, Dave,
> > Stefan, etc. on discussing the topic (...again!), providing shiny
> > ideas and suggestions.  Finally we got such a solution that seems to
> > satisfy everyone.
> >
> > I re-started the versioning since this series is totally different
> > from the previous one.  Now it's version 1.
> >
> > In case new reviewers come along without having read the previous
> > discussions, I will try to summarize what this is all about.
> >
> > What is OOB execution?
> > ======================
> >
> > It's short for Out-Of-Band execution; the name was given by Markus.
> > It's a way to quickly execute a QMP request.  Say, originally QMP
> > goes through these steps:
> >
> >       JSON Parser --> QMP Dispatcher --> Respond
> >           /|\    (2)                (3)     |
> >        (1) |                               \|/ (4)
> >            +---------  main thread  --------+
> >
> > The requests are executed by the so-called QMP-dispatcher after the
> > JSON is parsed.  If OOB is on, we run the command directly in the
> > parser and return quickly.
> 
> All commands should have the "id" field as mandatory in this case;
> otherwise the client will not be able to distinguish the replies
> coming from the last/oob command and the previous commands.
> 
> This should probably be enforced upfront by client capability checks,
> more below.
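> 
> (For example, with invented command names - "->" is client to server,
> "<-" is server to client; the OOB reply overtakes the slow command,
> and only the "id" ties replies back to requests:)
> 
>   -> { "execute": "migrate", "id": 1 }
>   -> { "execute": "migrate-pause", "id": 2 }
>   <- { "return": {}, "id": 2 }
>   <- { "return": {}, "id": 1 }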
> 
> > Yeah, I know that in the current code the parser calls the dispatcher
> > directly (please see handle_qmp_command()).  However, that's no longer
> > true after this series (the parser will have its own IO thread, and the
> > dispatcher will still run in the main thread).  So this OOB does bring
> > something different.
> >
> > There are more details on why OOB is needed and the
> > difference/relationship between OOB, async QMP, block/general jobs,
> > etc., but IMHO that's slightly off topic (and believe me, it's not
> > easy for me to summarize that).  For more information, please refer to [1].
> >
> > Summary ends here.
> >
> > Some Implementation Details
> > ===========================
> >
> > Again, I mentioned that the old QMP workflow is this:
> >
> >       JSON Parser --> QMP Dispatcher --> Respond
> >           /|\    (2)                (3)     |
> >        (1) |                               \|/ (4)
> >            +---------  main thread  --------+
> >
> > What this series does is, firstly:
> >
> >       JSON Parser     QMP Dispatcher --> Respond
> >           /|\ |           /|\       (4)     |
> >            |  | (2)        | (3)            |  (5)
> >        (1) |  +----->      |               \|/
> >            +---------  main thread  <-------+
> >
> > And further:
> >
> >                queue/kick
> >      JSON Parser ======> QMP Dispatcher --> Respond
> >          /|\ |     (3)       /|\        (4)    |
> >       (1) |  | (2)            |                |  (5)
> >           | \|/               |               \|/
> >         IO thread         main thread  <-------+
> 
> Is the queue per monitor or per client? And is dispatching going to
> proceed even if the client is disconnected, and are new clients going
> to receive the replies from previous clients' commands? I believe
> there should be a per-client context, so there won't be "id" request
> conflicts.
> 
> >
> > Then it introduced the "allow-oob" parameter in QAPI schema to define
> > commands, and "run-oob" flag to let oob-allowed command to run in the
> > parser.
> 
> From a protocol point of view, I find the per-command "run-oob"
> distinction a bit pointless. It helps with legacy clients that
> wouldn't expect out-of-order replies if qemu were to run oob commands
> oob by default, though. Clients shouldn't care about how/where a
> command is being queued or not. If they send a command, they want it
> processed as quickly as possible. However, it can be interesting to
> know if the implementation of the command will be able to deliver
> oob, so that data in the introspection could be useful.
> 
> I would rather propose a client/server capability in qmp_capabilities,
> call it "oob":
> 
> This capability indicates oob commands support.
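> 
> (Negotiation could then reuse the existing capabilities handshake,
> along these lines - the greeting member and the "enable" argument
> are hypothetical here:)
> 
>   <- { "QMP": { "version": {}, "capabilities": ["oob"] } }
>   -> { "execute": "qmp_capabilities", "arguments": { "enable": ["oob"] } }
>   <- { "return": {} }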

The problem is indicating which commands support oob as opposed to
indicating whether oob is present at all.  Future versions will
probably make more commands oob-able and a client will want to know
whether it can rely on a particular command being non-blocking.

> An oob command is a regular client message request with the "id"
> member mandatory, but the reply may be delivered
> out of order by the server if the client supports
> it too.
> 
> If both the server and the client have the "oob" capability, the
> server can handle new client requests while previous requests are being
> processed.
> 
> If the client doesn't have the "oob" capability, it may still call
> an oob command, and make multiple outstanding calls. In this case,
> the commands are processed in order, so the replies will also be in
> order. The "id" member isn't mandatory in this case.
> 
> The client should match the replies with the "id" member associated
> with the requests.
> 
> When a client is disconnected, the pending commands are not
> necessarily cancelled. But future clients will not get replies from
> commands they didn't make (they might, however, receive side-effect
> events).

What's the behaviour of the current monitor?


> Note that without "oob" support, a client may still receive
> messages (or events) from the server between the time a request is
> handled by the server and the reply is received. It must thus be
> prepared to handle dispatching both events and replies after sending
> a request.
> 
> 
> (see also https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg03641.html)
> 
> 
> > The last patch enables this for "migrate-incoming" command.
> >
> > Please review.  Thanks.
> >
> > [1] https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> >
> > Peter Xu (15):
> >   char-io: fix possible race on IOWatchPoll
> >   qobject: allow NULL for qstring_get_str()
> >   qobject: introduce qobject_to_str()
> >   monitor: move skip_flush into monitor_data_init
> >   qjson: add "opaque" field to JSONMessageParser
> >   monitor: move the cur_mon hack deeper for QMP
> >   monitor: unify global init
> >   monitor: create IO thread
> >   monitor: allow to use IO thread for parsing
> >   monitor: introduce monitor_qmp_respond()
> >   monitor: separate QMP parser and dispatcher
> 
> There should be a limit on the number of requests the thread can
> queue. Before the patch, the limit was enforced by system socket
> buffering I think. Now, should oob commands still be processed even if
> the queue is full? If so, the thread can't be suspended.

I think the previous discussion was expecting a pair of queues
per client and perhaps a pair of central queues; each pair being
for normal commands and oob commands.
(I'm not expecting these queues to be deep; IMHO '1' is the
right size for this type of queue in both cases).

Dave

> >   monitor: enable IO thread for (qmp & !mux) typed
> >   qapi: introduce new cmd option "allow-oob"
> >   qmp: support out-of-band (oob) execution
> >   qmp: let migrate-incoming allow out-of-band
> >
> >  chardev/char-io.c                |  15 ++-
> >  docs/devel/qapi-code-gen.txt     |  51 ++++++-
> >  include/monitor/monitor.h        |   2 +-
> >  include/qapi/qmp/dispatch.h      |   2 +
> >  include/qapi/qmp/json-streamer.h |   8 +-
> >  include/qapi/qmp/qstring.h       |   1 +
> >  monitor.c                        | 283 +++++++++++++++++++++++++++++++--------
> >  qapi/introspect.json             |   6 +-
> >  qapi/migration.json              |   3 +-
> >  qapi/qmp-dispatch.c              |  34 +++++
> >  qga/main.c                       |   5 +-
> >  qobject/json-streamer.c          |   7 +-
> >  qobject/qjson.c                  |   5 +-
> >  qobject/qstring.c                |  13 +-
> >  scripts/qapi-commands.py         |  19 ++-
> >  scripts/qapi-introspect.py       |  10 +-
> >  scripts/qapi.py                  |  15 ++-
> >  scripts/qapi2texi.py             |   2 +-
> >  tests/libqtest.c                 |   5 +-
> >  tests/qapi-schema/test-qapi.py   |   2 +-
> >  trace-events                     |   2 +
> >  vl.c                             |   3 +-
> >  22 files changed, 398 insertions(+), 95 deletions(-)
> >
> > --
> > 2.7.4
> >
> 
> 
> 
> -- 
> Marc-André Lureau
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 7 months ago
On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert wrote:
> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> > Hi
> > 
> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
> > > This series was born from this one:
> > >
> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> > >
> > > The design comes from Markus, and also the whole-bunch-of discussions
> > > in previous thread.  My heartful thanks to Markus, Daniel, Dave,
> > > Stefan, etc. on discussing the topic (...again!), providing shiny
> > > ideas and suggestions.  Finally we got such a solution that seems to
> > > satisfy everyone.
> > >
> > > I re-started the versioning since this series is totally different
> > > from previous one.  Now it's version 1.
> > >
> > > In case new reviewers come along the way without reading previous
> > > discussions, I will try to do a summary on what this is all about.
> > >
> > > What is OOB execution?
> > > ======================
> > >
> > > It's the shortcut of Out-Of-Band execution, its name is given by
> > > Markus.  It's a way to quickly execute a QMP request.  Say, originally
> > > QMP is going throw these steps:
> > >
> > >       JSON Parser --> QMP Dispatcher --> Respond
> > >           /|\    (2)                (3)     |
> > >        (1) |                               \|/ (4)
> > >            +---------  main thread  --------+
> > >
> > > The requests are executed by the so-called QMP-dispatcher after the
> > > JSON is parsed.  If OOB is on, we run the command directly in the
> > > parser and quickly returns.
> > 
> > All commands should have the "id" field mandatory in this case, else
> > the client will not distinguish the replies coming from the last/oob
> > and the previous commands.
> > 
> > This should probably be enforced upfront by client capability checks,
> > more below.

Hmm yes, since the oob commands are actually running in an async way,
a request ID should be needed here.  However I'm not sure whether
enabling the whole "request ID" thing is too big for this "try to be
small" oob change... And IMHO it suits better as part of the whole
async work (no matter which implementation we'll use).

How about this: we make "id" mandatory for "run-oob" requests only.
Oob commands will then always have an ID, so there is no ordering
issue and we can do them async; for the rest of the non-oob commands,
we still allow them to go without an ID, and since they are not oob,
they'll always be done in order as well.  Would this work?
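
Just to make that concrete, a session could then look like this (the
"control" wrapper used to carry the run-oob flag below is only an
assumption for illustration, not necessarily the exact wire syntax of
this series):

  C: {"execute": "migrate-incoming", "id": "oob-1",
      "control": {"run-oob": true},
      "arguments": {"uri": "tcp:0:4444"}}
  C: {"execute": "query-status"}
  S: {"return": {}, "id": "oob-1"}
  S: {"return": {"running": false, "singlestep": false,
                 "status": "inmigrate"}}

The oob reply carries the mandatory "id" and may arrive out of order;
the in-band command can still go without one.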

> > 
> > > Yeah I know in current code the parser calls dispatcher directly
> > > (please see handle_qmp_command()).  However it's not true again after
> > > this series (parser will has its own IO thread, and dispatcher will
> > > still be run in main thread).  So this OOB does brings something
> > > different.
> > >
> > > There are more details on why OOB and the difference/relationship
> > > between OOB, async QMP, block/general jobs, etc.. but IMHO that's
> > > slightly out of topic (and believe me, it's not easy for me to
> > > summarize that).  For more information, please refers to [1].
> > >
> > > Summary ends here.
> > >
> > > Some Implementation Details
> > > ===========================
> > >
> > > Again, I mentioned that the old QMP workflow is this:
> > >
> > >       JSON Parser --> QMP Dispatcher --> Respond
> > >           /|\    (2)                (3)     |
> > >        (1) |                               \|/ (4)
> > >            +---------  main thread  --------+
> > >
> > > What this series does is, firstly:
> > >
> > >       JSON Parser     QMP Dispatcher --> Respond
> > >           /|\ |           /|\       (4)     |
> > >            |  | (2)        | (3)            |  (5)
> > >        (1) |  +----->      |               \|/
> > >            +---------  main thread  <-------+
> > >
> > > And further:
> > >
> > >                queue/kick
> > >      JSON Parser ======> QMP Dispatcher --> Respond
> > >          /|\ |     (3)       /|\        (4)    |
> > >       (1) |  | (2)            |                |  (5)
> > >           | \|/               |               \|/
> > >         IO thread         main thread  <-------+
> > 
> > Is the queue per monitor or per client?

The queue is currently global. I think yes, we could at least do it
per monitor, but I am not sure whether that is urgent or can be
postponed.  After all, QMPRequest (please refer to patch 11) is now
defined as a (mon, id, req) tuple, so at least the "id" namespace is
per-monitor.

> > And is the dispatching going
> > to be processed even if the client is disconnected, and are new
> > clients going to receive the replies from previous clients'
> > commands?

[1]

(will discuss together below)

> > I
> > believe there should be a per-client context, so there won't be "id"
> > request conflicts.

I'd say I am not familiar with this "client" idea, since after all
IMHO one monitor is currently designed to mostly work with a single
client. Say, unix sockets, telnet: all these backends are only
single-channeled, and one monitor instance can only work with one
client at a time.  Then do we really need to add this client layer on
top of it?  IMHO the user can just provide more monitors if they want
more clients (and at least these clients should know of the existence
of the others, or there might be problems; otherwise user2 will fail
a migration, only to finally notice that user1 has already triggered
one), and the user should manage them well.
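
E.g., each client could get its own monitor with something like the
following (the socket paths are made up):

  -qmp unix:/tmp/qmp-user1.sock,server,nowait \
  -qmp unix:/tmp/qmp-user2.sock,server,nowait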

> > 
> > >
> > > Then it introduced the "allow-oob" parameter in QAPI schema to define
> > > commands, and "run-oob" flag to let oob-allowed command to run in the
> > > parser.
> > 
> > From a protocol point of view, I find that "run-oob" distinction per
> > command a bit pointless. It helps with legacy clients that wouldn't
> > expect out-of-order replies if qemu were to run oob commands oob by
> > default though.

After all oob somehow breaks existing rules or sync execution.  I
thought the more important goal was at least to keep the legacy
behaviors when adding new things, no?

> > Clients shouldn't care about how/where a command is
> > being queued or not. If they send a command, they want it processed as
> > quickly as possible. However, it can be interesting to know if the
> > implementation of the command will be able to deliver oob, so that
> > data in the introspection could be useful.
> > 
> > I would rather propose a client/server capability in qmp_capabilities,
> > call it "oob":
> > 
> > This capability indicates oob commands support.
> 
> The problem is indicating which commands support oob as opposed to
> indicating whether oob is present at all.  Future versions will
> probably make more commands oob-able and a client will want to know
> whether it can rely on a particular command being non-blocking.

Yes.

And IMHO we don't urgently need that "whether the server globally
supports oob" thing.  The client can just learn that from
query-qmp-schema already - there will always be the new "allow-oob"
field for command-typed entries.  IMHO that's a solid hint.

But I don't object to returning it as well in qmp_capabilities.
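
E.g., the corresponding command entry in the query-qmp-schema reply
would then look roughly like this (the type names below are made-up
placeholders for the generated ones in the real output):

  {"name": "migrate-incoming", "meta-type": "command",
   "arg-type": "86", "ret-type": "0", "allow-oob": true}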

> 
> > An oob command is a regular client message request with the "id"
> > member mandatory, but the reply may be delivered
> > out of order by the server if the client supports
> > it too.
> > 
> > If both the server and the client have the "oob" capability, the
> > server can handle new client requests while previous requests are being
> > processed.
> > 
> > If the client doesn't have the "oob" capability, it may still call
> > an oob command, and make multiple outstanding calls. In this case,
> > the commands are processed in order, so the replies will also be in
> > order. The "id" member isn't mandatory in this case.
> > 
> > The client should match the replies with the "id" member associated
> > with the requests.
> > 
> > When a client is disconnected, the pending commands are not
> > necessarily cancelled. But the future clients will not get replies from
> > commands they didn't make (they might, however, receive side-effects
> > events).
> 
> What's the behaviour on the current monitor?

Yeah I want to ask the same question, along with questioning
about [1] above.

IMHO this series will not change these behaviors, so they will be the
same before/after this series. E.g., when the client drops right
after the command is executed, I think we will still execute the
command, though we should encounter something odd in
monitor_json_emitter() somewhere when we want to respond.  And the
same will happen after this series.

> 
> 
> > Note that without "oob" support, a client may still receive
> >  messages (or events) from the server between the time a
> > request is handled by the server and the reply is received. It must
> > thus be prepared to handle dispatching both events and replies after
> > sending a request.
> > 
> > 
> > (see also https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg03641.html)
> > 
> > 
> > > The last patch enables this for "migrate-incoming" command.
> > >
> > > Please review.  Thanks.
> > >
> > > [1] https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> > >
> > > Peter Xu (15):
> > >   char-io: fix possible race on IOWatchPoll
> > >   qobject: allow NULL for qstring_get_str()
> > >   qobject: introduce qobject_to_str()
> > >   monitor: move skip_flush into monitor_data_init
> > >   qjson: add "opaque" field to JSONMessageParser
> > >   monitor: move the cur_mon hack deeper for QMP
> > >   monitor: unify global init
> > >   monitor: create IO thread
> > >   monitor: allow to use IO thread for parsing
> > >   monitor: introduce monitor_qmp_respond()
> > >   monitor: separate QMP parser and dispatcher
> > 
> > There should be a limit on the number of requests the thread can
> > queue. Before the patch, the limit was enforced by system socket
> > buffering I think. Now, should oob commands still be processed even if
> > the queue is full? If so, the thread can't be suspended.
> 
> I think the previous discussion was expecting a pair of queues
> per client and perhaps a pair of central queues; each pair being
> for normal commands and oob commands.
> (I'm not expecting these queues to be deep; IMHO '1' is the
> right size for this type of queue in both cases).

Yes.  One thing to mention is that, as the graph above shows, I
didn't really introduce two queues (input/output), but only one input
queue.  The response is still handled in the dispatcher for now, since
I think that's quite enough at least for OOB, and I don't see much
benefit now in splitting out that 2nd queue.

So I am thinking whether we can just quickly respond with an event
when the queue is full, as I proposed in the other reply.
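
Something along these lines, say (the event name and fields here are
made up for illustration, nothing this series defines yet):

  {"event": "COMMAND_DROPPED",
   "data": {"id": "libvirt-42", "reason": "queue-full"},
   "timestamp": {"seconds": 1505375436, "microseconds": 0}}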

Regarding queue size: I am afraid max_size=1 may not suffice?
Otherwise a simple batch of:

{"execute": "query-status"} {"execute": "query-status"}

will trigger the failure.  But I definitely agree it should not be
something very large.  The total memory will be this:

  json limit * queue length limit * monitor count limit
      (X)            (Y)                    (Z)

Now we have (X) already (in the form of a few tunables for JSON token
counts, etc.), we don't have (Z), and we definitely need (Y).

How about we add limits of Y=16 and Z=8?

We can do some math if we want a more exact number though.
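
For example, under the assumption of roughly 64KB per queued request
for (X), the worst case would be about:

  64KB (X) * 16 (Y) * 8 (Z) = 8MB

which seems acceptable.  The 64KB figure is only an assumption for
the sake of the math.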

> 
> Dave
> 
> > >   monitor: enable IO thread for (qmp & !mux) typed
> > >   qapi: introduce new cmd option "allow-oob"
> > >   qmp: support out-of-band (oob) execution
> > >   qmp: let migrate-incoming allow out-of-band
> > >
> > >  chardev/char-io.c                |  15 ++-
> > >  docs/devel/qapi-code-gen.txt     |  51 ++++++-
> > >  include/monitor/monitor.h        |   2 +-
> > >  include/qapi/qmp/dispatch.h      |   2 +
> > >  include/qapi/qmp/json-streamer.h |   8 +-
> > >  include/qapi/qmp/qstring.h       |   1 +
> > >  monitor.c                        | 283 +++++++++++++++++++++++++++++++--------
> > >  qapi/introspect.json             |   6 +-
> > >  qapi/migration.json              |   3 +-
> > >  qapi/qmp-dispatch.c              |  34 +++++
> > >  qga/main.c                       |   5 +-
> > >  qobject/json-streamer.c          |   7 +-
> > >  qobject/qjson.c                  |   5 +-
> > >  qobject/qstring.c                |  13 +-
> > >  scripts/qapi-commands.py         |  19 ++-
> > >  scripts/qapi-introspect.py       |  10 +-
> > >  scripts/qapi.py                  |  15 ++-
> > >  scripts/qapi2texi.py             |   2 +-
> > >  tests/libqtest.c                 |   5 +-
> > >  tests/qapi-schema/test-qapi.py   |   2 +-
> > >  trace-events                     |   2 +
> > >  vl.c                             |   3 +-
> > >  22 files changed, 398 insertions(+), 95 deletions(-)
> > >
> > > --
> > > 2.7.4
> > >
> > 
> > 
> > 
> > -- 
> > Marc-André Lureau
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

-- 
Peter Xu

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Marc-André Lureau 6 years, 7 months ago
Hi

On Thu, Sep 14, 2017 at 9:46 PM, Peter Xu <peterx@redhat.com> wrote:
> On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert wrote:
>> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
>> > Hi
>> >
>> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
>> > > This series was born from this one:
>> > >
>> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
>> > >
>> > > The design comes from Markus, and also the whole-bunch-of discussions
>> > > in previous thread.  My heartful thanks to Markus, Daniel, Dave,
>> > > Stefan, etc. on discussing the topic (...again!), providing shiny
>> > > ideas and suggestions.  Finally we got such a solution that seems to
>> > > satisfy everyone.
>> > >
>> > > I re-started the versioning since this series is totally different
>> > > from previous one.  Now it's version 1.
>> > >
>> > > In case new reviewers come along the way without reading previous
>> > > discussions, I will try to do a summary on what this is all about.
>> > >
>> > > What is OOB execution?
>> > > ======================
>> > >
>> > > It's the shortcut of Out-Of-Band execution, its name is given by
>> > > Markus.  It's a way to quickly execute a QMP request.  Say, originally
>> > > QMP is going throw these steps:
>> > >
>> > >       JSON Parser --> QMP Dispatcher --> Respond
>> > >           /|\    (2)                (3)     |
>> > >        (1) |                               \|/ (4)
>> > >            +---------  main thread  --------+
>> > >
>> > > The requests are executed by the so-called QMP-dispatcher after the
>> > > JSON is parsed.  If OOB is on, we run the command directly in the
>> > > parser and quickly returns.
>> >
>> > All commands should have the "id" field mandatory in this case, else
>> > the client will not distinguish the replies coming from the last/oob
>> > and the previous commands.
>> >
>> > This should probably be enforced upfront by client capability checks,
>> > more below.
>
> Hmm yes, since the oob commands are actually running in an async way,
> a request ID should be needed here.  However I'm not sure whether
> enabling the whole "request ID" thing is too big for this "try to be
> small" oob change... And IMHO it suits better as part of the whole
> async work (no matter which implementation we'll use).
>
> How about this: we make "id" mandatory for "run-oob" requests only.
> Oob commands will then always have an ID, so there is no ordering
> issue and we can do them async; for the rest of the non-oob commands,
> we still allow them to go without an ID, and since they are not oob,
> they'll always be done in order as well.  Would this work?

This mixed-mode is imho more complicated to deal with than having the
protocol enforced one way or the other, but that should work.

>
>> >
>> > > Yeah I know in current code the parser calls dispatcher directly
>> > > (please see handle_qmp_command()).  However it's not true again after
>> > > this series (parser will has its own IO thread, and dispatcher will
>> > > still be run in main thread).  So this OOB does brings something
>> > > different.
>> > >
>> > > There are more details on why OOB and the difference/relationship
>> > > between OOB, async QMP, block/general jobs, etc.. but IMHO that's
>> > > slightly out of topic (and believe me, it's not easy for me to
>> > > summarize that).  For more information, please refers to [1].
>> > >
>> > > Summary ends here.
>> > >
>> > > Some Implementation Details
>> > > ===========================
>> > >
>> > > Again, I mentioned that the old QMP workflow is this:
>> > >
>> > >       JSON Parser --> QMP Dispatcher --> Respond
>> > >           /|\    (2)                (3)     |
>> > >        (1) |                               \|/ (4)
>> > >            +---------  main thread  --------+
>> > >
>> > > What this series does is, firstly:
>> > >
>> > >       JSON Parser     QMP Dispatcher --> Respond
>> > >           /|\ |           /|\       (4)     |
>> > >            |  | (2)        | (3)            |  (5)
>> > >        (1) |  +----->      |               \|/
>> > >            +---------  main thread  <-------+
>> > >
>> > > And further:
>> > >
>> > >                queue/kick
>> > >      JSON Parser ======> QMP Dispatcher --> Respond
>> > >          /|\ |     (3)       /|\        (4)    |
>> > >       (1) |  | (2)            |                |  (5)
>> > >           | \|/               |               \|/
>> > >         IO thread         main thread  <-------+
>> >
>> > Is the queue per monitor or per client?
>
> The queue is currently global. I think yes, we could at least do it
> per monitor, but I am not sure whether that is urgent or can be
> postponed.  After all, QMPRequest (please refer to patch 11) is now
> defined as a (mon, id, req) tuple, so at least the "id" namespace is
> per-monitor.
>
>> > And is the dispatching going
>> > to be processed even if the client is disconnected, and are new
>> > clients going to receive the replies from previous clients'
>> > commands?
>
> [1]
>
> (will discuss together below)
>
>> > I
>> > believe there should be a per-client context, so there won't be "id"
>> > request conflicts.
>
> I'd say I am not familiar with this "client" idea, since after all
> IMHO one monitor is currently designed to mostly work with a single
> client. Say, unix sockets, telnet: all these backends are only
> single-channeled, and one monitor instance can only work with one
> client at a time.  Then do we really need to add this client layer on
> top of it?  IMHO the user can just provide more monitors if they want
> more clients (and at least these clients should know of the existence
> of the others, or there might be problems; otherwise user2 will fail
> a migration, only to finally notice that user1 has already triggered
> one), and the user should manage them well.

qemu should support a management layer / libvirt restart/reconnect.
Afaik, it mostly works today. There might be cases where libvirt can
be confused if it receives a reply from a previous connection's
command, but due to the sync processing of the chardev, I am not sure
you can get into this situation.  By adding "oob" commands and
queuing, the client will have to remember which was the last "id"
used, or it will create more conflicts after a reconnect.

Imho we should introduce the client/connection concept to avoid this
confusion (unexpected reply & per-client id space).

>
>> >
>> > >
>> > > Then it introduced the "allow-oob" parameter in QAPI schema to define
>> > > commands, and "run-oob" flag to let oob-allowed command to run in the
>> > > parser.
>> >
>> > From a protocol point of view, I find that "run-oob" distinction per
>> > command a bit pointless. It helps with legacy clients that wouldn't
>> > expect out-of-order replies if qemu were to run oob commands oob by
>> > default though.
>
> After all oob somehow breaks existing rules or sync execution.  I
> thought the more important goal was at least to keep the legacy
> behaviors when adding new things, no?

Of course we have to keep compatibility. What do you mean by "oob
somehow breaks existing rules or sync execution"? oob means queuing
and unordered reply support, so clearly this is breaking the current
"mostly ordered" behaviour (mostly because events may still come at
any time..., and the reconnect issue discussed above).

>> > Clients shouldn't care about how/where a command is
>> > being queued or not. If they send a command, they want it processed as
>> > quickly as possible. However, it can be interesting to know if the
>> > implementation of the command will be able to deliver oob, so that
>> > data in the introspection could be useful.
>> >
>> > I would rather propose a client/server capability in qmp_capabilities,
>> > call it "oob":
>> >
>> > This capability indicates oob commands support.
>>
>> The problem is indicating which commands support oob as opposed to
>> indicating whether oob is present at all.  Future versions will
>> probably make more commands oob-able and a client will want to know
>> whether it can rely on a particular command being non-blocking.
>
> Yes.
>
> And IMHO we don't urgently need that "whether the server globally
> supports oob" thing.  The client can just learn that from
> query-qmp-schema already - there will always be the new "allow-oob"
> field for command-typed entries.  IMHO that's a solid hint.
>
> But I don't object to returning it as well in qmp_capabilities.

Does it feel right that the client can specify how the commands are
processed / queued?  Isn't it preferable to leave that to the server
to decide? Why would a client specify that? And should the server be
expected to behave differently? What the client needs to be able to
do is match the unordered replies, and that can be stated during cap
negotiation / qmp_capabilities. The server is expected to make a best
effort to handle commands and their priorities. If the client needs
several command queues, it is simpler to open several connections
rather than trying to fit that weird priority logic into the protocol
imho.
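
For instance, something along these lines during the negotiation (the
exact shape of the arguments is only a sketch):

  C: {"execute": "qmp_capabilities", "arguments": {"enable": ["oob"]}}
  S: {"return": {}}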

>
>>
>> > An oob command is a regular client message request with the "id"
>> > member mandatory, but the reply may be delivered
>> > out of order by the server if the client supports
>> > it too.
>> >
>> > If both the server and the client have the "oob" capability, the
>> > server can handle new client requests while previous requests are being
>> > processed.
>> >
>> > If the client doesn't have the "oob" capability, it may still call
>> > an oob command, and make multiple outstanding calls. In this case,
>> > the commands are processed in order, so the replies will also be in
>> > order. The "id" member isn't mandatory in this case.
>> >
>> > The client should match the replies with the "id" member associated
>> > with the requests.
>> >
>> > When a client is disconnected, the pending commands are not
>> > necessarily cancelled. But the future clients will not get replies from
>> > commands they didn't make (they might, however, receive side-effects
>> > events).
>>
>> What's the behaviour on the current monitor?
>
> Yeah I want to ask the same question, along with questioning
> about [1] above.
>
> IMHO this series will not change these behaviors, so they will be the
> same before/after this series. E.g., when the client drops right
> after the command is executed, I think we will still execute the
> command, though we should encounter something odd in
> monitor_json_emitter() somewhere when we want to respond.  And the
> same will happen after this series.

I think it can get worse after your series, because you queue the
commands, so clearly a new client can get replies from an old
client's commands. As said above, I am not convinced you can get into
that situation with the current code.

>
>>
>>
>> > Note that without "oob" support, a client may still receive
>> >  messages (or events) from the server between the time a
>> > request is handled by the server and the reply is received. It must
>> > thus be prepared to handle dispatching both events and replies after
>> > sending a request.
>> >
>> >
>> > (see also https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg03641.html)
>> >
>> >
>> > > The last patch enables this for "migrate-incoming" command.
>> > >
>> > > Please review.  Thanks.
>> > >
>> > > [1] https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
>> > >
>> > > Peter Xu (15):
>> > >   char-io: fix possible race on IOWatchPoll
>> > >   qobject: allow NULL for qstring_get_str()
>> > >   qobject: introduce qobject_to_str()
>> > >   monitor: move skip_flush into monitor_data_init
>> > >   qjson: add "opaque" field to JSONMessageParser
>> > >   monitor: move the cur_mon hack deeper for QMP
>> > >   monitor: unify global init
>> > >   monitor: create IO thread
>> > >   monitor: allow to use IO thread for parsing
>> > >   monitor: introduce monitor_qmp_respond()
>> > >   monitor: separate QMP parser and dispatcher
>> >
>> > There should be a limit on the number of requests the thread can
>> > queue. Before the patch, the limit was enforced by system socket
>> > buffering I think. Now, should oob commands still be processed even if
>> > the queue is full? If so, the thread can't be suspended.
>>
>> I think the previous discussion was expecting a pair of queues
>> per client and perhaps a pair of central queues; each pair being
>> for normal commands and oob commands.
>> (I'm not expecting these queues to be deep; IMHO '1' is the
>> right size for this type of queue in both cases).
>
> Yes.  One thing to mention is that, as the graph above shows, I
> didn't really introduce two queues (input/output), but only one input
> queue.  The response is still handled in the dispatcher for now, since
> I think that's quite enough at least for OOB, and I don't see much
> benefit now in splitting out that 2nd queue.
>
> So I am thinking whether we can just quickly respond with an event
> when the queue is full, as I proposed in the other reply.
>
> Regarding queue size: I am afraid max_size=1 may not suffice?
> Otherwise a simple batch of:
>
> {"execute": "query-status"} {"execute": "query-status"}
>
> will trigger the failure.  But I definitely agree it should not be
> something very large.  The total memory will be this:
>
>   json limit * queue length limit * monitor count limit
>       (X)            (Y)                    (Z)
>
> Now we have (X) already (in the form of a few tunables for JSON token
> counts, etc.), we don't have (Z), and we definitely need (Y).
>
> How about we add limits of Y=16 and Z=8?
>
> We can do some math if we want a more exact number though.
>
>>
>> Dave
>>
>> > >   monitor: enable IO thread for (qmp & !mux) typed
>> > >   qapi: introduce new cmd option "allow-oob"
>> > >   qmp: support out-of-band (oob) execution
>> > >   qmp: let migrate-incoming allow out-of-band
>> > >
>> > >  chardev/char-io.c                |  15 ++-
>> > >  docs/devel/qapi-code-gen.txt     |  51 ++++++-
>> > >  include/monitor/monitor.h        |   2 +-
>> > >  include/qapi/qmp/dispatch.h      |   2 +
>> > >  include/qapi/qmp/json-streamer.h |   8 +-
>> > >  include/qapi/qmp/qstring.h       |   1 +
>> > >  monitor.c                        | 283 +++++++++++++++++++++++++++++++--------
>> > >  qapi/introspect.json             |   6 +-
>> > >  qapi/migration.json              |   3 +-
>> > >  qapi/qmp-dispatch.c              |  34 +++++
>> > >  qga/main.c                       |   5 +-
>> > >  qobject/json-streamer.c          |   7 +-
>> > >  qobject/qjson.c                  |   5 +-
>> > >  qobject/qstring.c                |  13 +-
>> > >  scripts/qapi-commands.py         |  19 ++-
>> > >  scripts/qapi-introspect.py       |  10 +-
>> > >  scripts/qapi.py                  |  15 ++-
>> > >  scripts/qapi2texi.py             |   2 +-
>> > >  tests/libqtest.c                 |   5 +-
>> > >  tests/qapi-schema/test-qapi.py   |   2 +-
>> > >  trace-events                     |   2 +
>> > >  vl.c                             |   3 +-
>> > >  22 files changed, 398 insertions(+), 95 deletions(-)
>> > >
>> > > --
>> > > 2.7.4
>> > >
>> >
>> >
>> >
>> > --
>> > Marc-André Lureau
>> --
>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
> --
> Peter Xu



-- 
Marc-André Lureau

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 7 months ago
On Fri, Sep 15, 2017 at 01:14:47PM +0200, Marc-André Lureau wrote:
> Hi
> 
> On Thu, Sep 14, 2017 at 9:46 PM, Peter Xu <peterx@redhat.com> wrote:
> > On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert wrote:
> >> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> >> > Hi
> >> >
> >> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
> >> > > This series was born from this one:
> >> > >
> >> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> >> > >
> >> > > The design comes from Markus, and also the whole-bunch-of discussions
> >> > > in previous thread.  My heartful thanks to Markus, Daniel, Dave,
> >> > > Stefan, etc. on discussing the topic (...again!), providing shiny
> >> > > ideas and suggestions.  Finally we got such a solution that seems to
> >> > > satisfy everyone.
> >> > >
> >> > > I re-started the versioning since this series is totally different
> >> > > from previous one.  Now it's version 1.
> >> > >
> >> > > In case new reviewers come along the way without reading previous
> >> > > discussions, I will try to do a summary on what this is all about.
> >> > >
> >> > > What is OOB execution?
> >> > > ======================
> >> > >
> >> > > It's the shortcut of Out-Of-Band execution, its name is given by
> >> > > Markus.  It's a way to quickly execute a QMP request.  Say, originally
> >> > > QMP is going throw these steps:
> >> > >
> >> > >       JSON Parser --> QMP Dispatcher --> Respond
> >> > >           /|\    (2)                (3)     |
> >> > >        (1) |                               \|/ (4)
> >> > >            +---------  main thread  --------+
> >> > >
> >> > > The requests are executed by the so-called QMP-dispatcher after the
> >> > > JSON is parsed.  If OOB is on, we run the command directly in the
> >> > > parser and quickly returns.
> >> >
> >> > All commands should have the "id" field mandatory in this case, else
> >> > the client will not distinguish the replies coming from the last/oob
> >> > and the previous commands.
> >> >
> >> > This should probably be enforced upfront by client capability checks,
> >> > more below.
> >
> > Hmm yes, since the oob commands are actually running in an async way,
> > a request ID should be needed here.  However I'm not sure whether
> > enabling the whole "request ID" thing is too big for this "try to be
> > small" oob change... And IMHO it suits better as part of the whole
> > async work (no matter which implementation we'll use).
> >
> > How about this: we make "id" mandatory for "run-oob" requests only.
> > Oob commands will then always have an ID, so there is no ordering
> > issue and we can do them async; for the rest of the non-oob commands,
> > we still allow them to go without an ID, and since they are not oob,
> > they'll always be done in order as well.  Would this work?
> 
> This mixed-mode is imho more complicated to deal with than having the
> protocol enforced one way or the other, but that should work.
> 
> >
> >> >
> >> > > Yeah I know in current code the parser calls dispatcher directly
> >> > > (please see handle_qmp_command()).  However it's not true again after
> >> > > this series (parser will has its own IO thread, and dispatcher will
> >> > > still be run in main thread).  So this OOB does brings something
> >> > > different.
> >> > >
> >> > > There are more details on why OOB and the difference/relationship
> >> > > between OOB, async QMP, block/general jobs, etc.. but IMHO that's
> >> > > slightly out of topic (and believe me, it's not easy for me to
> >> > > summarize that).  For more information, please refers to [1].
> >> > >
> >> > > Summary ends here.
> >> > >
> >> > > Some Implementation Details
> >> > > ===========================
> >> > >
> >> > > Again, I mentioned that the old QMP workflow is this:
> >> > >
> >> > >       JSON Parser --> QMP Dispatcher --> Respond
> >> > >           /|\    (2)                (3)     |
> >> > >        (1) |                               \|/ (4)
> >> > >            +---------  main thread  --------+
> >> > >
> >> > > What this series does is, firstly:
> >> > >
> >> > >       JSON Parser     QMP Dispatcher --> Respond
> >> > >           /|\ |           /|\       (4)     |
> >> > >            |  | (2)        | (3)            |  (5)
> >> > >        (1) |  +----->      |               \|/
> >> > >            +---------  main thread  <-------+
> >> > >
> >> > > And further:
> >> > >
> >> > >                queue/kick
> >> > >      JSON Parser ======> QMP Dispatcher --> Respond
> >> > >          /|\ |     (3)       /|\        (4)    |
> >> > >       (1) |  | (2)            |                |  (5)
> >> > >           | \|/               |               \|/
> >> > >         IO thread         main thread  <-------+
> >> >
> >> > Is the queue per monitor or per client?
> >
> > The queue is currently global. I think yes, we could at least do it
> > per monitor, but I am not sure whether that is urgent or can be
> > postponed.  After all, QMPRequest (please refer to patch 11) is now
> > defined as a (mon, id, req) tuple, so at least the "id" namespace is
> > per-monitor.
> >
> >> > And is the dispatching going
> >> > to be processed even if the client is disconnected, and are new
> >> > clients going to receive the replies from previous clients'
> >> > commands?
> >
> > [1]
> >
> > (will discuss together below)
> >
> >> > I
> >> > believe there should be a per-client context, so there won't be "id"
> >> > request conflicts.
> >
> > I'd say I am not familiar with this "client" idea, since after all
> > IMHO one monitor is currently designed to mostly work with a single
> > client. Say, unix sockets, telnet: all these backends are only
> > single-channeled, and one monitor instance can only work with one
> > client at a time.  Then do we really need to add this client layer on
> > top of it?  IMHO the user can just provide more monitors if they want
> > more clients (and at least these clients should know of the existence
> > of the others, or there might be problems; otherwise user2 will fail
> > a migration, only to finally notice that user1 has already triggered
> > one), and the user should manage them well.
> 
> qemu should support a management layer / libvirt restart/reconnect.
> Afaik, it mostly works today. There might be cases where libvirt can
> be confused if it receives a reply from a previous connection's
> command, but due to the sync processing of the chardev, I am not sure
> you can get into this situation.  By adding "oob" commands and
> queuing, the client will have to remember which was the last "id"
> used, or it will create more conflicts after a reconnect.
>
> Imho we should introduce the client/connection concept to avoid this
> confusion (unexpected reply & per-client id space).

Hmm I agree that the reconnect feature would be nice, but if so IMHO
instead of throwing responses away when a client disconnects, we
should really keep them, and when the client reconnects, we queue the
responses again.

I think we have other quite simple ways to solve the "unexpected
reply" and "per-client-id duplication" issues you have mentioned.

Firstly, when a client gets unexpected replies ("id" field not in its
own request queue), the client should just ignore that reply, which
seems natural to me.

Then, if a client disconnects and reconnects, it should not have the
problem of generating duplicated ids for requests, since it should
know what requests it has sent already.  The simplest scheme I can
think of is that the ID contains the following tuple:

  (client name, client unique ID, request ID)

Here "client name" can be something like "libvirt", which is the name
of the client application;

"client unique ID" can be anything generated when the client starts;
it identifies a single client session, maybe a UUID.

"request ID" can be an unsigned integer that starts from zero and
increases each time the client sends one request.

I believe current libvirt is using "client name" + "request ID".
That's something similar (after all I think we don't normally have >1
libvirt managing a single QEMU, so it should be good enough).

Then even if the client disconnects and reconnects, the request ID
won't be lost, and no duplication would happen IMHO.
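
Since QMP allows "id" to be an arbitrary JSON value, the tuple could
even be sent in a structured form, e.g. (all values made up):

  {"execute": "query-status",
   "id": {"client": "libvirt",
          "session": "57a1f1f4-8f2d-4a09-9c23-1a5e8c3b6d70",
          "seq": 42}}

Whether it is a structured value like this or a flat string
concatenation is up to the client, of course.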

> 
> >
> >> >
> >> > >
> >> > > Then it introduced the "allow-oob" parameter in QAPI schema to define
> >> > > commands, and "run-oob" flag to let oob-allowed command to run in the
> >> > > parser.
> >> >
> >> > From a protocol point of view, I find that "run-oob" distinction per
> >> > command a bit pointless. It helps with legacy clients that wouldn't
> >> > expect out-of-order replies if qemu were to run oob commands oob by
> >> > default though.
> >
> > After all oob somehow breaks existing rules or sync execution.  I
> > thought the more important goal was at least to keep the legacy
> > behaviors when adding new things, no?
> 
> Of course we have to keep compatibility. What do you mean by "oob
> somehow breaks existing rules or sync execution"? oob means queuing
> and unordered reply support, so clearly this is breaking the current
> "mostly ordered" behaviour (mostly because events may still come at
> any time..., and the reconnect issue discussed above).

Yes.  That's what I mean, it breaks the synchronous semantics.  But
I should definitely not call it a "break" though, since old clients
will work perfectly fine with it.  Sorry for the bad wording.

> 
> >> > Clients shouldn't care about how/where a command is
> >> > being queued or not. If they send a command, they want it processed as
> >> > quickly as possible. However, it can be interesting to know if the
> >> > implementation of the command will be able to deliver oob, so that
> >> > data in the introspection could be useful.
> >> >
> >> > I would rather propose a client/server capability in qmp_capabilities,
> >> > call it "oob":
> >> >
> >> > This capability indicates oob commands support.
> >>
> >> The problem is indicating which commands support oob as opposed to
> >> indicating whether oob is present at all.  Future versions will
> >> probably make more commands oob-able and a client will want to know
> >> whether it can rely on a particular command being non-blocking.
> >
> > Yes.
> >
> > And IMHO we don't urgently need that "whether the server globally
> > supports oob" thing.  The client can just learn that from
> > query-qmp-schema already - there will always be the new "allow-oob"
> > field for command-typed entries.  IMHO that's a solid hint.
> >
> > But I don't object to returning it as well in qmp_capabilities.
> 
> Does it feel right that the client can specify how the commands are
> processed / queued?  Isn't it preferable to leave that to the server
> to decide? Why would a client specify that? And should the server be
> expected to behave differently? What the client needs to be able to
> do is match the unordered replies, and that can be stated during cap
> negotiation / qmp_capabilities. The server is expected to make a best
> effort to handle commands and their priorities. If the client needs
> several command queues, it is simpler to open several connections
> rather than trying to fit that weird priority logic into the protocol
> imho.

Sorry I may have missed the point here.  We were discussing a
global hint for "oob" support, am I right?  Then, could I ask what's
the "weird priority logic" you mentioned?

> 
> >
> >>
> >> > An oob command is a regular client message request with the "id"
> >> > member mandatory, but the reply may be delivered
> >> > out of order by the server if the client supports
> >> > it too.
> >> >
> >> > If both the server and the client have the "oob" capability, the
> >> > server can handle new client requests while previous requests are being
> >> > processed.
> >> >
> >> > If the client doesn't have the "oob" capability, it may still call
> >> > an oob command, and make multiple outstanding calls. In this case,
> >> > the commands are processed in order, so the replies will also be in
> >> > order. The "id" member isn't mandatory in this case.
> >> >
> >> > The client should match the replies with the "id" member associated
> >> > with the requests.
> >> >
> >> > When a client is disconnected, the pending commands are not
> >> > necessarily cancelled. But the future clients will not get replies from
> >> > commands they didn't make (they might, however, receive side-effects
> >> > events).
> >>
> >> What's the behaviour on the current monitor?
> >
> > Yeah I want to ask the same question, along with questioning
> > about [1] above.
> >
> > IMHO this series will not change these behaviors, so they will be the
> > same before/after this series. E.g., when the client drops right
> > after the command is executed, I think we will still execute the
> > command, though we should encounter something odd in
> > monitor_json_emitter() somewhere when we want to respond.  And the
> > same will happen after this series.
> 
> I think it can get worse after your series, because you queue the
> commands, so clearly a new client can get replies from an old
> client's commands. As said above, I am not convinced you can get into
> that situation with the current code.

Hmm, seems so.  But would this be a big problem?

I really think the new client should just throw that response away if
it does not really know that response (from peeking at the "id"
field), just as in my opinion above.

Thanks,

-- 
Peter Xu

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Marc-André Lureau 6 years, 7 months ago
Hi

On Mon, Sep 18, 2017 at 10:37 AM, Peter Xu <peterx@redhat.com> wrote:
> On Fri, Sep 15, 2017 at 01:14:47PM +0200, Marc-André Lureau wrote:
>> Hi
>>
>> On Thu, Sep 14, 2017 at 9:46 PM, Peter Xu <peterx@redhat.com> wrote:
>> > On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert wrote:
>> >> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
>> >> > Hi
>> >> >
>> >> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
>> >> > > This series was born from this one:
>> >> > >
>> >> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
>> >> > >
>> >> > > The design comes from Markus, and also the whole-bunch-of discussions
>> >> > > in previous thread.  My heartful thanks to Markus, Daniel, Dave,
>> >> > > Stefan, etc. on discussing the topic (...again!), providing shiny
>> >> > > ideas and suggestions.  Finally we got such a solution that seems to
>> >> > > satisfy everyone.
>> >> > >
>> >> > > I re-started the versioning since this series is totally different
>> >> > > from previous one.  Now it's version 1.
>> >> > >
>> >> > > In case new reviewers come along the way without reading previous
>> >> > > discussions, I will try to do a summary on what this is all about.
>> >> > >
>> >> > > What is OOB execution?
>> >> > > ======================
>> >> > >
>> >> > > It's the shortcut of Out-Of-Band execution, its name is given by
>> >> > > Markus.  It's a way to quickly execute a QMP request.  Say, originally
>> >> > > QMP is going throw these steps:
>> >> > >
>> >> > >       JSON Parser --> QMP Dispatcher --> Respond
>> >> > >           /|\    (2)                (3)     |
>> >> > >        (1) |                               \|/ (4)
>> >> > >            +---------  main thread  --------+
>> >> > >
>> >> > > The requests are executed by the so-called QMP-dispatcher after the
>> >> > > JSON is parsed.  If OOB is on, we run the command directly in the
>> >> > > parser and quickly returns.
>> >> >
>> >> > All commands should have the "id" field mandatory in this case, else
>> >> > the client will not distinguish the replies coming from the last/oob
>> >> > and the previous commands.
>> >> >
>> >> > This should probably be enforced upfront by client capability checks,
>> >> > more below.
>> >
>> > Hmm yes, since the oob commands are actually running in an async way,
>> > a request ID should be needed here.  However I'm not sure whether
>> > enabling the whole "request ID" thing is too big for this "try to be
>> > small" oob change... And IMHO it suits better as part of the whole
>> > async work (no matter which implementation we'll use).
>> >
>> > How about this: we make "id" mandatory for "run-oob" requests only.
>> > Oob commands will then always have an ID, so there is no ordering
>> > issue and we can do them async; for the rest of the non-oob commands,
>> > we still allow them to go without an ID, and since they are not oob,
>> > they'll always be done in order as well.  Would this work?
>>
>> This mixed-mode is imho more complicated to deal with than having the
>> protocol enforced one way or the other, but that should work.
>>
>> >
>> >> >
>> >> > > Yeah I know in current code the parser calls dispatcher directly
>> >> > > (please see handle_qmp_command()).  However it's not true again after
>> >> > > this series (parser will has its own IO thread, and dispatcher will
>> >> > > still be run in main thread).  So this OOB does brings something
>> >> > > different.
>> >> > >
>> >> > > There are more details on why OOB and the difference/relationship
>> >> > > between OOB, async QMP, block/general jobs, etc.. but IMHO that's
>> >> > > slightly out of topic (and believe me, it's not easy for me to
>> >> > > summarize that).  For more information, please refers to [1].
>> >> > >
>> >> > > Summary ends here.
>> >> > >
>> >> > > Some Implementation Details
>> >> > > ===========================
>> >> > >
>> >> > > Again, I mentioned that the old QMP workflow is this:
>> >> > >
>> >> > >       JSON Parser --> QMP Dispatcher --> Respond
>> >> > >           /|\    (2)                (3)     |
>> >> > >        (1) |                               \|/ (4)
>> >> > >            +---------  main thread  --------+
>> >> > >
>> >> > > What this series does is, firstly:
>> >> > >
>> >> > >       JSON Parser     QMP Dispatcher --> Respond
>> >> > >           /|\ |           /|\       (4)     |
>> >> > >            |  | (2)        | (3)            |  (5)
>> >> > >        (1) |  +----->      |               \|/
>> >> > >            +---------  main thread  <-------+
>> >> > >
>> >> > > And further:
>> >> > >
>> >> > >                queue/kick
>> >> > >      JSON Parser ======> QMP Dispatcher --> Respond
>> >> > >          /|\ |     (3)       /|\        (4)    |
>> >> > >       (1) |  | (2)            |                |  (5)
>> >> > >           | \|/               |               \|/
>> >> > >         IO thread         main thread  <-------+
>> >> >
>> >> > Is the queue per monitor or per client?
>> >
>> > The queue is currently global. I think yes, we could at least do it
>> > per monitor, but I am not sure whether that is urgent or can be
>> > postponed.  After all, QMPRequest (please refer to patch 11) is now
>> > defined as a (mon, id, req) tuple, so at least the "id" namespace is
>> > per-monitor.
>> >
>> >> > And is the dispatching going
>> >> > to be processed even if the client is disconnected, and are new
>> >> > clients going to receive the replies from previous clients'
>> >> > commands?
>> >
>> > [1]
>> >
>> > (will discuss together below)
>> >
>> >> > I
>> >> > believe there should be a per-client context, so there won't be "id"
>> >> > request conflicts.
>> >
>> > I'd say I am not familiar with this "client" idea, since after all
>> > IMHO one monitor is currently designed to mostly work with a single
>> > client. Say, unix sockets, telnet: all these backends are only
>> > single-channeled, and one monitor instance can only work with one
>> > client at a time.  Then do we really need to add this client layer on
>> > top of it?  IMHO the user can just provide more monitors if they want
>> > more clients (and at least these clients should know of the existence
>> > of the others, or there might be problems; otherwise user2 will fail
>> > a migration, only to finally notice that user1 has already triggered
>> > one), and the user should manage them well.
>>
>> qemu should support a management layer / libvirt restart/reconnect.
>> Afaik, it mostly works today. There might be cases where libvirt can
>> be confused if it receives a reply from a previous connection's
>> command, but due to the sync processing of the chardev, I am not sure
>> you can get into this situation.  By adding "oob" commands and
>> queuing, the client will have to remember which was the last "id"
>> used, or it will create more conflicts after a reconnect.
>>
>> Imho we should introduce the client/connection concept to avoid this
>> confusion (unexpected reply & per-client id space).
>
> Hmm I agree that the reconnect feature would be nice, but if so IMHO
> instead of throwing responses away when a client disconnects, we
> should really keep them, and when the client reconnects, we queue the
> responses again.
>
> I think we have other quite simple ways to solve the "unexpected
> reply" and "per-client-id duplication" issues you have mentioned.
>
> Firstly, when a client gets unexpected replies ("id" field not in its
> own request queue), the client should just ignore that reply, which
> seems natural to me.

The trouble is that it may legitimately use the same "id" value for
new requests. And I don't see a simple way to handle that without
races.
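
Concretely, something like this, with all values made up:

  old client:  C: {"execute": "stop", "id": 1}
               (disconnects before the reply is delivered)
  new client:  C: {"execute": "cont", "id": 1}
               S: {"return": {}, "id": 1}   <- reply to which request?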

>
> Then, if a client disconnects and reconnects, it should not have the
> problem of generating duplicated ids for requests, since it should
> know what requests it has sent already.  The simplest scheme I can
> think of is that the ID contains the following tuple:

If you assume the "same" client will recover its state, yes.

>
>   (client name, client unique ID, request ID)
>
> Here "client name" can be something like "libvirt", which is the name
> of client application;
>
> "client unique ID" can be anything generated when client starts, it
> identifies a single client session, maybe a UUID.
>
> "request ID" can be a unsigned integer starts from zero, and increases
> each time the client sends one request.

This is introducing session handling, and can be done on the server
side only, without changes in the protocol, I believe.

>
> I believe current libvirt is using "client name" + "request ID".
> That's something similar (after all I think we don't normally have >1
> libvirt managing a single QEMU, so it should be good enough).

I am not sure we should base our protocol usage assumptions on
libvirt only, but rather on what is possible today (like queuing
requests in the socket etc.).

> Then even if the client disconnects and reconnects, the request ID
> won't be lost, and no duplication would happen IMHO.
>
>>
>> >
>> >> >
>> >> > >
>> >> > > Then it introduced the "allow-oob" parameter in QAPI schema to define
>> >> > > commands, and "run-oob" flag to let oob-allowed command to run in the
>> >> > > parser.
>> >> >
>> >> > From a protocol point of view, I find that "run-oob" distinction per
>> >> > command a bit pointless. It helps with legacy clients that wouldn't
>> >> > expect out-of-order replies if qemu were to run oob commands oob by
>> >> > default though.
>> >
>> > After all oob somehow breaks existing rules or sync execution.  I
>> > thought the more important goal was at least to keep the legacy
>> > behaviors when adding new things, no?
>>
>> Of course we have to keep compatibility. What do you mean by "oob
>> somehow breaks existing rules or sync execution"? oob means queuing
>> and unordered reply support, so clearly this is breaking the current
>> "mostly ordered" behaviour (mostly because events may still come at
>> any time..., and the reconnect issue discussed above).
>
> Yes.  That's what I mean, it breaks the synchronous semantics.  But
> I should definitely not call it a "break" though, since old clients
> will work perfectly fine with it.  Sorry for the bad wording.
>
>>
>> >> > Clients shouldn't care about how/where a command is
>> >> > being queued or not. If they send a command, they want it processed as
>> >> > quickly as possible. However, it can be interesting to know if the
>> >> > implementation of the command will be able to deliver oob, so that
>> >> > data in the introspection could be useful.
>> >> >
>> >> > I would rather propose a client/server capability in qmp_capabilities,
>> >> > call it "oob":
>> >> >
>> >> > This capability indicates oob commands support.
>> >>
>> >> The problem is indicating which commands support oob as opposed to
>> >> indicating whether oob is present at all.  Future versions will
>> >> probably make more commands oob-able and a client will want to know
>> >> whether it can rely on a particular command being non-blocking.
>> >
>> > Yes.
>> >
>> > And IMHO we don't urgently need that "whether the server globally
>> > supports oob" thing.  The client can just learn that from
>> > query-qmp-schema already - there will always be the new "allow-oob"
>> > field for command-typed entries.  IMHO that's a solid hint.
>> >
>> > But I don't object to returning it as well in qmp_capabilities.
>>
>> Does it feel right that the client can specify how the commands are
>> processed / queued?  Isn't it preferable to leave that to the server
>> to decide? Why would a client specify that? And should the server be
>> expected to behave differently? What the client needs to be able to
>> do is match the unordered replies, and that can be stated during cap
>> negotiation / qmp_capabilities. The server is expected to make a best
>> effort to handle commands and their priorities. If the client needs
>> several command queues, it is simpler to open several connections
>> rather than trying to fit that weird priority logic into the protocol
>> imho.
>
> Sorry I may have missed the point here.  We were discussing a
> global hint for "oob" support, am I right?  Then, could I ask what's
> the "weird priority logic" you mentioned?

I call the per-message oob hint a kind of priority logic, since you
can make the same request without oob in the same session and in
parallel.

>>
>> >
>> >>
>> >> > An oob command is a regular client message request with the "id"
>> >> > member mandatory, but the reply may be delivered
>> >> > out of order by the server if the client supports
>> >> > it too.
>> >> >
>> >> > If both the server and the client have the "oob" capability, the
>> >> > server can handle new client requests while previous requests are being
>> >> > processed.
>> >> >
>> >> > If the client doesn't have the "oob" capability, it may still call
>> >> > an oob command, and make multiple outstanding calls. In this case,
>> >> > the commands are processed in order, so the replies will also be in
>> >> > order. The "id" member isn't mandatory in this case.
>> >> >
>> >> > The client should match the replies with the "id" member associated
>> >> > with the requests.
>> >> >
>> >> > When a client is disconnected, the pending commands are not
>> >> > necessarily cancelled. But the future clients will not get replies from
>> >> > commands they didn't make (they might, however, receive side-effects
>> >> > events).
>> >>
>> >> What's the behaviour on the current monitor?
>> >
>> > Yeah I want to ask the same question, along with questioning about
>> > above [1].
>> >
>> > IMHO this series will not change the behaviors of these, so IMHO the
>> > behaviors will be the same before/after this series. E.g., when client
>> > dropped right after the command is executed, I think we will still
>> > execute the command, though we should encounter something odd in
>> > monitor_json_emitter() somewhere when we want to respond.  And it will
>> > happen the same after this series.
>>
>> I think it can get worse after your series, because you queue the
>> commands, so clearly a new client can get replies from an old client
>> commands. As said above, I am not convinced you can get in that
>> situation with current code.
>
> Hmm, seems so.  But would this a big problem?
>
> I really think the new client should just throw that response away if
> it does not really know that response (from peeking at "id" field),
> just like my opinion above.

This is a high expectation.
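
Client-side, that expectation amounts to roughly the following sketch;
nothing in the series mandates it, and handle_event is a hypothetical
helper:

    import json

    def handle_event(msg):
        pass  # hypothetical: events carry no "id" and are out of scope

    def dispatch_reply(line, pending):
        # Match replies to outstanding requests by "id" and silently
        # drop anything unrecognized, e.g. a reply left over from a
        # previous connection's commands.  `pending` maps the ids
        # issued on *this* connection to their callbacks.
        msg = json.loads(line)
        if "event" in msg:
            handle_event(msg)
            return
        cb = pending.pop(msg.get("id"), None)
        if cb is not None:
            cb(msg)
        # else: stale or unknown id -- thrown away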


-- 
Marc-André Lureau

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> Hi
> 
> On Mon, Sep 18, 2017 at 10:37 AM, Peter Xu <peterx@redhat.com> wrote:
> > On Fri, Sep 15, 2017 at 01:14:47PM +0200, Marc-André Lureau wrote:
> >> Hi
> >>
> >> On Thu, Sep 14, 2017 at 9:46 PM, Peter Xu <peterx@redhat.com> wrote:
> >> > On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert wrote:
> >> >> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> >> >> > Hi
> >> >> >
> >> >> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
> >> >> > > This series was born from this one:
> >> >> > >
> >> >> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> >> >> > >
> >> >> > > The design comes from Markus, and also the whole-bunch-of discussions
> >> >> > > in previous thread.  My heartful thanks to Markus, Daniel, Dave,
> >> >> > > Stefan, etc. on discussing the topic (...again!), providing shiny
> >> >> > > ideas and suggestions.  Finally we got such a solution that seems to
> >> >> > > satisfy everyone.
> >> >> > >
> >> >> > > I re-started the versioning since this series is totally different
> >> >> > > from previous one.  Now it's version 1.
> >> >> > >
> >> >> > > In case new reviewers come along the way without reading previous
> >> >> > > discussions, I will try to do a summary on what this is all about.
> >> >> > >
> >> >> > > What is OOB execution?
> >> >> > > ======================
> >> >> > >
> >> >> > > It's the shortcut of Out-Of-Band execution, its name is given by
> >> >> > > Markus.  It's a way to quickly execute a QMP request.  Say, originally
> >> >> > > QMP is going throw these steps:
> >> >> > >
> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
> >> >> > >           /|\    (2)                (3)     |
> >> >> > >        (1) |                               \|/ (4)
> >> >> > >            +---------  main thread  --------+
> >> >> > >
> >> >> > > The requests are executed by the so-called QMP-dispatcher after the
> >> >> > > JSON is parsed.  If OOB is on, we run the command directly in the
> >> >> > > parser and quickly returns.
> >> >> >
> >> >> > All commands should have the "id" field mandatory in this case, else
> >> >> > the client will not distinguish the replies coming from the last/oob
> >> >> > and the previous commands.
> >> >> >
> >> >> > This should probably be enforced upfront by client capability checks,
> >> >> > more below.
> >> >
> >> > Hmm yes since the oob commands are actually running in async way,
> >> > request ID should be needed here.  However I'm not sure whether
> >> > enabling the whole "request ID" thing is too big for this "try to be
> >> > small" oob change... And IMHO it suites better to be part of the whole
> >> > async work (no matter which implementation we'll use).
> >> >
> >> > How about this: we make "id" mandatory for "run-oob" requests only.
> >> > For oob commands, they will always have ID then no ordering issue, and
> >> > we can do it async; for the rest of non-oob commands, we still allow
> >> > them to go without ID, and since they are not oob, they'll always be
> >> > done in order as well.  Would this work?
> >>
> >> This mixed-mode is imho more complicated to deal with than having the
> >> protocol enforced one way or the other, but that should work.
> >>
> >> >
> >> >> >
> >> >> > > Yeah I know in current code the parser calls dispatcher directly
> >> >> > > (please see handle_qmp_command()).  However it's not true again after
> >> >> > > this series (parser will has its own IO thread, and dispatcher will
> >> >> > > still be run in main thread).  So this OOB does brings something
> >> >> > > different.
> >> >> > >
> >> >> > > There are more details on why OOB and the difference/relationship
> >> >> > > between OOB, async QMP, block/general jobs, etc.. but IMHO that's
> >> >> > > slightly out of topic (and believe me, it's not easy for me to
> >> >> > > summarize that).  For more information, please refers to [1].
> >> >> > >
> >> >> > > Summary ends here.
> >> >> > >
> >> >> > > Some Implementation Details
> >> >> > > ===========================
> >> >> > >
> >> >> > > Again, I mentioned that the old QMP workflow is this:
> >> >> > >
> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
> >> >> > >           /|\    (2)                (3)     |
> >> >> > >        (1) |                               \|/ (4)
> >> >> > >            +---------  main thread  --------+
> >> >> > >
> >> >> > > What this series does is, firstly:
> >> >> > >
> >> >> > >       JSON Parser     QMP Dispatcher --> Respond
> >> >> > >           /|\ |           /|\       (4)     |
> >> >> > >            |  | (2)        | (3)            |  (5)
> >> >> > >        (1) |  +----->      |               \|/
> >> >> > >            +---------  main thread  <-------+
> >> >> > >
> >> >> > > And further:
> >> >> > >
> >> >> > >                queue/kick
> >> >> > >      JSON Parser ======> QMP Dispatcher --> Respond
> >> >> > >          /|\ |     (3)       /|\        (4)    |
> >> >> > >       (1) |  | (2)            |                |  (5)
> >> >> > >           | \|/               |               \|/
> >> >> > >         IO thread         main thread  <-------+
> >> >> >
> >> >> > Is the queue per monitor or per client?
> >> >
> >> > The queue is currently global. I think yes maybe at least we can do it
> >> > per monitor, but I am not sure whether that is urgent or can be
> >> > postponed.  After all now QMPRequest (please refer to patch 11) is
> >> > defined as (mon, id, req) tuple, so at least "id" namespace is
> >> > per-monitor.
> >> >
> >> >> > And is the dispatching going
> >> >> > to be processed even if the client is disconnected, and are new
> >> >> > clients going to receive the replies from previous clients
> >> >> > commands?
> >> >
> >> > [1]
> >> >
> >> > (will discuss together below)
> >> >
> >> >> > I
> >> >> > believe there should be a per-client context, so there won't be "id"
> >> >> > request conflicts.
> >> >
> >> > I'd say I am not familiar with this "client" idea, since after all
> >> > IMHO one monitor is currently designed to mostly work with a single
> >> > client. Say, unix sockets, telnet, all these backends are only single
> >> > channeled, and one monitor instance can only work with one client at a
> >> > time.  Then do we really need to add this client layer upon it?  IMHO
> >> > the user can just provide more monitors if they wants more clients
> >> > (and at least these clients should know the existance of the others or
> >> > there might be problem, otherwise user2 will fail a migration, finally
> >> > noticed that user1 has already triggered one), and the user should
> >> > manage them well.
> >>
> >> qemu should support a management layer / libvirt restart/reconnect.
> >> Afaik, it mostly work today. There might be a cases where libvirt can
> >> be confused if it receives a reply from a previous connection command,
> >> but due to the sync processing of the chardev, I am not sure you can
> >> get in this situation.  By adding "oob" commands and queuing, the
> >> client will have to remember which was the last "id" used, or it will
> >> create more conflict after a reconnect.
> >>
> >> Imho we should introduce the client/connection concept to avoid this
> >> confusion (unexpected reply & per client id space).
> >
> > Hmm I agree that the reconnect feature would be nice, but if so IMHO
> > instead of throwing responses away when client disconnect, we should
> > really keep them, and when the client reconnects, we queue the
> > responses again.
> >
> > I think we have other quite simple ways to solve the "unexpected
> > reply" and "per-client-id duplication" issues you have mentioned.
> >
> > Firstly, when client gets unexpected replies ("id" field not in its
> > own request queue), the client should just ignore that reply, which
> > seems natural to me.
> 
> The trouble is that it may legitimately use the same "id" value for
> new requests. And I don't see a simple way to handle that without
> races.

Under what circumstances can it reuse the same ID for new requests?
Can't we simply tell it not to?

Dave
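
One answer is sketched in the tuple proposal quoted below: make the
ids structurally unique across reconnects.  A hypothetical client-side
generator ("libvirt" is only an illustrative client name):

    import itertools
    import uuid

    class QmpIdGenerator:
        # Sketch of the (client name, client unique ID, request ID)
        # tuple: the session part is regenerated on every (re)connect,
        # so ids from an old connection cannot collide with new ones.
        def __init__(self, client_name="libvirt"):
            self.client_name = client_name
            self.new_session()

        def new_session(self):
            self.session = uuid.uuid4().hex
            self.counter = itertools.count()

        def next_id(self):
            return "%s-%s-%d" % (self.client_name, self.session,
                                 next(self.counter))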

> >
> > Then, if client disconnected and reconnected, it should not have the
> > problem to generate duplicated id for request, since it should know
> > what requests it has sent already.  A simplest case I can think of is,
> > the ID should contains the following tuple:
> 
> If you assume the "same" client will recover its state, yes.
> 
> >
> >   (client name, client unique ID, request ID)
> >
> > Here "client name" can be something like "libvirt", which is the name
> > of client application;
> >
> > "client unique ID" can be anything generated when client starts, it
> > identifies a single client session, maybe a UUID.
> >
> > "request ID" can be a unsigned integer starts from zero, and increases
> > each time the client sends one request.
> 
> This is introducing  session handling, and can be done in server side
> only without changes in the protocol I believe.
> 
> >
> > I believe current libvirt is using "client name" + "request ID".  It's
> > something similar (after all I think we don't normally have >1 libvirt
> > to manage single QEMU, so I think it should be good enough).
> 
> I am not sure we should base our protocol usage assumptions based on
> libvirt only, but rather on what is possible today (like queuing
> requests in the socket etc..).
> 
> > Then even if client disconnect and reconnect, request ID won't lose,
> > and no duplication would happen IMHO.
> >
> >>
> >> >
> >> >> >
> >> >> > >
> >> >> > > Then it introduced the "allow-oob" parameter in QAPI schema to define
> >> >> > > commands, and "run-oob" flag to let oob-allowed command to run in the
> >> >> > > parser.
> >> >> >
> >> >> > From a protocol point of view, I find that "run-oob" distinction per
> >> >> > command a bit pointless. It helps with legacy client that wouldn't
> >> >> > expect out-of-order replies if qemu were to run oob commands oob by
> >> >> > default though.
> >> >
> >> > After all oob somehow breaks existing rules or sync execution.  I
> >> > thought the more important goal was at least to keep the legacy
> >> > behaviors when adding new things, no?
> >>
> >> Of course we have to keep compatibily. What do you mean by "oob
> >> somehow breaks existing rules or sync execution"? oob means queuing
> >> and unordered reply support, so clearly this is breaking the current
> >> "mostly ordered" behaviour (mostly because events may still come any
> >> time..., and the reconnect issue discussed above).
> >
> > Yes.  That's what I mean, it breaks the synchronous scemantic.  But
> > I should definitely not call it a "break" though since old clients
> > will work perfectly fine with it.  Sorry for the bad wording.
> >
> >>
> >> >> > Clients shouldn't care about how/where a command is
> >> >> > being queued or not. If they send a command, they want it processed as
> >> >> > quickly as possible. However, it can be interesting to know if the
> >> >> > implementation of the command will be able to deliver oob, so that
> >> >> > data in the introspection could be useful.
> >> >> >
> >> >> > I would rather propose a client/server capability in qmp_capabilities,
> >> >> > call it "oob":
> >> >> >
> >> >> > This capability indicates oob commands support.
> >> >>
> >> >> The problem is indicating which commands support oob as opposed to
> >> >> indicating whether oob is present at all.  Future versions will
> >> >> probably make more commands oob-able and a client will want to know
> >> >> whether it can rely on a particular command being non-blocking.
> >> >
> >> > Yes.
> >> >
> >> > And IMHO we don't urgently need that "whether the server globally
> >> > supports oob" thing.  Client can just know that from query-qmp-schema
> >> > already - there will always be the "allow-oob" new field for command
> >> > typed entries.  IMHO that's a solid hint.
> >> >
> >> > But I don't object to return it as well in qmp_capabilities.
> >>
> >> Does it feel right that the client can specify how the command are
> >> processed / queued ? Isn't it preferable to leave that to the server
> >> to decide? Why would a client specify that? And should the server be
> >> expected to behave differently? What the client needs to be able is to
> >> match the unordered replies, and that can be stated during cap
> >> negotiation / qmp_capabilties. The server is expected to do a best
> >> effort to handle commands and their priorities. If the client needs
> >> several command queue, it is simpler to open several connection rather
> >> than trying to fit that weird priority logic in the protocol imho.
> >
> > Sorry I may have missed the point here.  We were discussing about a
> > global hint for "oob" support, am I right?  Then, could I ask what's
> > the "weird priority logic" you mentioned?
> 
> I call per-message oob hint a kind of priority logic, since you can
> make the same request without oob in the same session and in parallel.
> 
> >>
> >> >
> >> >>
> >> >> > An oob command is a regular client message request with the "id"
> >> >> > member mandatory, but the reply may be delivered
> >> >> > out of order by the server if the client supports
> >> >> > it too.
> >> >> >
> >> >> > If both the server and the client have the "oob" capability, the
> >> >> > server can handle new client requests while previous requests are being
> >> >> > processed.
> >> >> >
> >> >> > If the client doesn't have the "oob" capability, it may still call
> >> >> > an oob command, and make multiple outstanding calls. In this case,
> >> >> > the commands are processed in order, so the replies will also be in
> >> >> > order. The "id" member isn't mandatory in this case.
> >> >> >
> >> >> > The client should match the replies with the "id" member associated
> >> >> > with the requests.
> >> >> >
> >> >> > When a client is disconnected, the pending commands are not
> >> >> > necessarily cancelled. But the future clients will not get replies from
> >> >> > commands they didn't make (they might, however, receive side-effects
> >> >> > events).
> >> >>
> >> >> What's the behaviour on the current monitor?
> >> >
> >> > Yeah I want to ask the same question, along with questioning about
> >> > above [1].
> >> >
> >> > IMHO this series will not change the behaviors of these, so IMHO the
> >> > behaviors will be the same before/after this series. E.g., when client
> >> > dropped right after the command is executed, I think we will still
> >> > execute the command, though we should encounter something odd in
> >> > monitor_json_emitter() somewhere when we want to respond.  And it will
> >> > happen the same after this series.
> >>
> >> I think it can get worse after your series, because you queue the
> >> commands, so clearly a new client can get replies from an old client
> >> commands. As said above, I am not convinced you can get in that
> >> situation with current code.
> >
> > Hmm, seems so.  But would this a big problem?
> >
> > I really think the new client should just throw that response away if
> > it does not really know that response (from peeking at "id" field),
> > just like my opinion above.
> 
> This is a high expectation.
> 
> 
> -- 
> Marc-André Lureau
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Marc-André Lureau 6 years, 7 months ago
Hi

On Mon, Sep 18, 2017 at 12:55 PM, Dr. David Alan Gilbert
<dgilbert@redhat.com> wrote:
> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
>> Hi
>>
>> On Mon, Sep 18, 2017 at 10:37 AM, Peter Xu <peterx@redhat.com> wrote:
>> > On Fri, Sep 15, 2017 at 01:14:47PM +0200, Marc-André Lureau wrote:
>> >> Hi
>> >>
>> >> On Thu, Sep 14, 2017 at 9:46 PM, Peter Xu <peterx@redhat.com> wrote:
>> >> > On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert wrote:
>> >> >> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
>> >> >> > Hi
>> >> >> >
>> >> >> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
>> >> >> > > This series was born from this one:
>> >> >> > >
>> >> >> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
>> >> >> > >
>> >> >> > > The design comes from Markus, and also the whole-bunch-of discussions
>> >> >> > > in previous thread.  My heartful thanks to Markus, Daniel, Dave,
>> >> >> > > Stefan, etc. on discussing the topic (...again!), providing shiny
>> >> >> > > ideas and suggestions.  Finally we got such a solution that seems to
>> >> >> > > satisfy everyone.
>> >> >> > >
>> >> >> > > I re-started the versioning since this series is totally different
>> >> >> > > from previous one.  Now it's version 1.
>> >> >> > >
>> >> >> > > In case new reviewers come along the way without reading previous
>> >> >> > > discussions, I will try to do a summary on what this is all about.
>> >> >> > >
>> >> >> > > What is OOB execution?
>> >> >> > > ======================
>> >> >> > >
>> >> >> > > It's the shortcut of Out-Of-Band execution, its name is given by
>> >> >> > > Markus.  It's a way to quickly execute a QMP request.  Say, originally
>> >> >> > > QMP is going throw these steps:
>> >> >> > >
>> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
>> >> >> > >           /|\    (2)                (3)     |
>> >> >> > >        (1) |                               \|/ (4)
>> >> >> > >            +---------  main thread  --------+
>> >> >> > >
>> >> >> > > The requests are executed by the so-called QMP-dispatcher after the
>> >> >> > > JSON is parsed.  If OOB is on, we run the command directly in the
>> >> >> > > parser and quickly returns.
>> >> >> >
>> >> >> > All commands should have the "id" field mandatory in this case, else
>> >> >> > the client will not distinguish the replies coming from the last/oob
>> >> >> > and the previous commands.
>> >> >> >
>> >> >> > This should probably be enforced upfront by client capability checks,
>> >> >> > more below.
>> >> >
>> >> > Hmm yes since the oob commands are actually running in async way,
>> >> > request ID should be needed here.  However I'm not sure whether
>> >> > enabling the whole "request ID" thing is too big for this "try to be
>> >> > small" oob change... And IMHO it suites better to be part of the whole
>> >> > async work (no matter which implementation we'll use).
>> >> >
>> >> > How about this: we make "id" mandatory for "run-oob" requests only.
>> >> > For oob commands, they will always have ID then no ordering issue, and
>> >> > we can do it async; for the rest of non-oob commands, we still allow
>> >> > them to go without ID, and since they are not oob, they'll always be
>> >> > done in order as well.  Would this work?
>> >>
>> >> This mixed-mode is imho more complicated to deal with than having the
>> >> protocol enforced one way or the other, but that should work.
>> >>
>> >> >
>> >> >> >
>> >> >> > > Yeah I know in current code the parser calls dispatcher directly
>> >> >> > > (please see handle_qmp_command()).  However it's not true again after
>> >> >> > > this series (parser will has its own IO thread, and dispatcher will
>> >> >> > > still be run in main thread).  So this OOB does brings something
>> >> >> > > different.
>> >> >> > >
>> >> >> > > There are more details on why OOB and the difference/relationship
>> >> >> > > between OOB, async QMP, block/general jobs, etc.. but IMHO that's
>> >> >> > > slightly out of topic (and believe me, it's not easy for me to
>> >> >> > > summarize that).  For more information, please refers to [1].
>> >> >> > >
>> >> >> > > Summary ends here.
>> >> >> > >
>> >> >> > > Some Implementation Details
>> >> >> > > ===========================
>> >> >> > >
>> >> >> > > Again, I mentioned that the old QMP workflow is this:
>> >> >> > >
>> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
>> >> >> > >           /|\    (2)                (3)     |
>> >> >> > >        (1) |                               \|/ (4)
>> >> >> > >            +---------  main thread  --------+
>> >> >> > >
>> >> >> > > What this series does is, firstly:
>> >> >> > >
>> >> >> > >       JSON Parser     QMP Dispatcher --> Respond
>> >> >> > >           /|\ |           /|\       (4)     |
>> >> >> > >            |  | (2)        | (3)            |  (5)
>> >> >> > >        (1) |  +----->      |               \|/
>> >> >> > >            +---------  main thread  <-------+
>> >> >> > >
>> >> >> > > And further:
>> >> >> > >
>> >> >> > >                queue/kick
>> >> >> > >      JSON Parser ======> QMP Dispatcher --> Respond
>> >> >> > >          /|\ |     (3)       /|\        (4)    |
>> >> >> > >       (1) |  | (2)            |                |  (5)
>> >> >> > >           | \|/               |               \|/
>> >> >> > >         IO thread         main thread  <-------+
>> >> >> >
>> >> >> > Is the queue per monitor or per client?
>> >> >
>> >> > The queue is currently global. I think yes maybe at least we can do it
>> >> > per monitor, but I am not sure whether that is urgent or can be
>> >> > postponed.  After all now QMPRequest (please refer to patch 11) is
>> >> > defined as (mon, id, req) tuple, so at least "id" namespace is
>> >> > per-monitor.
>> >> >
>> >> >> > And is the dispatching going
>> >> >> > to be processed even if the client is disconnected, and are new
>> >> >> > clients going to receive the replies from previous clients
>> >> >> > commands?
>> >> >
>> >> > [1]
>> >> >
>> >> > (will discuss together below)
>> >> >
>> >> >> > I
>> >> >> > believe there should be a per-client context, so there won't be "id"
>> >> >> > request conflicts.
>> >> >
>> >> > I'd say I am not familiar with this "client" idea, since after all
>> >> > IMHO one monitor is currently designed to mostly work with a single
>> >> > client. Say, unix sockets, telnet, all these backends are only single
>> >> > channeled, and one monitor instance can only work with one client at a
>> >> > time.  Then do we really need to add this client layer upon it?  IMHO
>> >> > the user can just provide more monitors if they wants more clients
>> >> > (and at least these clients should know the existance of the others or
>> >> > there might be problem, otherwise user2 will fail a migration, finally
>> >> > noticed that user1 has already triggered one), and the user should
>> >> > manage them well.
>> >>
>> >> qemu should support a management layer / libvirt restart/reconnect.
>> >> Afaik, it mostly work today. There might be a cases where libvirt can
>> >> be confused if it receives a reply from a previous connection command,
>> >> but due to the sync processing of the chardev, I am not sure you can
>> >> get in this situation.  By adding "oob" commands and queuing, the
>> >> client will have to remember which was the last "id" used, or it will
>> >> create more conflict after a reconnect.
>> >>
>> >> Imho we should introduce the client/connection concept to avoid this
>> >> confusion (unexpected reply & per client id space).
>> >
>> > Hmm I agree that the reconnect feature would be nice, but if so IMHO
>> > instead of throwing responses away when client disconnect, we should
>> > really keep them, and when the client reconnects, we queue the
>> > responses again.
>> >
>> > I think we have other quite simple ways to solve the "unexpected
>> > reply" and "per-client-id duplication" issues you have mentioned.
>> >
>> > Firstly, when client gets unexpected replies ("id" field not in its
>> > own request queue), the client should just ignore that reply, which
>> > seems natural to me.
>>
>> The trouble is that it may legitimately use the same "id" value for
>> new requests. And I don't see a simple way to handle that without
>> races.
>
> Under what circumstances can it reuse the same ID for new requests?
> Can't we simply tell it not to?

I don't see any restriction in the protocol today against connecting
with a new client that may not know anything about a previous client.

How would you tell it not to use old IDs? Just by writing an unwritten
rule, because we don't want to fix per-connection client session
handling in qemu?
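
Server-side, that per-connection session handling could be as small as
tying the reply queue to the connection's lifetime.  A sketch of the
concept only, not what this series implements:

    class MonitorConnection:
        # Each connection owns its reply queue and id namespace, and
        # the queue dies with the connection, so a new client never
        # receives replies to commands it did not send.  Illustrative.
        def __init__(self):
            self.pending_replies = []
            self.in_flight_ids = set()

        def complete(self, req_id, reply):
            if req_id in self.in_flight_ids:   # connection still alive
                self.pending_replies.append(reply)

        def on_disconnect(self):
            # Commands may keep running, but their replies are dropped
            # instead of leaking into the next client's stream.
            self.pending_replies.clear()
            self.in_flight_ids.clear()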

>
> Dave
>
>> >
>> > Then, if client disconnected and reconnected, it should not have the
>> > problem to generate duplicated id for request, since it should know
>> > what requests it has sent already.  A simplest case I can think of is,
>> > the ID should contains the following tuple:
>>
>> If you assume the "same" client will recover its state, yes.
>>
>> >
>> >   (client name, client unique ID, request ID)
>> >
>> > Here "client name" can be something like "libvirt", which is the name
>> > of client application;
>> >
>> > "client unique ID" can be anything generated when client starts, it
>> > identifies a single client session, maybe a UUID.
>> >
>> > "request ID" can be a unsigned integer starts from zero, and increases
>> > each time the client sends one request.
>>
>> This is introducing  session handling, and can be done in server side
>> only without changes in the protocol I believe.
>>
>> >
>> > I believe current libvirt is using "client name" + "request ID".  It's
>> > something similar (after all I think we don't normally have >1 libvirt
>> > to manage single QEMU, so I think it should be good enough).
>>
>> I am not sure we should base our protocol usage assumptions based on
>> libvirt only, but rather on what is possible today (like queuing
>> requests in the socket etc..).
>>
>> > Then even if client disconnect and reconnect, request ID won't lose,
>> > and no duplication would happen IMHO.
>> >
>> >>
>> >> >
>> >> >> >
>> >> >> > >
>> >> >> > > Then it introduced the "allow-oob" parameter in QAPI schema to define
>> >> >> > > commands, and "run-oob" flag to let oob-allowed command to run in the
>> >> >> > > parser.
>> >> >> >
>> >> >> > From a protocol point of view, I find that "run-oob" distinction per
>> >> >> > command a bit pointless. It helps with legacy client that wouldn't
>> >> >> > expect out-of-order replies if qemu were to run oob commands oob by
>> >> >> > default though.
>> >> >
>> >> > After all oob somehow breaks existing rules or sync execution.  I
>> >> > thought the more important goal was at least to keep the legacy
>> >> > behaviors when adding new things, no?
>> >>
>> >> Of course we have to keep compatibily. What do you mean by "oob
>> >> somehow breaks existing rules or sync execution"? oob means queuing
>> >> and unordered reply support, so clearly this is breaking the current
>> >> "mostly ordered" behaviour (mostly because events may still come any
>> >> time..., and the reconnect issue discussed above).
>> >
>> > Yes.  That's what I mean, it breaks the synchronous scemantic.  But
>> > I should definitely not call it a "break" though since old clients
>> > will work perfectly fine with it.  Sorry for the bad wording.
>> >
>> >>
>> >> >> > Clients shouldn't care about how/where a command is
>> >> >> > being queued or not. If they send a command, they want it processed as
>> >> >> > quickly as possible. However, it can be interesting to know if the
>> >> >> > implementation of the command will be able to deliver oob, so that
>> >> >> > data in the introspection could be useful.
>> >> >> >
>> >> >> > I would rather propose a client/server capability in qmp_capabilities,
>> >> >> > call it "oob":
>> >> >> >
>> >> >> > This capability indicates oob commands support.
>> >> >>
>> >> >> The problem is indicating which commands support oob as opposed to
>> >> >> indicating whether oob is present at all.  Future versions will
>> >> >> probably make more commands oob-able and a client will want to know
>> >> >> whether it can rely on a particular command being non-blocking.
>> >> >
>> >> > Yes.
>> >> >
>> >> > And IMHO we don't urgently need that "whether the server globally
>> >> > supports oob" thing.  Client can just know that from query-qmp-schema
>> >> > already - there will always be the "allow-oob" new field for command
>> >> > typed entries.  IMHO that's a solid hint.
>> >> >
>> >> > But I don't object to return it as well in qmp_capabilities.
>> >>
>> >> Does it feel right that the client can specify how the command are
>> >> processed / queued ? Isn't it preferable to leave that to the server
>> >> to decide? Why would a client specify that? And should the server be
>> >> expected to behave differently? What the client needs to be able is to
>> >> match the unordered replies, and that can be stated during cap
>> >> negotiation / qmp_capabilties. The server is expected to do a best
>> >> effort to handle commands and their priorities. If the client needs
>> >> several command queue, it is simpler to open several connection rather
>> >> than trying to fit that weird priority logic in the protocol imho.
>> >
>> > Sorry I may have missed the point here.  We were discussing about a
>> > global hint for "oob" support, am I right?  Then, could I ask what's
>> > the "weird priority logic" you mentioned?
>>
>> I call per-message oob hint a kind of priority logic, since you can
>> make the same request without oob in the same session and in parallel.
>>
>> >>
>> >> >
>> >> >>
>> >> >> > An oob command is a regular client message request with the "id"
>> >> >> > member mandatory, but the reply may be delivered
>> >> >> > out of order by the server if the client supports
>> >> >> > it too.
>> >> >> >
>> >> >> > If both the server and the client have the "oob" capability, the
>> >> >> > server can handle new client requests while previous requests are being
>> >> >> > processed.
>> >> >> >
>> >> >> > If the client doesn't have the "oob" capability, it may still call
>> >> >> > an oob command, and make multiple outstanding calls. In this case,
>> >> >> > the commands are processed in order, so the replies will also be in
>> >> >> > order. The "id" member isn't mandatory in this case.
>> >> >> >
>> >> >> > The client should match the replies with the "id" member associated
>> >> >> > with the requests.
>> >> >> >
>> >> >> > When a client is disconnected, the pending commands are not
>> >> >> > necessarily cancelled. But the future clients will not get replies from
>> >> >> > commands they didn't make (they might, however, receive side-effects
>> >> >> > events).
>> >> >>
>> >> >> What's the behaviour on the current monitor?
>> >> >
>> >> > Yeah I want to ask the same question, along with questioning about
>> >> > above [1].
>> >> >
>> >> > IMHO this series will not change the behaviors of these, so IMHO the
>> >> > behaviors will be the same before/after this series. E.g., when client
>> >> > dropped right after the command is executed, I think we will still
>> >> > execute the command, though we should encounter something odd in
>> >> > monitor_json_emitter() somewhere when we want to respond.  And it will
>> >> > happen the same after this series.
>> >>
>> >> I think it can get worse after your series, because you queue the
>> >> commands, so clearly a new client can get replies from an old client
>> >> commands. As said above, I am not convinced you can get in that
>> >> situation with current code.
>> >
>> > Hmm, seems so.  But would this a big problem?
>> >
>> > I really think the new client should just throw that response away if
>> > it does not really know that response (from peeking at "id" field),
>> > just like my opinion above.
>>
>> This is a high expectation.
>>
>>
>> --
>> Marc-André Lureau
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



-- 
Marc-André Lureau

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> Hi
> 
> On Mon, Sep 18, 2017 at 12:55 PM, Dr. David Alan Gilbert
> <dgilbert@redhat.com> wrote:
> > * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> >> Hi
> >>
> >> On Mon, Sep 18, 2017 at 10:37 AM, Peter Xu <peterx@redhat.com> wrote:
> >> > On Fri, Sep 15, 2017 at 01:14:47PM +0200, Marc-André Lureau wrote:
> >> >> Hi
> >> >>
> >> >> On Thu, Sep 14, 2017 at 9:46 PM, Peter Xu <peterx@redhat.com> wrote:
> >> >> > On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert wrote:
> >> >> >> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> >> >> >> > Hi
> >> >> >> >
> >> >> >> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
> >> >> >> > > This series was born from this one:
> >> >> >> > >
> >> >> >> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> >> >> >> > >
> >> >> >> > > The design comes from Markus, and also the whole-bunch-of discussions
> >> >> >> > > in previous thread.  My heartful thanks to Markus, Daniel, Dave,
> >> >> >> > > Stefan, etc. on discussing the topic (...again!), providing shiny
> >> >> >> > > ideas and suggestions.  Finally we got such a solution that seems to
> >> >> >> > > satisfy everyone.
> >> >> >> > >
> >> >> >> > > I re-started the versioning since this series is totally different
> >> >> >> > > from previous one.  Now it's version 1.
> >> >> >> > >
> >> >> >> > > In case new reviewers come along the way without reading previous
> >> >> >> > > discussions, I will try to do a summary on what this is all about.
> >> >> >> > >
> >> >> >> > > What is OOB execution?
> >> >> >> > > ======================
> >> >> >> > >
> >> >> >> > > It's the shortcut of Out-Of-Band execution, its name is given by
> >> >> >> > > Markus.  It's a way to quickly execute a QMP request.  Say, originally
> >> >> >> > > QMP is going throw these steps:
> >> >> >> > >
> >> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
> >> >> >> > >           /|\    (2)                (3)     |
> >> >> >> > >        (1) |                               \|/ (4)
> >> >> >> > >            +---------  main thread  --------+
> >> >> >> > >
> >> >> >> > > The requests are executed by the so-called QMP-dispatcher after the
> >> >> >> > > JSON is parsed.  If OOB is on, we run the command directly in the
> >> >> >> > > parser and quickly returns.
> >> >> >> >
> >> >> >> > All commands should have the "id" field mandatory in this case, else
> >> >> >> > the client will not distinguish the replies coming from the last/oob
> >> >> >> > and the previous commands.
> >> >> >> >
> >> >> >> > This should probably be enforced upfront by client capability checks,
> >> >> >> > more below.
> >> >> >
> >> >> > Hmm yes since the oob commands are actually running in async way,
> >> >> > request ID should be needed here.  However I'm not sure whether
> >> >> > enabling the whole "request ID" thing is too big for this "try to be
> >> >> > small" oob change... And IMHO it suites better to be part of the whole
> >> >> > async work (no matter which implementation we'll use).
> >> >> >
> >> >> > How about this: we make "id" mandatory for "run-oob" requests only.
> >> >> > For oob commands, they will always have ID then no ordering issue, and
> >> >> > we can do it async; for the rest of non-oob commands, we still allow
> >> >> > them to go without ID, and since they are not oob, they'll always be
> >> >> > done in order as well.  Would this work?
> >> >>
> >> >> This mixed-mode is imho more complicated to deal with than having the
> >> >> protocol enforced one way or the other, but that should work.
> >> >>
> >> >> >
> >> >> >> >
> >> >> >> > > Yeah I know in current code the parser calls dispatcher directly
> >> >> >> > > (please see handle_qmp_command()).  However it's not true again after
> >> >> >> > > this series (parser will has its own IO thread, and dispatcher will
> >> >> >> > > still be run in main thread).  So this OOB does brings something
> >> >> >> > > different.
> >> >> >> > >
> >> >> >> > > There are more details on why OOB and the difference/relationship
> >> >> >> > > between OOB, async QMP, block/general jobs, etc.. but IMHO that's
> >> >> >> > > slightly out of topic (and believe me, it's not easy for me to
> >> >> >> > > summarize that).  For more information, please refers to [1].
> >> >> >> > >
> >> >> >> > > Summary ends here.
> >> >> >> > >
> >> >> >> > > Some Implementation Details
> >> >> >> > > ===========================
> >> >> >> > >
> >> >> >> > > Again, I mentioned that the old QMP workflow is this:
> >> >> >> > >
> >> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
> >> >> >> > >           /|\    (2)                (3)     |
> >> >> >> > >        (1) |                               \|/ (4)
> >> >> >> > >            +---------  main thread  --------+
> >> >> >> > >
> >> >> >> > > What this series does is, firstly:
> >> >> >> > >
> >> >> >> > >       JSON Parser     QMP Dispatcher --> Respond
> >> >> >> > >           /|\ |           /|\       (4)     |
> >> >> >> > >            |  | (2)        | (3)            |  (5)
> >> >> >> > >        (1) |  +----->      |               \|/
> >> >> >> > >            +---------  main thread  <-------+
> >> >> >> > >
> >> >> >> > > And further:
> >> >> >> > >
> >> >> >> > >                queue/kick
> >> >> >> > >      JSON Parser ======> QMP Dispatcher --> Respond
> >> >> >> > >          /|\ |     (3)       /|\        (4)    |
> >> >> >> > >       (1) |  | (2)            |                |  (5)
> >> >> >> > >           | \|/               |               \|/
> >> >> >> > >         IO thread         main thread  <-------+
> >> >> >> >
> >> >> >> > Is the queue per monitor or per client?
> >> >> >
> >> >> > The queue is currently global. I think yes maybe at least we can do it
> >> >> > per monitor, but I am not sure whether that is urgent or can be
> >> >> > postponed.  After all now QMPRequest (please refer to patch 11) is
> >> >> > defined as (mon, id, req) tuple, so at least "id" namespace is
> >> >> > per-monitor.
> >> >> >
> >> >> >> > And is the dispatching going
> >> >> >> > to be processed even if the client is disconnected, and are new
> >> >> >> > clients going to receive the replies from previous clients
> >> >> >> > commands?
> >> >> >
> >> >> > [1]
> >> >> >
> >> >> > (will discuss together below)
> >> >> >
> >> >> >> > I
> >> >> >> > believe there should be a per-client context, so there won't be "id"
> >> >> >> > request conflicts.
> >> >> >
> >> >> > I'd say I am not familiar with this "client" idea, since after all
> >> >> > IMHO one monitor is currently designed to mostly work with a single
> >> >> > client. Say, unix sockets, telnet, all these backends are only single
> >> >> > channeled, and one monitor instance can only work with one client at a
> >> >> > time.  Then do we really need to add this client layer upon it?  IMHO
> >> >> > the user can just provide more monitors if they wants more clients
> >> >> > (and at least these clients should know the existance of the others or
> >> >> > there might be problem, otherwise user2 will fail a migration, finally
> >> >> > noticed that user1 has already triggered one), and the user should
> >> >> > manage them well.
> >> >>
> >> >> qemu should support a management layer / libvirt restart/reconnect.
> >> >> Afaik, it mostly work today. There might be a cases where libvirt can
> >> >> be confused if it receives a reply from a previous connection command,
> >> >> but due to the sync processing of the chardev, I am not sure you can
> >> >> get in this situation.  By adding "oob" commands and queuing, the
> >> >> client will have to remember which was the last "id" used, or it will
> >> >> create more conflict after a reconnect.
> >> >>
> >> >> Imho we should introduce the client/connection concept to avoid this
> >> >> confusion (unexpected reply & per client id space).
> >> >
> >> > Hmm I agree that the reconnect feature would be nice, but if so IMHO
> >> > instead of throwing responses away when client disconnect, we should
> >> > really keep them, and when the client reconnects, we queue the
> >> > responses again.
> >> >
> >> > I think we have other quite simple ways to solve the "unexpected
> >> > reply" and "per-client-id duplication" issues you have mentioned.
> >> >
> >> > Firstly, when client gets unexpected replies ("id" field not in its
> >> > own request queue), the client should just ignore that reply, which
> >> > seems natural to me.
> >>
> >> The trouble is that it may legitimately use the same "id" value for
> >> new requests. And I don't see a simple way to handle that without
> >> races.
> >
> > Under what circumstances can it reuse the same ID for new requests?
> > Can't we simply tell it not to?
> 
> I don't see any restriction today in the protocol in connecting with a
> new client that may not know anything from a previous client.

Well, it knows it's doing a reconnection.

> How would you tell it not to use old IDs? Just by writing an unwritten
> rule, because we don't want to fix the per connection client session
> handling in qemu?

By writing a written rule!  This out-of-order stuff we're adding here
is a change to the interface, and we can define what we require of the
client.  As long as what we expect is reasonable, we might end up
with something that's simpler for both the client and qemu.
And I worry this series keeps getting more and more complex to cover
weird edge cases.

Dave
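
Such a written rule could also live in capability negotiation, along
the lines of the "oob" capability proposed earlier in this thread.  A
hypothetical exchange, with Python dict literals standing in for the
JSON wire format (field names are assumptions):

    # Server advertises the optional capability in its greeting; a
    # client that enables it accepts unordered replies and promises
    # unique ids; an old client that stays silent keeps today's
    # ordered, synchronous behaviour.
    greeting = {"QMP": {"version": {}, "capabilities": ["oob"]}}
    request  = {"execute": "qmp_capabilities",
                "arguments": {"enable": ["oob"]}}
    reply    = {"return": {}}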

> >
> > Dave
> >
> >> >
> >> > Then, if client disconnected and reconnected, it should not have the
> >> > problem to generate duplicated id for request, since it should know
> >> > what requests it has sent already.  A simplest case I can think of is,
> >> > the ID should contains the following tuple:
> >>
> >> If you assume the "same" client will recover its state, yes.
> >>
> >> >
> >> >   (client name, client unique ID, request ID)
> >> >
> >> > Here "client name" can be something like "libvirt", which is the name
> >> > of client application;
> >> >
> >> > "client unique ID" can be anything generated when client starts, it
> >> > identifies a single client session, maybe a UUID.
> >> >
> >> > "request ID" can be a unsigned integer starts from zero, and increases
> >> > each time the client sends one request.
> >>
> >> This is introducing  session handling, and can be done in server side
> >> only without changes in the protocol I believe.
> >>
> >> >
> >> > I believe current libvirt is using "client name" + "request ID".  It's
> >> > something similar (after all I think we don't normally have >1 libvirt
> >> > to manage single QEMU, so I think it should be good enough).
> >>
> >> I am not sure we should base our protocol usage assumptions based on
> >> libvirt only, but rather on what is possible today (like queuing
> >> requests in the socket etc..).
> >>
> >> > Then even if client disconnect and reconnect, request ID won't lose,
> >> > and no duplication would happen IMHO.
> >> >
> >> >>
> >> >> >
> >> >> >> >
> >> >> >> > >
> >> >> >> > > Then it introduced the "allow-oob" parameter in QAPI schema to define
> >> >> >> > > commands, and "run-oob" flag to let oob-allowed command to run in the
> >> >> >> > > parser.
> >> >> >> >
> >> >> >> > From a protocol point of view, I find that "run-oob" distinction per
> >> >> >> > command a bit pointless. It helps with legacy client that wouldn't
> >> >> >> > expect out-of-order replies if qemu were to run oob commands oob by
> >> >> >> > default though.
> >> >> >
> >> >> > After all oob somehow breaks existing rules or sync execution.  I
> >> >> > thought the more important goal was at least to keep the legacy
> >> >> > behaviors when adding new things, no?
> >> >>
> >> >> Of course we have to keep compatibily. What do you mean by "oob
> >> >> somehow breaks existing rules or sync execution"? oob means queuing
> >> >> and unordered reply support, so clearly this is breaking the current
> >> >> "mostly ordered" behaviour (mostly because events may still come any
> >> >> time..., and the reconnect issue discussed above).
> >> >
> >> > Yes.  That's what I mean, it breaks the synchronous scemantic.  But
> >> > I should definitely not call it a "break" though since old clients
> >> > will work perfectly fine with it.  Sorry for the bad wording.
> >> >
> >> >>
> >> >> >> > Clients shouldn't care about how/where a command is
> >> >> >> > being queued or not. If they send a command, they want it processed as
> >> >> >> > quickly as possible. However, it can be interesting to know if the
> >> >> >> > implementation of the command will be able to deliver oob, so that
> >> >> >> > data in the introspection could be useful.
> >> >> >> >
> >> >> >> > I would rather propose a client/server capability in qmp_capabilities,
> >> >> >> > call it "oob":
> >> >> >> >
> >> >> >> > This capability indicates oob commands support.
> >> >> >>
> >> >> >> The problem is indicating which commands support oob as opposed to
> >> >> >> indicating whether oob is present at all.  Future versions will
> >> >> >> probably make more commands oob-able and a client will want to know
> >> >> >> whether it can rely on a particular command being non-blocking.
> >> >> >
> >> >> > Yes.
> >> >> >
> >> >> > And IMHO we don't urgently need that "whether the server globally
> >> >> > supports oob" thing.  Client can just know that from query-qmp-schema
> >> >> > already - there will always be the "allow-oob" new field for command
> >> >> > typed entries.  IMHO that's a solid hint.
> >> >> >
> >> >> > But I don't object to return it as well in qmp_capabilities.
> >> >>
> >> >> Does it feel right that the client can specify how the command are
> >> >> processed / queued ? Isn't it preferable to leave that to the server
> >> >> to decide? Why would a client specify that? And should the server be
> >> >> expected to behave differently? What the client needs to be able is to
> >> >> match the unordered replies, and that can be stated during cap
> >> >> negotiation / qmp_capabilties. The server is expected to do a best
> >> >> effort to handle commands and their priorities. If the client needs
> >> >> several command queue, it is simpler to open several connection rather
> >> >> than trying to fit that weird priority logic in the protocol imho.
> >> >
> >> > Sorry I may have missed the point here.  We were discussing about a
> >> > global hint for "oob" support, am I right?  Then, could I ask what's
> >> > the "weird priority logic" you mentioned?
> >>
> >> I call per-message oob hint a kind of priority logic, since you can
> >> make the same request without oob in the same session and in parallel.
> >>
> >> >>
> >> >> >
> >> >> >>
> >> >> >> > An oob command is a regular client message request with the "id"
> >> >> >> > member mandatory, but the reply may be delivered
> >> >> >> > out of order by the server if the client supports
> >> >> >> > it too.
> >> >> >> >
> >> >> >> > If both the server and the client have the "oob" capability, the
> >> >> >> > server can handle new client requests while previous requests are being
> >> >> >> > processed.
> >> >> >> >
> >> >> >> > If the client doesn't have the "oob" capability, it may still call
> >> >> >> > an oob command, and make multiple outstanding calls. In this case,
> >> >> >> > the commands are processed in order, so the replies will also be in
> >> >> >> > order. The "id" member isn't mandatory in this case.
> >> >> >> >
> >> >> >> > The client should match the replies with the "id" member associated
> >> >> >> > with the requests.
> >> >> >> >
> >> >> >> > When a client is disconnected, the pending commands are not
> >> >> >> > necessarily cancelled. But the future clients will not get replies from
> >> >> >> > commands they didn't make (they might, however, receive side-effects
> >> >> >> > events).
> >> >> >>
> >> >> >> What's the behaviour on the current monitor?
> >> >> >
> >> >> > Yeah I want to ask the same question, along with questioning about
> >> >> > above [1].
> >> >> >
> >> >> > IMHO this series will not change the behaviors of these, so IMHO the
> >> >> > behaviors will be the same before/after this series. E.g., when client
> >> >> > dropped right after the command is executed, I think we will still
> >> >> > execute the command, though we should encounter something odd in
> >> >> > monitor_json_emitter() somewhere when we want to respond.  And it will
> >> >> > happen the same after this series.
> >> >>
> >> >> I think it can get worse after your series, because you queue the
> >> >> commands, so clearly a new client can get replies from an old client
> >> >> commands. As said above, I am not convinced you can get in that
> >> >> situation with current code.
> >> >
> >> > Hmm, seems so.  But would this a big problem?
> >> >
> >> > I really think the new client should just throw that response away if
> >> > it does not really know that response (from peeking at "id" field),
> >> > just like my opinion above.
> >>
> >> This is a high expectation.
> >>
> >>
> >> --
> >> Marc-André Lureau
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> 
> 
> -- 
> Marc-André Lureau
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Marc-André Lureau 6 years, 7 months ago
On Mon, Sep 18, 2017 at 1:26 PM, Dr. David Alan Gilbert
<dgilbert@redhat.com> wrote:
> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
>> Hi
>>
>> On Mon, Sep 18, 2017 at 12:55 PM, Dr. David Alan Gilbert
>> <dgilbert@redhat.com> wrote:
>> > * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
>> >> Hi
>> >>
>> >> On Mon, Sep 18, 2017 at 10:37 AM, Peter Xu <peterx@redhat.com> wrote:
>> >> > On Fri, Sep 15, 2017 at 01:14:47PM +0200, Marc-André Lureau wrote:
>> >> >> Hi
>> >> >>
>> >> >> On Thu, Sep 14, 2017 at 9:46 PM, Peter Xu <peterx@redhat.com> wrote:
>> >> >> > On Thu, Sep 14, 2017 at 07:53:15PM +0100, Dr. David Alan Gilbert wrote:
>> >> >> >> * Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
>> >> >> >> > Hi
>> >> >> >> >
>> >> >> >> > On Thu, Sep 14, 2017 at 9:50 AM, Peter Xu <peterx@redhat.com> wrote:
>> >> >> >> > > This series was born from this one:
>> >> >> >> > >
>> >> >> >> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
>> >> >> >> > >
>> >> >> >> > > The design comes from Markus, and also the whole-bunch-of discussions
>> >> >> >> > > in previous thread.  My heartful thanks to Markus, Daniel, Dave,
>> >> >> >> > > Stefan, etc. on discussing the topic (...again!), providing shiny
>> >> >> >> > > ideas and suggestions.  Finally we got such a solution that seems to
>> >> >> >> > > satisfy everyone.
>> >> >> >> > >
>> >> >> >> > > I re-started the versioning since this series is totally different
>> >> >> >> > > from previous one.  Now it's version 1.
>> >> >> >> > >
>> >> >> >> > > In case new reviewers come along the way without reading previous
>> >> >> >> > > discussions, I will try to do a summary on what this is all about.
>> >> >> >> > >
>> >> >> >> > > What is OOB execution?
>> >> >> >> > > ======================
>> >> >> >> > >
>> >> >> >> > > It's the shortcut of Out-Of-Band execution, its name is given by
>> >> >> >> > > Markus.  It's a way to quickly execute a QMP request.  Say, originally
>> >> >> >> > > QMP is going throw these steps:
>> >> >> >> > >
>> >> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
>> >> >> >> > >           /|\    (2)                (3)     |
>> >> >> >> > >        (1) |                               \|/ (4)
>> >> >> >> > >            +---------  main thread  --------+
>> >> >> >> > >
>> >> >> >> > > The requests are executed by the so-called QMP-dispatcher after the
>> >> >> >> > > JSON is parsed.  If OOB is on, we run the command directly in the
>> >> >> >> > > parser and quickly returns.
>> >> >> >> >
>> >> >> >> > All commands should have the "id" field mandatory in this case, else
>> >> >> >> > the client will not distinguish the replies coming from the last/oob
>> >> >> >> > and the previous commands.
>> >> >> >> >
>> >> >> >> > This should probably be enforced upfront by client capability checks,
>> >> >> >> > more below.
>> >> >> >
>> >> >> > Hmm yes since the oob commands are actually running in async way,
>> >> >> > request ID should be needed here.  However I'm not sure whether
>> >> >> > enabling the whole "request ID" thing is too big for this "try to be
>> >> >> > small" oob change... And IMHO it suites better to be part of the whole
>> >> >> > async work (no matter which implementation we'll use).
>> >> >> >
>> >> >> > How about this: we make "id" mandatory for "run-oob" requests only.
>> >> >> > For oob commands, they will always have ID then no ordering issue, and
>> >> >> > we can do it async; for the rest of non-oob commands, we still allow
>> >> >> > them to go without ID, and since they are not oob, they'll always be
>> >> >> > done in order as well.  Would this work?
>> >> >>
>> >> >> This mixed-mode is imho more complicated to deal with than having the
>> >> >> protocol enforced one way or the other, but that should work.
>> >> >>
>> >> >> >
>> >> >> >> >
>> >> >> >> > > Yeah I know in current code the parser calls dispatcher directly
>> >> >> >> > > (please see handle_qmp_command()).  However it's not true again after
>> >> >> >> > > this series (parser will has its own IO thread, and dispatcher will
>> >> >> >> > > still be run in main thread).  So this OOB does brings something
>> >> >> >> > > different.
>> >> >> >> > >
>> >> >> >> > > There are more details on why OOB and the difference/relationship
>> >> >> >> > > between OOB, async QMP, block/general jobs, etc.. but IMHO that's
>> >> >> >> > > slightly out of topic (and believe me, it's not easy for me to
>> >> >> >> > > summarize that).  For more information, please refers to [1].
>> >> >> >> > >
>> >> >> >> > > Summary ends here.
>> >> >> >> > >
>> >> >> >> > > Some Implementation Details
>> >> >> >> > > ===========================
>> >> >> >> > >
>> >> >> >> > > Again, I mentioned that the old QMP workflow is this:
>> >> >> >> > >
>> >> >> >> > >       JSON Parser --> QMP Dispatcher --> Respond
>> >> >> >> > >           /|\    (2)                (3)     |
>> >> >> >> > >        (1) |                               \|/ (4)
>> >> >> >> > >            +---------  main thread  --------+
>> >> >> >> > >
>> >> >> >> > > What this series does is, firstly:
>> >> >> >> > >
>> >> >> >> > >       JSON Parser     QMP Dispatcher --> Respond
>> >> >> >> > >           /|\ |           /|\       (4)     |
>> >> >> >> > >            |  | (2)        | (3)            |  (5)
>> >> >> >> > >        (1) |  +----->      |               \|/
>> >> >> >> > >            +---------  main thread  <-------+
>> >> >> >> > >
>> >> >> >> > > And further:
>> >> >> >> > >
>> >> >> >> > >                queue/kick
>> >> >> >> > >      JSON Parser ======> QMP Dispatcher --> Respond
>> >> >> >> > >          /|\ |     (3)       /|\        (4)    |
>> >> >> >> > >       (1) |  | (2)            |                |  (5)
>> >> >> >> > >           | \|/               |               \|/
>> >> >> >> > >         IO thread         main thread  <-------+
>> >> >> >> >
>> >> >> >> > Is the queue per monitor or per client?
>> >> >> >
>> >> >> > The queue is currently global.  I think yes, maybe at least we
>> >> >> > can make it per monitor, but I am not sure whether that is urgent
>> >> >> > or can be postponed.  After all, QMPRequest (please refer to
>> >> >> > patch 11) is now defined as a (mon, id, req) tuple, so at least
>> >> >> > the "id" namespace is per-monitor.
>> >> >> >
>> >> >> >> > And is the dispatching going
>> >> >> >> > to be processed even if the client is disconnected, and are new
>> >> >> >> > clients going to receive the replies from previous clients'
>> >> >> >> > commands?
>> >> >> >
>> >> >> > [1]
>> >> >> >
>> >> >> > (will discuss together below)
>> >> >> >
>> >> >> >> > I
>> >> >> >> > believe there should be a per-client context, so there won't be "id"
>> >> >> >> > request conflicts.
>> >> >> >
>> >> >> > I'd say I am not familiar with this "client" idea, since after
>> >> >> > all IMHO one monitor is currently designed to mostly work with a
>> >> >> > single client.  Say, unix sockets, telnet, all these backends are
>> >> >> > only single-channeled, and one monitor instance can only work
>> >> >> > with one client at a time.  Then do we really need to add this
>> >> >> > client layer on top of it?  IMHO the user can just provide more
>> >> >> > monitors if they want more clients (and at least these clients
>> >> >> > should know of the existence of the others, or there might be
>> >> >> > problems; otherwise user2 will fail a migration, only to notice
>> >> >> > that user1 has already triggered one), and the user should manage
>> >> >> > them well.
>> >> >>
>> >> >> qemu should support a management layer / libvirt restart and
>> >> >> reconnect.  Afaik, it mostly works today.  There might be cases
>> >> >> where libvirt can be confused if it receives a reply to a command
>> >> >> from a previous connection, but due to the sync processing of the
>> >> >> chardev, I am not sure you can get into this situation.  By adding
>> >> >> "oob" commands and queuing, the client will have to remember which
>> >> >> was the last "id" used, or it will create more conflicts after a
>> >> >> reconnect.
>> >> >>
>> >> >> Imho we should introduce the client/connection concept to avoid this
>> >> >> confusion (unexpected reply & per client id space).
>> >> >
>> >> > Hmm I agree that the reconnect feature would be nice, but if so IMHO
>> >> > instead of throwing responses away when a client disconnects, we should
>> >> > really keep them, and when the client reconnects, we queue the
>> >> > responses again.
>> >> >
>> >> > I think we have other quite simple ways to solve the "unexpected
>> >> > reply" and "per-client-id duplication" issues you have mentioned.
>> >> >
>> >> > Firstly, when a client gets unexpected replies ("id" field not in its
>> >> > own request queue), the client should just ignore that reply, which
>> >> > seems natural to me.
>> >>
>> >> The trouble is that it may legitimately use the same "id" value for
>> >> new requests. And I don't see a simple way to handle that without
>> >> races.
>> >
>> > Under what circumstances can it reuse the same ID for new requests?
>> > Can't we simply tell it not to?
>>
>> I don't see any restriction today in the protocol in connecting with a
>> new client that may not know anything from a previous client.
>
> Well, it knows it's doing a reconnection.

If you assume the "same client" reconnects to the monitor, I agree.
But this is a restriction of monitor usage.
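
For illustration only: one way qemu itself could tell stale replies
apart, without assuming anything about the client, is a per-monitor
connection generation counter.  A minimal sketch in C follows; all
names here are made up for the example and this is not code from the
series:

    enum { CONN_EVENT_OPENED, CONN_EVENT_CLOSED };

    typedef struct MonConn {
        unsigned conn_gen;              /* bumped on every (re)connect */
    } MonConn;

    static void mon_conn_event(MonConn *mon, int event)
    {
        if (event == CONN_EVENT_OPENED) {
            mon->conn_gen++;            /* a new client session starts */
        }
    }

    /* Each queued request would remember the generation it was read
     * under; a reply with a stale generation belongs to a previous
     * connection and can be dropped rather than sent to the new one. */
    static int reply_is_stale(const MonConn *mon, unsigned req_gen)
    {
        return req_gen != mon->conn_gen;
    }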

>
>> How would you tell it not to use old IDs? Just by writing an unwritten
>> rule, because we don't want to fix the per-connection client session
>> handling in qemu?
>
> By writing a written rule!  This out-of-order stuff we're adding here
> is a change to the interface and we can define what we require of the
> client.  As long as what we expect is reasonable then we might end
> up with something that's simpler for both the client and qemu.

As long as we don't break existing qmp clients.

> And I worry this series keeps getting more and more complex for weird
> edge cases.

That's an interesting point of view.  I see the point in fixing weird
edge cases in qemu RPC code, more than in other code: we already
develop with weird edge cases in mind & tests, like the
parsing/checking of the JSON schema for example, a similar area with
the same maintainer.

> Dave
>
>> >
>> > Dave
>> >
>> >> >
>> >> > Then, if a client disconnects and reconnects, it should not have a
>> >> > problem generating duplicated request IDs, since it should know
>> >> > what requests it has sent already.  The simplest case I can think
>> >> > of is that the ID contains the following tuple:
>> >>
>> >> If you assume the "same" client will recover its state, yes.
>> >>
>> >> >
>> >> >   (client name, client unique ID, request ID)
>> >> >
>> >> > Here "client name" can be something like "libvirt", which is the name
>> >> > of the client application;
>> >> >
>> >> > "client unique ID" can be anything generated when client starts, it
>> >> > identifies a single client session, maybe a UUID.
>> >> >
>> >> > "request ID" can be a unsigned integer starts from zero, and increases
>> >> > each time the client sends one request.
>> >>
>> >> This is introducing session handling, and can be done on the server
>> >> side only, without changes in the protocol, I believe.
>> >>
>> >> >
>> >> > I believe current libvirt is using "client name" + "request ID".  It's
>> >> > something similar (after all I think we don't normally have >1 libvirt
>> >> > to manage a single QEMU, so I think it should be good enough).
>> >>
>> >> I am not sure we should base our protocol usage assumptions on
>> >> libvirt only, but rather on what is possible today (like queuing
>> >> requests in the socket etc..).
>> >>
>> >> > Then even if the client disconnects and reconnects, request IDs
>> >> > won't be lost, and no duplication would happen IMHO.
>> >> >
>> >> >>
>> >> >> >
>> >> >> >> >
>> >> >> >> > >
>> >> >> >> > > Then it introduced the "allow-oob" parameter in the QAPI
>> >> >> >> > > schema to define commands, and the "run-oob" flag to let an
>> >> >> >> > > oob-allowed command run in the parser.
>> >> >> >> >
>> >> >> >> > From a protocol point of view, I find the "run-oob" distinction
>> >> >> >> > per command a bit pointless.  It helps with legacy clients that
>> >> >> >> > wouldn't expect out-of-order replies if qemu were to run oob
>> >> >> >> > commands oob by default, though.
>> >> >> >
>> >> >> > After all oob somehow breaks existing rules or sync execution.  I
>> >> >> > thought the more important goal was at least to keep the legacy
>> >> >> > behaviors when adding new things, no?
>> >> >>
>> >> >> Of course we have to keep compatibility.  What do you mean by "oob
>> >> >> somehow breaks existing rules or sync execution"? oob means queuing
>> >> >> and unordered reply support, so clearly this is breaking the current
>> >> >> "mostly ordered" behaviour (mostly because events may still come any
>> >> >> time..., and the reconnect issue discussed above).
>> >> >
>> >> > Yes.  That's what I mean, it breaks the synchronous semantics.  But
>> >> > I should definitely not call it a "break" though since old clients
>> >> > will work perfectly fine with it.  Sorry for the bad wording.
>> >> >
>> >> >>
>> >> >> >> > Clients shouldn't care about how/where a command is being
>> >> >> >> > queued, or whether it is queued at all.  If they send a
>> >> >> >> > command, they want it processed as quickly as possible.
>> >> >> >> > However, it can be interesting to know whether the
>> >> >> >> > implementation of a command is able to deliver oob, so that
>> >> >> >> > data in the introspection could be useful.
>> >> >> >> >
>> >> >> >> > I would rather propose a client/server capability in qmp_capabilities,
>> >> >> >> > call it "oob":
>> >> >> >> >
>> >> >> >> > This capability indicates support for oob commands.
>> >> >> >>
>> >> >> >> The problem is indicating which commands support oob as opposed to
>> >> >> >> indicating whether oob is present at all.  Future versions will
>> >> >> >> probably make more commands oob-able and a client will want to know
>> >> >> >> whether it can rely on a particular command being non-blocking.
>> >> >> >
>> >> >> > Yes.
>> >> >> >
>> >> >> > And IMHO we don't urgently need that "whether the server globally
>> >> >> > supports oob" thing.  A client can just know that from
>> >> >> > query-qmp-schema already - there will always be the new
>> >> >> > "allow-oob" field for command-typed entries.  IMHO that's a solid
>> >> >> > hint.
>> >> >> >
>> >> >> > But I don't object to return it as well in qmp_capabilities.
>> >> >>
>> >> >> Does it feel right that the client can specify how the commands
>> >> >> are processed/queued?  Isn't it preferable to leave that to the
>> >> >> server to decide?  Why would a client specify that?  And should the
>> >> >> server be expected to behave differently?  What the client needs to
>> >> >> be able to do is match the unordered replies, and that can be
>> >> >> stated during cap negotiation / qmp_capabilities.  The server is
>> >> >> expected to make a best effort to handle commands and their
>> >> >> priorities.  If the client needs several command queues, it is
>> >> >> simpler to open several connections rather than trying to fit that
>> >> >> weird priority logic into the protocol imho.
>> >> >
>> >> > Sorry I may have missed the point here.  We were discussing a
>> >> > global hint for "oob" support, am I right?  Then, could I ask what's
>> >> > the "weird priority logic" you mentioned?
>> >>
>> >> I call the per-message oob hint a kind of priority logic, since you can
>> >> make the same request without oob in the same session and in parallel.
>> >>
>> >> >>
>> >> >> >
>> >> >> >>
>> >> >> >> > An oob command is a regular client message request with the "id"
>> >> >> >> > member mandatory, but the reply may be delivered
>> >> >> >> > out of order by the server if the client supports
>> >> >> >> > it too.
>> >> >> >> >
>> >> >> >> > If both the server and the client have the "oob" capability, the
>> >> >> >> > server can handle new client requests while previous requests are being
>> >> >> >> > processed.
>> >> >> >> >
>> >> >> >> > If the client doesn't have the "oob" capability, it may still call
>> >> >> >> > an oob command, and make multiple outstanding calls. In this case,
>> >> >> >> > the commands are processed in order, so the replies will also be in
>> >> >> >> > order. The "id" member isn't mandatory in this case.
>> >> >> >> >
>> >> >> >> > The client should match the replies with the "id" member associated
>> >> >> >> > with the requests.
>> >> >> >> >
>> >> >> > When a client is disconnected, the pending commands are not
>> >> >> > necessarily cancelled.  But future clients will not get replies
>> >> >> > to commands they didn't make (they might, however, receive
>> >> >> > side-effect events).
>> >> >> >>
>> >> >> >> What's the behaviour on the current monitor?
>> >> >> >
>> >> >> > Yeah I want to ask the same question, along with the question
>> >> >> > about [1] above.
>> >> >> >
>> >> >> > IMHO this series will not change these behaviors, so they will be
>> >> >> > the same before/after this series.  E.g., when the client drops
>> >> >> > right as the command is executed, I think we will still execute
>> >> >> > it, though we should encounter something odd in
>> >> >> > monitor_json_emitter() somewhere when we want to respond.  And
>> >> >> > the same will happen after this series.
>> >> >>
>> >> >> I think it can get worse after your series, because you queue the
>> >> >> commands, so clearly a new client can get replies to an old
>> >> >> client's commands.  As said above, I am not convinced you can get
>> >> >> into that situation with the current code.
>> >> >
>> >> > Hmm, seems so.  But would this be a big problem?
>> >> >
>> >> > I really think the new client should just throw that response away if
>> >> > it does not really know that response (from peeking at the "id" field),
>> >> > just like my opinion above.
>> >>
>> >> This is a high expectation.
>> >>
>> >>
>> >> --
>> >> Marc-André Lureau
>> > --
>> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>
>>
>>
>> --
>> Marc-André Lureau
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



-- 
Marc-André Lureau

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 6 months ago
On Mon, Sep 18, 2017 at 06:09:29PM +0200, Marc-André Lureau wrote:
> [...]
> >> > Under what circumstances can it reuse the same ID for new requests?
> >> > Can't we simply tell it not to?
> >>
> >> I don't see any restriction today in the protocol in connecting with a
> >> new client that may not know anything from a previous client.
> >
> > Well, it knows it's doing a reconnection.
> 
> If you assume the "same client" reconnects to the monitor, I agree.
> But this is a restriction of monitor usage.

In monitor_qmp_event(), we can empty the request queue when we get
CHR_EVENT_CLOSED.  Would that be a solution?
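
For the record, a self-contained sketch of what such a flush could
look like.  The types below are stubs for the example (the series'
real QMPRequest is a (mon, id, req) tuple), so treat this as an
illustration, not the actual patch:

    #include <stdlib.h>

    typedef struct ReqStub {
        struct ReqStub *next;           /* singly-linked request queue */
    } ReqStub;

    typedef struct QueueStub {
        ReqStub *head;
    } QueueStub;

    /* On CHR_EVENT_CLOSED: drop every request that was queued but not
     * yet dispatched, so a later connection cannot get its reply. */
    static void monitor_flush_queue(QueueStub *q)
    {
        while (q->head) {
            ReqStub *req = q->head;
            q->head = req->next;
            free(req);
        }
    }

In the real code the queue would also be protected by a lock, and each
entry's id/req objects would need to be released as well.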

-- 
Peter Xu

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 6 months ago
* Peter Xu (peterx@redhat.com) wrote:
> On Mon, Sep 18, 2017 at 06:09:29PM +0200, Marc-André Lureau wrote:
> > [...]
> > > Well, it knows it's doing a reconnection.
> > 
> > If you assume the "same client" reconnects to the monitor, I agree.
> > But this is a restriction of monitor usage.
> 
> In monitor_qmp_event(), we can empty the request queue when we get
> CHR_EVENT_CLOSED.  Would that be a solution?

What happens to commands that are in flight?

Dave

> -- 
> Peter Xu
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 6 months ago
On Tue, Sep 19, 2017 at 10:19:21AM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > [...]
> > > > Well, it knows it's doing a reconnection.
> > > 
> > > If you assume the "same client" reconnects to the monitor, I agree.
> > > But this is a restriction of monitor usage.
> > 
> > In monitor_qmp_event(), we can empty the request queue when we get
> > CHR_EVENT_CLOSED.  Would that be a solution?
> 
> What happens to commands that are in flight?

Good question...

I think we can track that one as well: say, provide a simple state
machine for the Monitor (possibly with a lock) that can be either
"idle", "processing", or "drop"; a rough sketch follows the two flows
below.

Then a normal command execution routine would be:

0. by default, the monitor state is "idle"
1. when we dequeue a request, mark the monitor as "processing" and
   execute the command
2. when replying: if still "processing", send the reply; if "drop",
   drop the reply for the current command.  Here we reply.

Instead, if disconnect/reconnect happens:

0. by default, the monitor state is "idle"
1. when we dequeue a request, mark the monitor as "processing" and
   execute the command
2. port disconnected: in EVENT_CLOSED, we set the state to "drop"
3. port reconnected: we do nothing (so the execution state is kept
   across the reconnection)
4. when replying: if still "processing", send the reply; if "drop",
   drop the reply for the current command.  Here we drop the reply.
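
A rough sketch of that state machine in C (hypothetical names, and the
lock left out for brevity):

    typedef enum MonCmdState {
        MON_CMD_IDLE,        /* no command being executed */
        MON_CMD_PROCESSING,  /* a dequeued command is running */
        MON_CMD_DROP,        /* connection closed while running */
    } MonCmdState;

    static MonCmdState state = MON_CMD_IDLE;

    static void on_dequeue(void)
    {
        state = MON_CMD_PROCESSING;    /* step 1 in both flows above */
    }

    static void on_closed(void)
    {
        if (state == MON_CMD_PROCESSING) {
            state = MON_CMD_DROP;      /* EVENT_CLOSED mid-command */
        }
    }

    /* Returns whether the reply for the just-finished command should
     * actually be sent; "drop" means it is silently discarded. */
    static int reply_should_be_sent(void)
    {
        int send = (state == MON_CMD_PROCESSING);
        state = MON_CMD_IDLE;
        return send;
    }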

But... IMHO this is too awkward just for this single "drop the last
command reply" purpose.  I would prefer to use documentation instead,
letting clients drop unknown responses directly, if that's ok with
everyone.

Thanks,

> 
> Dave
> 
> > -- 
> > Peter Xu
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

-- 
Peter Xu

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 6 months ago
* Marc-André Lureau (marcandre.lureau@gmail.com) wrote:
> On Mon, Sep 18, 2017 at 1:26 PM, Dr. David Alan Gilbert
> <dgilbert@redhat.com> wrote:
> [...]
> >> I don't see any restriction today in the protocol in connecting with a
> >> new client that may not know anything from a previous client.
> >
> > Well, it knows it's doing a reconnection.
> 
> If you assume the "same client" reconnects to the monitor, I agree.
> But this is a restriction of monitor usage.

I think I'm just requiring each monitor that connects to have a unique
set of IDs;  I don't really want the objects that Eric suggests; I'll
just take a string starting with a unique ID.
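
For example (purely illustrative - nothing in the protocol mandates
this exact format), a client could compose such IDs as
"<client name>-<session id>-<sequence>", where the session id is
regenerated at each client startup:

    #include <stdio.h>

    static unsigned long seq;           /* per-session request counter */

    static void make_id(char *buf, size_t len,
                        const char *name, const char *session)
    {
        snprintf(buf, len, "%s-%s-%lu", name, session, seq++);
    }

    int main(void)
    {
        char id[64];

        make_id(id, sizeof(id), "libvirt", "f81d4fae");
        printf("%s\n", id);             /* prints: libvirt-f81d4fae-0 */
        return 0;
    }

Any string scheme with a unique per-session prefix would do; the point
is only that IDs from different sessions can never collide.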

> >> How would you tell it not to use old IDs? Just by writing an unwritten
> >> rule, because we don't want to fix the per-connection client session
> >> handling in qemu?
> >
> > By writing a written rule!  This out-of-order stuff we're adding here
> > is a change to the interface and we can define what we require of the
> > client.  As long as what we expect is reasonable then we might end
> > up with something that's simpler for both the client and qemu.
> 
> As long as we don't break existing qmp clients.

Right.

> > And I worry this series keeps getting more and more complex for weird
> > edge cases.
> 
> That's an interesting point of view.  I see the point in fixing weird
> edge cases in qemu RPC code, more than in other code: we already
> develop with weird edge cases in mind & tests, like the
> parsing/checking of the JSON schema for example, a similar area with
> the same maintainer.

I'm more worried here about the ability to execute non-blocking
commands, and about being able to do that without rewriting the
planet.  If we can avoid having edge cases just by defining what's
required, then I'm happy.

Dave

> > Dave
> >
> >> >
> >> > Dave
> >> >
> >> >> >
> >> >> > Then, if a client disconnected and reconnected, it should not have a
> >> >> > problem generating duplicated ids for requests, since it should know
> >> >> > what requests it has sent already.  The simplest case I can think of
> >> >> > is, the ID should contain the following tuple:
> >> >>
> >> >> If you assume the "same" client will recover its state, yes.
> >> >>
> >> >> >
> >> >> >   (client name, client unique ID, request ID)
> >> >> >
> >> >> > Here "client name" can be something like "libvirt", which is the name
> >> >> > of client application;
> >> >> >
> >> >> > "client unique ID" can be anything generated when client starts, it
> >> >> > identifies a single client session, maybe a UUID.
> >> >> >
> >> >> > "request ID" can be a unsigned integer starts from zero, and increases
> >> >> > each time the client sends one request.
> >> >>
> >> >> This is introducing session handling, and it can be done on the
> >> >> server side only, without changes in the protocol, I believe.
> >> >>
> >> >> >
> >> >> > I believe current libvirt is using "client name" + "request ID".  It's
> >> >> > something similar (after all, I think we don't normally have >1 libvirt
> >> >> > managing a single QEMU, so I think it should be good enough).
> >> >>
> >> >> I am not sure we should base our protocol usage assumptions on
> >> >> libvirt only, but rather on what is possible today (like queuing
> >> >> requests in the socket etc.).
> >> >>
> >> >> > Then even if the client disconnects and reconnects, the request IDs
> >> >> > won't be lost, and no duplication would happen IMHO.
> >> >> >
> >> >> >>
> >> >> >> >
> >> >> >> >> >
> >> >> >> >> > >
> >> >> >> >> > > Then it introduced the "allow-oob" parameter in QAPI schema to define
> >> >> >> >> > > commands, and "run-oob" flag to let oob-allowed command to run in the
> >> >> >> >> > > parser.
> >> >> >> >> >
> >> >> >> >> > From a protocol point of view, I find the per-command "run-oob"
> >> >> >> >> > distinction a bit pointless. It helps with legacy clients that
> >> >> >> >> > wouldn't expect out-of-order replies if qemu were to run oob
> >> >> >> >> > commands oob by default, though.
> >> >> >> >
> >> >> >> > After all, oob somehow breaks the existing rules of sync execution.
> >> >> >> > I thought the more important goal was at least to keep the legacy
> >> >> >> > behaviors when adding new things, no?
> >> >> >>
> >> >> >> Of course we have to keep compatibility. What do you mean by "oob
> >> >> >> somehow breaks the existing rules of sync execution"? oob means
> >> >> >> queuing and unordered reply support, so clearly this is breaking the
> >> >> >> current "mostly ordered" behaviour (mostly, because events may still
> >> >> >> come at any time..., and there is the reconnect issue discussed above).
> >> >> >
> >> >> > Yes.  That's what I mean: it breaks the synchronous semantics.  But
> >> >> > I should definitely not call it a "break", though, since old clients
> >> >> > will work perfectly fine with it.  Sorry for the bad wording.
> >> >> >
> >> >> >>
> >> >> >> >> > Clients shouldn't care about how/where a command is queued, or
> >> >> >> >> > whether it is queued at all. If they send a command, they want it
> >> >> >> >> > processed as quickly as possible. However, it can be interesting
> >> >> >> >> > to know whether the implementation of a command will be able to
> >> >> >> >> > deliver oob, so that data in the introspection could be useful.
> >> >> >> >> >
> >> >> >> >> > I would rather propose a client/server capability in qmp_capabilities,
> >> >> >> >> > call it "oob":
> >> >> >> >> >
> >> >> >> >> > This capability indicates support for oob commands.
> >> >> >> >>
> >> >> >> >> The problem is indicating which commands support oob as opposed to
> >> >> >> >> indicating whether oob is present at all.  Future versions will
> >> >> >> >> probably make more commands oob-able and a client will want to know
> >> >> >> >> whether it can rely on a particular command being non-blocking.
> >> >> >> >
> >> >> >> > Yes.
> >> >> >> >
> >> >> >> > And IMHO we don't urgently need that "whether the server globally
> >> >> >> > supports oob" thing.  The client can already know that from
> >> >> >> > query-qmp-schema - there will always be the new "allow-oob" field
> >> >> >> > for command-typed entries.  IMHO that's a solid hint.
> >> >> >> >
> >> >> >> > But I don't object to return it as well in qmp_capabilities.
> >> >> >>
> >> >> >> Does it feel right that the client can specify how the commands are
> >> >> >> processed / queued?  Isn't it preferable to leave that to the server
> >> >> >> to decide? Why would a client specify that? And should the server be
> >> >> >> expected to behave differently? What the client needs to be able to
> >> >> >> do is match the unordered replies, and that can be stated during cap
> >> >> >> negotiation / qmp_capabilities. The server is expected to do a best
> >> >> >> effort to handle commands and their priorities. If the client needs
> >> >> >> several command queues, it is simpler to open several connections
> >> >> >> rather than trying to fit that weird priority logic into the protocol
> >> >> >> imho.
> >> >> >
> >> >> > Sorry, I may have missed the point here.  We were discussing a
> >> >> > global hint for "oob" support, am I right?  Then, could I ask what's
> >> >> > the "weird priority logic" you mentioned?
> >> >>
> >> >> I call the per-message oob hint a kind of priority logic, since you can
> >> >> make the same request without oob in the same session and in parallel.
> >> >>
> >> >> >>
> >> >> >> >
> >> >> >> >>
> >> >> >> >> > An oob command is a regular client message request with the "id"
> >> >> >> >> > member mandatory, but the reply may be delivered
> >> >> >> >> > out of order by the server if the client supports
> >> >> >> >> > it too.
> >> >> >> >> >
> >> >> >> >> > If both the server and the client have the "oob" capability, the
> >> >> >> >> > server can handle new client requests while previous requests are being
> >> >> >> >> > processed.
> >> >> >> >> >
> >> >> >> >> > If the client doesn't have the "oob" capability, it may still call
> >> >> >> >> > an oob command, and make multiple outstanding calls. In this case,
> >> >> >> >> > the commands are processed in order, so the replies will also be in
> >> >> >> >> > order. The "id" member isn't mandatory in this case.
> >> >> >> >> >
> >> >> >> >> > The client should match the replies with the "id" member associated
> >> >> >> >> > with the requests.
> >> >> >> >> >
> >> >> >> >> > When a client is disconnected, the pending commands are not
> >> >> >> >> > necessarily cancelled. But future clients will not get replies to
> >> >> >> >> > commands they didn't make (they might, however, receive
> >> >> >> >> > side-effect events).
> >> >> >> >>
> >> >> >> >> What's the behaviour on the current monitor?
> >> >> >> >
> >> >> >> > Yeah, I want to ask the same question, along with the question
> >> >> >> > about [1] above.
> >> >> >> >
> >> >> >> > IMHO this series will not change these behaviors, so they will be
> >> >> >> > the same before/after this series.  E.g., when the client drops
> >> >> >> > right after a command is executed, I think we will still execute
> >> >> >> > the command, though we should encounter something odd in
> >> >> >> > monitor_json_emitter() somewhere when we want to respond.  And the
> >> >> >> > same will happen after this series.
> >> >> >>
> >> >> >> I think it can get worse after your series, because you queue the
> >> >> >> commands, so clearly a new client can get replies from an old
> >> >> >> client's commands. As said above, I am not convinced you can get
> >> >> >> into that situation with the current code.
> >> >> >
> >> >> > Hmm, seems so.  But would this be a big problem?
> >> >> >
> >> >> > I really think the new client should just throw that response away
> >> >> > if it does not recognize it (from peeking at the "id" field), just
> >> >> > like my opinion above.
> >> >>
> >> >> This is a high expectation.
> >> >>
> >> >>
> >> >> --
> >> >> Marc-André Lureau
> >> > --
> >> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >>
> >>
> >>
> >> --
> >> Marc-André Lureau
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> 
> 
> -- 
> Marc-André Lureau
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Eric Blake 6 years, 7 months ago
On 09/18/2017 05:55 AM, Dr. David Alan Gilbert wrote:

>>> I think we have other quite simple ways to solve the "unexpected
>>> reply" and "per-client-id duplication" issues you have mentioned.
>>>
>>> Firstly, when a client gets an unexpected reply ("id" field not in its
>>> own request queue), the client should just ignore that reply, which
>>> seems natural to me.

That's probably reasonable, if we document it.
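
(Concretely - with hypothetical ids - a client that has only ever sent
"libvirt-1" on its own connection would just drop a stale reply such as:

  <- { "return": {}, "id": "libvirt-0" }

left over from a command issued on a previous connection.)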

>>
>> The trouble is that it may legitimately use the same "id" value for
>> new requests. And I don't see a simple way to handle that without
>> races.
> 
> Under what circumstances can it reuse the same ID for new requests?

Libvirt uses distinct id's for every message on a single connection, but
there is always the possibility that it will use id 'libvirt-0' on one
connection, then restart libvirtd, then use id 'libvirt-0' on the second
connection (there's nothing that I saw in libvirt code that saves the
current 'mon->nextSerial' value in XML to survive libvirtd restarts).
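
(I.e., hypothetically:

  connection 1: -> { "execute": "cont", "id": "libvirt-0" }
  [libvirtd restarts before the reply arrives]
  connection 2: -> { "execute": "query-status", "id": "libvirt-0" }

at which point, once replies can be queued across connections, a late
reply carrying "id": "libvirt-0" is ambiguous.)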

> Can't we simply tell it not to?

Since use of OOB handling will require client opt-in, yes, we can make
part of the opt-in process be a contract that the client has to do a
better job of avoiding duplicate id's across reconnects, if we think
that is easier to maintain.

>>>
>>> Then, if a client disconnected and reconnected, it should not have a
>>> problem generating duplicated ids for requests, since it should know
>>> what requests it has sent already.  The simplest case I can think of
>>> is, the ID should contain the following tuple:
>>
>> If you assume the "same" client will recover its state, yes.
>>
>>>
>>>   (client name, client unique ID, request ID)
>>>
>>> Here "client name" can be something like "libvirt", which is the name
>>> of client application;
>>>
>>> "client unique ID" can be anything generated when client starts, it
>>> identifies a single client session, maybe a UUID.
>>>
>>> "request ID" can be a unsigned integer starts from zero, and increases
>>> each time the client sends one request.
>>
>> This is introducing session handling, and it can be done on the
>> server side only, without changes in the protocol, I believe.

The 'id' field can be _any_ JSON object - libvirt currently sends a
string, but could just as easily send a dict, and then libvirt could
supply whatever it wanted in the dict, including uuids, to ensure that
future reconnects don't reuse the id of a previous connection.  But
right now, 'id' is opaque to qemu (and should stay that way) - if qemu
is going to do any work to ensure that it doesn't send a stale reply to
a new connection, then that has to be tracked externally from whatever
'id' is passed in; or we just document that clients wanting to use OOB
handling have to be careful of their choice of 'id' (and leave it up to
the client to avoid confusion, because qemu doesn't care about it).
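
(Purely as illustration, a hypothetical dict-valued id carrying the
(client name, session, serial) tuple discussed earlier:

  -> { "execute": "migrate-incoming",
       "arguments": { "uri": "tcp:0:4446" },
       "id": { "client": "libvirt",
               "session": "8bf4a64e-2f61-4a3c-971a-6d23a1e3d1c9",
               "serial": 0 } }

qemu would treat the whole dict as opaque and simply echo it back in
the corresponding reply.)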

I'm also fine with requiring that clients that opt-in to OOB handling be
documented as having to ignore unknown 'id' responses, since we already
document that clients must be able to ignore unknown 'event' messages.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Dr. David Alan Gilbert 6 years, 7 months ago
* Peter Xu (peterx@redhat.com) wrote:
> This series was born from this one:
> 
>   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html

Are patches 1..6 separable and mergeable without the rest?

Dave

> [...]
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Posted by Peter Xu 6 years, 7 months ago
On Thu, Sep 14, 2017 at 07:56:04PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > This series was born from this one:
> > 
> >   https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04310.html
> 
> Are patches 1..6 separable and mergable without the rest ?

Yes I think so.

(I always try to put prerequisite patches like these at the front of
 my series rather than splitting them into separate series, since I
 find that more convenient to manage (or to add new ones when
 respinning), and also easier for reviewers (so people don't need to
 hunt for the dependencies).  And since they are at the head, we can
 merge them without rebasing issues once they are ready, while the
 rest may still need further work.  Hopefully this is the right thing
 to do.)

-- 
Peter Xu