[PATCH 0/2] Migration time prediction using calc-dirty-rate

Andrei Gudkov via posted 2 patches 1 year, 2 months ago
Patches applied successfully (tree, apply log)
git fetch https://github.com/patchew-project/qemu tags/patchew/cover.1677589218.git.gudkov.andrei@huawei.com
Maintainers: Juan Quintela <quintela@redhat.com>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Eric Blake <eblake@redhat.com>, Markus Armbruster <armbru@redhat.com>, John Snow <jsnow@redhat.com>, Cleber Rosa <crosa@redhat.com>
There is a newer version of this series
[PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Andrei Gudkov via 1 year, 2 months ago
The overall goal of this patch series is to predict the time it would
take to migrate a VM in precopy mode, based on the maximum allowed
downtime, network bandwidth, and metrics collected with "calc-dirty-rate".
The predictor itself is a simple Python script that closely follows the
iterations of the migration algorithm: compute how long it would take to
copy the dirty pages, estimate the number of pages dirtied by the VM
since the beginning of the last iteration, and repeat until the estimated
iteration time fits within the maximum allowed downtime. However, to
achieve reasonable accuracy, the predictor requires additional metrics,
which have been implemented in "calc-dirty-rate".
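The iteration loop described above can be sketched as follows. This is a
deliberately simplified model for illustration only, not the actual
scripts/predict_migration.py; the dirtied_in() function and all names here
are my own assumptions:

```python
def predict_migration_time(ram_bytes, bandwidth_bps, max_downtime_s,
                           dirtied_in, max_iterations=100):
    """Simulate precopy iterations: each round copies the data dirtied
    during the previous round, until one round fits into max_downtime_s.

    dirtied_in(t) -> bytes dirtied t seconds after a clean state.
    Returns predicted total time in seconds, or None if no convergence.
    """
    total = 0.0
    to_copy = ram_bytes  # the first iteration copies all of RAM
    for _ in range(max_iterations):
        iter_time = to_copy / bandwidth_bps
        total += iter_time
        if iter_time <= max_downtime_s:
            return total  # the final round fits into the allowed downtime
        to_copy = dirtied_in(iter_time)  # data dirtied while we were copying
    return None  # did not converge


# Example: 32 GiB VM, 10 Gbps link, 500 ms max downtime, and a workload
# dirtying 100 MiB/s (capped at the RAM size).
GiB = 1 << 30
t = predict_migration_time(32 * GiB, 10e9 / 8, 0.5,
                           lambda s: min(100 * (1 << 20) * s, 32 * GiB))
```

Since the dirty rate (100 MiB/s) is far below the link speed (~1.2 GiB/s),
the loop converges after a few iterations.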

Summary of calc-dirty-rate changes:

1. The most important change is that calc-dirty-rate now produces
   a *vector* of dirty page measurements for progressively increasing
   time periods: 125ms, 250ms, 500ms, 750ms, 1000ms, 1500ms, ..., up to
   the specified calc-time. The motivation behind this change is that
   the number of dirtied pages as a function of time, starting from a
   "clean state" (a new migration iteration), is far from linear. The
   shape of this function depends on the workload type and intensity.
   Measuring the number of dirty pages at progressively increasing
   periods makes it possible to reconstruct this function using
   piece-wise interpolation.

2. A new metric was added: the number of all-zero pages.
   The predictor needs to distinguish between zero and non-zero pages
   because during migration only an 8-byte header is placed on the wire
   for an all-zero page.

3. The hashing function was changed from CRC32 to xxHash.
   This reduces the overhead of sampling by roughly 10x, which is
   important now that some of the measurement periods are sub-second.

4. Other trivial metrics were added for convenience: the total number
   of VM pages, the number of sampled pages, and the page size.
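As an illustration of point 1, the measured vector can be turned back into
an approximate continuous function with piece-wise linear interpolation.
This is a sketch under my own naming; the sample values are the first few
entries from the calc-dirty-rate output shown below:

```python
import bisect

# Measurement periods (ms) and sampled dirty-page counts, as reported
# by calc-dirty-rate (first six entries of the example output).
periods = [125, 250, 375, 500, 750, 1000]
n_dirty = [33, 78, 119, 151, 217, 236]

def dirty_pages_at(t_ms):
    """Piece-wise linear estimate of sampled dirty pages t_ms after a
    clean state.  Clamps outside the measured range."""
    if t_ms <= periods[0]:
        return n_dirty[0] * t_ms / periods[0]  # assume 0 pages at t=0
    if t_ms >= periods[-1]:
        return n_dirty[-1]
    i = bisect.bisect_right(periods, t_ms)
    t0, t1 = periods[i - 1], periods[i]
    d0, d1 = n_dirty[i - 1], n_dirty[i]
    return d0 + (d1 - d0) * (t_ms - t0) / (t1 - t0)
```

A predictor can then query this function for the arbitrary iteration
lengths produced by the simulation, rather than only at the measured
periods.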


After these changes output from calc-dirty-rate looks like this:

{
  "page-size": 4096,
  "periods": [125, 250, 375, 500, 750, 1000, 1500,
              2000, 3000, 4001, 6000, 8000, 10000,
              15000, 20000, 25000, 30000, 35000,
              40000, 45000, 50000, 60000],
  "status": "measured",
  "sample-pages": 512,
  "dirty-rate": 98,
  "mode": "page-sampling",
  "n-dirty-pages": [33, 78, 119, 151, 217, 236, 293, 336,
                    425, 505, 620, 756, 898, 1204, 1457,
                    1723, 1934, 2141, 2328, 2522, 2675, 2958],
  "n-sampled-pages": 16392,
  "n-zero-pages": 10060,
  "n-total-pages": 8392704,
  "start-time": 2916750,
  "calc-time": 60
}
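To illustrate how a predictor might consume these fields (the field names
match the output above; the scaling itself is my sketch, not necessarily
what scripts/predict_migration.py does): sampled counts are scaled up by
n-total-pages / n-sampled-pages, and an all-zero page costs only an
8-byte header on the wire while any other page costs roughly page-size
bytes:

```python
def estimate_first_pass_bytes(report):
    """Rough bytes-on-the-wire for the first precopy pass, which copies
    all of RAM: an all-zero page sends an 8-byte header, any other page
    sends a full page (per-page header overhead ignored)."""
    total = report["n-total-pages"]
    # Scale the sampled zero-page count up to the whole VM.
    n_zero = total * report["n-zero-pages"] / report["n-sampled-pages"]
    n_nonzero = total - n_zero
    return n_zero * 8 + n_nonzero * report["page-size"]

report = {"page-size": 4096, "n-sampled-pages": 16392,
          "n-zero-pages": 10060, "n-total-pages": 8392704}
wire_bytes = estimate_first_pass_bytes(report)
```

For the example report this comes out to roughly 12.4 GiB, well below the
nominal 32 GiB of RAM, because about 61% of the sampled pages are all-zero.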

Passing this data into the prediction script, we get the following estimates:

Downtime> |    125ms |    250ms |    500ms |   1000ms |   5000ms |    unlim
---------------------------------------------------------------------------
 100 Mbps |        - |        - |        - |        - |        - |   16m59s  
   1 Gbps |        - |        - |        - |        - |        - |    1m40s
   2 Gbps |        - |        - |        - |        - |    1m41s |      50s  
 2.5 Gbps |        - |        - |        - |        - |    1m07s |      40s
   5 Gbps |      48s |      46s |      31s |      28s |      25s |      20s
  10 Gbps |      13s |      12s |      12s |      12s |      12s |      10s
  25 Gbps |       5s |       5s |       5s |       5s |       4s |       4s
  40 Gbps |       3s |       3s |       3s |       3s |       3s |       3s


The quality of prediction was tested with the YCSB benchmark. A Memcached
instance was installed into a 32GiB VM, and a client generated a stream
of requests. Between experiments we varied the request size distribution,
the number of threads, and the location of the client (inside or outside
the VM).
After a short preheat phase, we measured the dirty rate:
1. {"execute": "calc-dirty-rate", "arguments":{"calc-time":60}}
2. Wait 60 seconds
3. Collect results with {"execute": "query-dirty-rate"}
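Scripted against the QMP monitor socket, the same three steps look
roughly like this. This is a minimal raw-socket sketch of the standard
QMP handshake; the socket path and helper names are my own:

```python
import json
import socket
import time

def qmp_command(chan, execute, arguments=None):
    """Send one QMP command over a file-like channel, return the response."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    chan.write(json.dumps(cmd) + "\n")
    chan.flush()
    while True:  # skip async QMP events until the actual response arrives
        resp = json.loads(chan.readline())
        if "return" in resp or "error" in resp:
            return resp

def measure_dirty_rate(qmp_socket_path, calc_time=60):
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(qmp_socket_path)
        chan = s.makefile("rw")
        chan.readline()                        # discard the QMP greeting
        qmp_command(chan, "qmp_capabilities")  # leave capabilities mode
        qmp_command(chan, "calc-dirty-rate", {"calc-time": calc_time})
        time.sleep(calc_time + 1)              # let the measurement finish
        return qmp_command(chan, "query-dirty-rate")["return"]
```

The returned dictionary is the JSON report shown above and can be fed
directly into the prediction script.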

Afterwards we tried to migrate the VM after randomly selecting the
maximum downtime and the bandwidth limit. The typical prediction error
is 6-7%, with only 180 out of 5779 experiments failing badly: a
prediction error >=25%, or a prediction of migration success when in
fact the migration did not converge.


Andrei Gudkov (2):
  migration/calc-dirty-rate: new metrics in sampling mode
  migration/calc-dirty-rate: tool to predict migration time

 MAINTAINERS                  |   1 +
 migration/dirtyrate.c        | 219 +++++++++++++++++++++------
 migration/dirtyrate.h        |  26 +++-
 qapi/migration.json          |  25 ++++
 scripts/predict_migration.py | 283 +++++++++++++++++++++++++++++++++++
 5 files changed, 502 insertions(+), 52 deletions(-)
 create mode 100644 scripts/predict_migration.py

-- 
2.30.2
RE: [PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Gudkov Andrei via 1 year ago
ping5

https://patchew.org/QEMU/cover.1677589218.git.gudkov.andrei@huawei.com/

Re: [PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Daniel P. Berrangé 1 year ago
Juan,

This series could use some feedback from the migration maintainer
POV. I think it looks like a valuable idea to take which could
significantly help mgmt apps plan migration.

Daniel

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Daniel P. Berrangé 1 year ago
On Tue, Feb 28, 2023 at 04:16:01PM +0300, Andrei Gudkov via wrote:
> Summary of calc-dirty-rate changes:
> 
> 1. The most important change is that now calc-dirty-rate produces
>    a *vector* of dirty page measurements for progressively increasing time
>    periods: 125ms, 250, 500, 750, 1000, 1500, .., up to specified calc-time.
>    The motivation behind such change is that number of dirtied pages as
>    a function of time starting from "clean state" (new migration iteration)
>    is far from linear. Shape of this function depends on the workload type
>    and intensity. Measuring number of dirty pages at progressively
>    increasing periods allows to reconstruct this function using piece-wise
>    interpolation.
> 
> 2. New metric added -- number of all-zero pages.
>    Predictor needs to distinguish between number of zero and non-zero pages
>    because during migration only 8 byte header is placed on the wire for
>    all-zero page.
> 
> 3. Hashing function was changed from CRC32 to xxHash.
>    This reduces overhead of sampling by ~10 times, which is important since
>    now some of the measurement periods are sub-second.

Very good !

> 
> 4. Other trivial metrics were added for convenience: total number
>    of VM pages, number of sampled pages, page size.
> 
> 
> After these changes output from calc-dirty-rate looks like this:
> 
> {
>   "page-size": 4096,
>   "periods": [125, 250, 375, 500, 750, 1000, 1500,
>               2000, 3000, 4001, 6000, 8000, 10000,
>               15000, 20000, 25000, 30000, 35000,
>               40000, 45000, 50000, 60000],
>   "status": "measured",
>   "sample-pages": 512,
>   "dirty-rate": 98,
>   "mode": "page-sampling",
>   "n-dirty-pages": [33, 78, 119, 151, 217, 236, 293, 336,
>                     425, 505, 620, 756, 898, 1204, 1457,
>                     1723, 1934, 2141, 2328, 2522, 2675, 2958],
>   "n-sampled-pages": 16392,
>   "n-zero-pages": 10060,
>   "n-total-pages": 8392704,
>   "start-time": 2916750,
>   "calc-time": 60
> }

Ok, so the "periods" and "n-dirty-pages" arrays correlate with
each other.

> 
> Passing this data into prediction script, we get the following estimations:
> 
> Downtime> |    125ms |    250ms |    500ms |   1000ms |   5000ms |    unlim
> ---------------------------------------------------------------------------
>  100 Mbps |        - |        - |        - |        - |        - |   16m59s  
>    1 Gbps |        - |        - |        - |        - |        - |    1m40s
>    2 Gbps |        - |        - |        - |        - |    1m41s |      50s  
>  2.5 Gbps |        - |        - |        - |        - |    1m07s |      40s
>    5 Gbps |      48s |      46s |      31s |      28s |      25s |      20s
>   10 Gbps |      13s |      12s |      12s |      12s |      12s |      10s
>   25 Gbps |       5s |       5s |       5s |       5s |       4s |       4s
>   40 Gbps |       3s |       3s |       3s |       3s |       3s |       3s

This is fascinating and really helpful as an idea. It so nicely
shows when it is not even worth bothering to start the migration
unless you're willing to put up with a large (5 sec) downtime,
or to use auto-converge/post-copy.

I wonder if the calc-dirty-rate measurements also give enough info
to predict the likely number/duration of async page fetches needed
during the post-copy phase? Or does this give enough info to predict
how far down auto-converge should throttle the guest to enable
convergence?

> Quality of prediction was tested with YCSB benchmark. Memcached instance
> was installed into 32GiB VM, and a client generated a stream of requests.
> Between experiments we varied request size distribution, number of threads,
> and location of the client (inside or outside the VM).
> After short preheat phase, we measured calc-dirty-rate:
> 1. {"execute": "calc-dirty-rate", "arguments":{"calc-time":60}}
> 2. Wait 60 seconds
> 3. Collect results with {"execute": "query-dirty-rate"}
> 
> Afterwards we tried to migrate VM after randomly selecting max downtime
> and bandwidth limit. Typical prediction error is 6-7%, with only 180 out
> of 5779 experiments failing badly: prediction error >=25% or incorrectly
> predicting migration success when in fact it didn't converge.

Nice results


With regards,
Daniel
RE: [PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Gudkov Andrei via 1 year ago
Thank you for the review. I submitted a new version of the patch:
https://patchew.org/QEMU/cover.1682598010.git.gudkov.andrei@huawei.com/

> -----Original Message-----
> From: Daniel P. Berrangé [mailto:berrange@redhat.com]
> Sent: Tuesday, April 18, 2023 20:18
> To: Gudkov Andrei <gudkov.andrei@huawei.com>
> Cc: qemu-devel@nongnu.org; quintela@redhat.com; dgilbert@redhat.com
> Subject: Re: [PATCH 0/2] Migration time prediction using calc-dirty-rate
> 
> I wonder if the calc-dirty-rate measurements also give enough info
> to predict the likely number/duration of async page fetches needed
> during post-copy phase ? Or does this give enough info to predict
> how far down auto-converge should throttle the guest to enable
> convergance.

I also was thinking about supporting more migration features.
Currently my understanding is the following:

1. It *should* be possible to support throttling directly inside the
   prediction script without any changes to calc-dirty-rate. Maybe we
   can also suggest the level of throttling required to achieve the
   target downtime.

2. Supporting compression would be harder because we would need to know
   the average compression ratio and compression speed. This would
   require further changes to calc-dirty-rate.

3. To support post-copy, we would need to know the network
   characteristics, namely latency and jitter. Both can be quite
   unstable unless the source and target hosts are located very close
   to each other in the network topology.


RE: [PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Gudkov Andrei via 1 year ago
ping4

https://patchew.org/QEMU/cover.1677589218.git.gudkov.andrei@huawei.com/

RE: [PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Gudkov Andrei via 1 year, 1 month ago
ping3

https://patchew.org/QEMU/cover.1677589218.git.gudkov.andrei@huawei.com/

RE: [PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Gudkov Andrei via 1 year, 1 month ago
ping2

https://patchew.org/QEMU/cover.1677589218.git.gudkov.andrei@huawei.com/

RE: [PATCH 0/2] Migration time prediction using calc-dirty-rate
Posted by Gudkov Andrei via 1 year, 1 month ago
ping

https://patchew.org/QEMU/cover.1677589218.git.gudkov.andrei@huawei.com/
