I haven't seen any problem using 64 or 128. And it makes for much
less contention on the locks. Just change it.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/migration/migration.c b/migration/migration.c
index ef1d53cde2..f673486679 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -81,7 +81,7 @@
/* The delay time (in ms) between two COLO checkpoints */
#define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
#define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
-#define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
+#define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 128
/* Background transfer rate for postcopy, 0 means unlimited, note
* that page requests can still exceed this limit.
--
2.20.1
On Wed, Feb 06, 2019 at 02:23:28PM +0100, Juan Quintela wrote:
> I haven't seen any problem using 64 or 128. And it makes for much
> less contention on the locks. Just change it.

Isn't there an issue with having a fixed page count, given the very
different default page sizes across architectures?

x86 uses 4 KB pages, while ppc64 uses 64 KB pages IIUC.

This would mean the current value of 64 pages corresponds to 1/4 MB on
x86 and 4 MB on ppc64. The new value would be 1/2 MB on x86 and 8 MB
on ppc64.

Should we instead be measuring this tunable in units that are
independent of page size? e.g. measure in KB, with a requirement that
the value is a multiple of the page size, and then set the default to
512 KB?

>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/migration.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index ef1d53cde2..f673486679 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -81,7 +81,7 @@
>  /* The delay time (in ms) between two COLO checkpoints */
>  #define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
>  #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
> -#define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 16
> +#define DEFAULT_MIGRATE_MULTIFD_PAGE_COUNT 128
>
>  /* Background transfer rate for postcopy, 0 means unlimited, note
>   * that page requests can still exceed this limit.
> --
> 2.20.1

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
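As a rough illustration of the size-based tunable Daniel suggests, a
validator could look like the sketch below. Only qemu_target_page_size()
is existing QEMU API; the function name and the KB-based parameter are
assumptions for illustration, not code from this thread.

    #include <stdbool.h>
    #include <stdint.h>

    #include "exec/target_page.h"   /* qemu_target_page_size() */

    /*
     * Hypothetical validator for a size-in-KB tunable: accept the
     * value only if it covers at least one target page and is a whole
     * multiple of the target page size.
     */
    static bool multifd_size_kb_valid(uint64_t size_kb)
    {
        uint64_t size = size_kb * 1024;
        uint64_t page = qemu_target_page_size();

        return size >= page && size % page == 0;
    }

With the suggested 512 KB default, that is 128 pages with 4 KB x86
pages but only 8 pages with 64 KB ppc64 pages.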
Daniel P. Berrangé <berrange@redhat.com> wrote:
> On Wed, Feb 06, 2019 at 02:23:28PM +0100, Juan Quintela wrote:
>> I haven't seen any problem using 64 or 128. And it makes for much
>> less contention on the locks. Just change it.
>
> Isn't there an issue with having a fixed page count, given the very
> different default page sizes across architectures?
>
> x86 uses 4 KB pages, while ppc64 uses 64 KB pages IIUC.
>
> This would mean the current value of 64 pages corresponds to 1/4 MB on
> x86 and 4 MB on ppc64. The new value would be 1/2 MB on x86 and 8 MB
> on ppc64.

I saw no difference on x86 between 64 and 128 pages. Bigger packets
mean half the contention on the locks and are better for compression
(see the next series).

> Should we instead be measuring this tunable in units that are
> independent of page size? e.g. measure in KB, with a requirement that
> the value is a multiple of the page size, and then set the default to
> 512 KB?

See the next patch; I just dropped the tunable altogether. Libvirt
doesn't want to support it (difficult to explain), and in the past you
asked me to choose a sane value and live with it O:-)

It was good for testing, though.

While we are here, is there a good value for a network packet? I put it
in pages because it facilitates the coding, but doing a
CONSTANT / qemu_target_page_size() is not going to complicate anything
either.

Later, Juan.
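The CONSTANT / qemu_target_page_size() approach Juan mentions could be
sketched roughly as below, assuming the 512 KB figure from Daniel's
mail; MULTIFD_PACKET_SIZE and multifd_page_count() are illustrative
names, not code from this thread.

    #include <stdint.h>

    #include "exec/target_page.h"   /* qemu_target_page_size() */

    /* Assumed constant, following the 512 KB suggestion above. */
    #define MULTIFD_PACKET_SIZE (512 * 1024)

    /*
     * Derive the per-packet page count from a fixed byte budget:
     * 512 KB / 4 KB = 128 pages on x86,
     * 512 KB / 64 KB = 8 pages on ppc64.
     */
    static uint32_t multifd_page_count(void)
    {
        return MULTIFD_PACKET_SIZE / qemu_target_page_size();
    }

This keeps the wire packet the same size on every architecture, at the
cost of a page count that varies with the target page size.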
On Thu, Feb 07, 2019 at 01:13:51PM +0100, Juan Quintela wrote:
> Daniel P. Berrangé <berrange@redhat.com> wrote:
> > On Wed, Feb 06, 2019 at 02:23:28PM +0100, Juan Quintela wrote:
> >> I haven't seen any problem using 64 or 128. And it makes for much
> >> less contention on the locks. Just change it.
> >
> > Isn't there an issue with having a fixed page count, given the very
> > different default page sizes across architectures?
> >
> > x86 uses 4 KB pages, while ppc64 uses 64 KB pages IIUC.
> >
> > This would mean the current value of 64 pages corresponds to 1/4 MB
> > on x86 and 4 MB on ppc64. The new value would be 1/2 MB on x86 and
> > 8 MB on ppc64.
>
> I saw no difference on x86 between 64 and 128 pages. Bigger packets
> mean half the contention on the locks and are better for compression
> (see the next series).

1/4 MB -> 1/2 MB is not all that significant a change, but 1/2 MB vs
8 MB is very significant. I wouldn't be surprised if this difference in
values results in rather different performance characteristics for
multifd migration between x86 and ppc64.

> > Should we instead be measuring this tunable in units that are
> > independent of page size? e.g. measure in KB, with a requirement
> > that the value is a multiple of the page size, and then set the
> > default to 512 KB?
>
> See the next patch; I just dropped the tunable altogether. Libvirt
> doesn't want to support it (difficult to explain), and in the past you
> asked me to choose a sane value and live with it O:-)
>
> It was good for testing, though.

Yep, I think it's good if QEMU chooses a sane value. I'm just wondering
whether the value chosen is actually suitable for non-x86
architectures.

> While we are here, is there a good value for a network packet?

I don't have any particular suggestion here. We would probably have to
look at real performance measurements of migration vs guest workload to
understand whether we've got a good size.

> I put it in pages because it facilitates the coding, but doing a
> CONSTANT / qemu_target_page_size() is not going to complicate anything
> either.

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|