[PULL 00/20] Migration 20230420 patches

[PULL 00/20] Migration 20230420 patches
Posted by Juan Quintela 1 year ago
The following changes since commit 2d82c32b2ceaca3dc3da5e36e10976f34bfcb598:

  Open 8.1 development tree (2023-04-20 10:05:25 +0100)

are available in the Git repository at:

  https://gitlab.com/juan.quintela/qemu.git tags/migration-20230420-pull-request

for you to fetch changes up to cdf07846e6fe07a2e20c93eed5902114dc1d3dcf:

  migration: Pass migrate_caps_check() the old and new caps (2023-04-20 15:10:58 +0200)

----------------------------------------------------------------
Migration Pull request

This series includes everything reviewed for migration:

- fix for disk stop/start (eric)
- detect filesystem of hostmem (peter)
- rename qatomic_mb_read (paolo)
- whitespace cleanup (李皆俊)
  I hope copy and paste work for the name O:-)
- atomic_counters series (juan)
- two first patches of capabilities (juan)

Please apply,

----------------------------------------------------------------

Eric Blake (1):
  migration: Handle block device inactivation failures better

Juan Quintela (14):
  migration: Merge ram_counters and ram_atomic_counters
  migration: Update atomic stats out of the mutex
  migration: Make multifd_bytes atomic
  migration: Make dirty_sync_missed_zero_copy atomic
  migration: Make precopy_bytes atomic
  migration: Make downtime_bytes atomic
  migration: Make dirty_sync_count atomic
  migration: Make postcopy_requests atomic
  migration: Make dirty_pages_rate atomic
  migration: Make dirty_bytes_last_sync atomic
  migration: Rename duplicate to zero_pages
  migration: Rename normal to normal_pages
  migration: rename enabled_capabilities to capabilities
  migration: Pass migrate_caps_check() the old and new caps

Paolo Bonzini (1):
  postcopy-ram: do not use qatomic_mb_read

Peter Xu (3):
  util/mmap-alloc: qemu_fd_getfs()
  vl.c: Create late backends before migration object
  migration/postcopy: Detect file system on dest host

李皆俊 (1):
  migration: remove extra whitespace character for code style

 include/qemu/mmap-alloc.h |   7 ++
 migration/migration.c     | 167 +++++++++++++++++---------------------
 migration/migration.h     |   2 +-
 migration/multifd.c       |  10 +--
 migration/postcopy-ram.c  |  36 ++++++--
 migration/ram.c           |  73 ++++++++---------
 migration/ram.h           |  34 ++++----
 migration/rdma.c          |   4 +-
 migration/savevm.c        |   6 +-
 softmmu/vl.c              |   9 +-
 util/mmap-alloc.c         |  28 +++++++
 11 files changed, 209 insertions(+), 167 deletions(-)

-- 
2.39.2


Re: [PULL 00/20] Migration 20230420 patches
Posted by Richard Henderson 1 year ago
On 4/20/23 14:17, Juan Quintela wrote:
> The following changes since commit 2d82c32b2ceaca3dc3da5e36e10976f34bfcb598:
> 
>    Open 8.1 development tree (2023-04-20 10:05:25 +0100)
> 
> are available in the Git repository at:
> 
>    https://gitlab.com/juan.quintela/qemu.git  tags/migration-20230420-pull-request
> 
> for you to fetch changes up to cdf07846e6fe07a2e20c93eed5902114dc1d3dcf:
> 
>    migration: Pass migrate_caps_check() the old and new caps (2023-04-20 15:10:58 +0200)
> 
> ----------------------------------------------------------------
> Migration Pull request
> 
> This series includes everything reviewed for migration:
> 
> - fix for disk stop/start (eric)
> - detect filesystem of hostmem (peter)
> - rename qatomic_mb_read (paolo)
> - whitespace cleanup (李皆俊)
>    I hope copy and paste work for the name O:-)
> - atomic_counters series (juan)
> - two first patches of capabilities (juan)
> 
> Please apply,

Fails CI:
https://gitlab.com/qemu-project/qemu/-/jobs/4159279870#L2896

/usr/lib/gcc-cross/mipsel-linux-gnu/10/../../../../mipsel-linux-gnu/bin/ld: 
libcommon.fa.p/migration_migration.c.o: undefined reference to symbol 
'__atomic_load_8@@LIBATOMIC_1.0'

You're using an atomic 8-byte operation on a host that doesn't support it.  Did you use 
qatomic_read__nocheck instead of qatomic_read to try and get around a build failure on 
i686?  The check is there for a reason...
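
A minimal illustration of the failure mode (sketch only; "counter" here
is a hypothetical variable, and ATOMIC_REG_SIZE is 4 on such 32-bit
hosts):

#include "qemu/osdep.h"
#include "qemu/atomic.h"

static uint64_t counter;

uint64_t read_counter(void)
{
    /* qatomic_read() would trip qemu_build_assert() at compile time,
     * since sizeof(counter) > ATOMIC_REG_SIZE on a 32-bit host.  The
     * __nocheck variant compiles, but the compiler lowers it to a
     * libatomic call such as __atomic_load_8, which then fails to
     * link on hosts without native 8-byte atomics. */
    return qatomic_read__nocheck(&counter);
}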


r~

Re: [PULL 00/20] Migration 20230420 patches
Posted by Juan Quintela 1 year ago
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 4/20/23 14:17, Juan Quintela wrote:
>> The following changes since commit 2d82c32b2ceaca3dc3da5e36e10976f34bfcb598:
>>    Open 8.1 development tree (2023-04-20 10:05:25 +0100)
>> are available in the Git repository at:
>>    https://gitlab.com/juan.quintela/qemu.git
>> tags/migration-20230420-pull-request
>> for you to fetch changes up to
>> cdf07846e6fe07a2e20c93eed5902114dc1d3dcf:
>>    migration: Pass migrate_caps_check() the old and new caps
>> (2023-04-20 15:10:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request
>> This series includes everything reviewed for migration:
>> - fix for disk stop/start (eric)
>> - detect filesystem of hostmem (peter)
>> - rename qatomic_mb_read (paolo)
>> - whitespace cleanup (李皆俊)
>>    I hope copy and paste work for the name O:-)
>> - atomic_counters series (juan)
>> - two first patches of capabilities (juan)
>> Please apply,
>
> Fails CI:
> https://gitlab.com/qemu-project/qemu/-/jobs/4159279870#L2896
>
> /usr/lib/gcc-cross/mipsel-linux-gnu/10/../../../../mipsel-linux-gnu/bin/ld:
> libcommon.fa.p/migration_migration.c.o: undefined reference to symbol
> '__atomic_load_8@@LIBATOMIC_1.0'

Hi Richard

First of all, I have no doubt that you know better than me in this
regard (*).

That said, it looks like one case of "my toolchain is better than
yours":

$ ls qemu-system-mips
qemu-system-mips        qemu-system-mips64el.p/ qemu-system-mipsel.p/
qemu-system-mips64      qemu-system-mips64.p/   qemu-system-mips.p/
qemu-system-mips64el    qemu-system-mipsel

This is Fedora37 with updates.

There are two possibilities here that come to mind, in order of
probability:
- myself with:

-    if (ram_counters.dirty_pages_rate && transferred > 10000) {
+    if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
+        transferred > 10000) {

- paolo:

 PostcopyState  postcopy_state_get(void)
 {
-    return qatomic_mb_read(&incoming_postcopy_state);
+    return qatomic_load_acquire(&incoming_postcopy_state);
 }

> You're using an atomic 8-byte operation on a host that doesn't support
> it.  Did you use qatomic_read__nocheck instead of qatomic_read to try
> and get around a build failure on i686?  The check is there for a
> reason...

No, I am changing all ram_counters values to atomic.  Almost all of them
move from [u]int64_t to Stat64.  Notice that I don't care about 63 vs
64 bits, and anyway I think it was an error that they were int64_t in
the first place (blame the old days of QAPI, when it didn't have
unsigned types).

But there is no stat64_set() function.  The most similar thing that
appears here is stat64_init(), but it cheats by not being atomic
at all.

Almost all ram_counters values are fine with stat64_add() and
stat64_get() operations.  But some of them we need to reset to zero (or
some other value, but that would not be complicated).

(*) And here is where the caveat from the 1st paragraph comes in: see
how stat64_get() is implemented for the !CONFIG_ATOMIC64 case; I didn't
even try to write a stat64_set() on my own.
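
For illustration, here is a sketch of what such a stat64_set() might
look like for the !CONFIG_ATOMIC64 case, reusing the split low/high
representation; stat64_wrtrylock()/stat64_wrunlock() stand in for the
internal writer-lock helpers of util/stats64.c, so treat this as a
sketch, not in-tree code:

/* Sketch only: set a Stat64 on hosts without native 64-bit atomics,
 * following the same writer-lock protocol that stat64_get() uses. */
void stat64_set(Stat64 *s, uint64_t value)
{
    while (!stat64_wrtrylock(s)) {
        cpu_relax();
    }
    qatomic_set(&s->high, value >> 32);
    qatomic_set(&s->low, value);
    stat64_wrunlock(s);
}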

This is one example of the use that I had:

     if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
         transferred > 10000) {
-        s->expected_downtime = ram_counters.remaining / bandwidth;
+        s->expected_downtime =
+            qatomic_read__nocheck(&ram_counters.dirty_bytes_last_sync) /
+                                  bandwidth;
     }
 
     qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 7400abf5e1..7bbaf8cd86 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1224,7 +1224,8 @@ static void migration_bitmap_sync(RAMState *rs)
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             ramblock_sync_dirty_bitmap(rs, block);
         }
-        ram_counters.remaining = ram_bytes_remaining();
+        qatomic_set__nocheck(&ram_counters.dirty_bytes_last_sync,
+                             ram_bytes_remaining());
:


And why did I use qatomic_*__nocheck() instead of the proper operations?
Because, reading this:

#define qatomic_read__nocheck(ptr) \
    __atomic_load_n(ptr, __ATOMIC_RELAXED)

#define qatomic_read(ptr)                              \
    ({                                                 \
    qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
    qatomic_read__nocheck(ptr);                        \
    })

#define qatomic_set__nocheck(ptr, i) \
    __atomic_store_n(ptr, i, __ATOMIC_RELAXED)

#define qatomic_set(ptr, i)  do {                      \
    qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
    qatomic_set__nocheck(ptr, i);                      \
} while(0)

I was completely sure that we would never hit the qemu_build_assert().

I know, I know.

And now that I have explained myself, what is the correct way of doing
this?

I declared the value as:

+    aligned_uint64_t dirty_bytes_last_sync;
-    int64_t remaining;

I just want to make sure that *all* ram_counters are atomic and then I
can use them from any thread.  All the counters that use Stat64 already
are.  But for these two to work, I would need to have a way to set
a value.

And once we are here, I would like to have:

stat64_inc(): just adds 1.  I know, I can create a macro.

and

stat64_reset(): as its name says, it resets the value to zero.
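
The first one is trivial to sketch as a wrapper over the existing API
(hypothetical, not in-tree):

/* Hypothetical convenience wrapper on top of the existing stat64_add(). */
static inline void stat64_inc(Stat64 *s)
{
    stat64_add(s, 1);
}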

I still miss a couple of stats in migration, where I need to reset them
to zero from time to time:

./ram.c:380:    uint64_t bytes_xfer_prev;
./ram.c:747:    rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
./ram.c:1183:        stat64_get(&ram_counters.transferred) - rs->bytes_xfer_prev;
./ram.c:1247:        rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);

You can claim that this operation always happens on the migration
thread, but I have found that it is more difficult to document which
ones are atomic and which are not than to make all of them atomic.
These variables are get/set once a second, so performance is not one of
the issues.

And:

./ram.c:382:    uint64_t num_dirty_pages_period;
./ram.c:746:    rs->num_dirty_pages_period = 0;
./ram.c:1095:    rs->num_dirty_pages_period += new_dirty_pages;
./ram.c:1133:                         rs->num_dirty_pages_period * 1000 /
./ram.c:1184:    uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
./ram.c:1232:    trace_migration_bitmap_sync_end(rs->num_dirty_pages_period);
./ram.c:1246:        rs->num_dirty_pages_period = 0;

The problem here is that we reset the value every second, but for
everything else it is a Stat64.

Thanks, Juan.
Re: [PULL 00/20] Migration 20230420 patches
Posted by Richard Henderson 1 year ago
On 4/22/23 10:21, Juan Quintela wrote:
> Richard Henderson <richard.henderson@linaro.org> wrote:
>> On 4/20/23 14:17, Juan Quintela wrote:
>>> The following changes since commit 2d82c32b2ceaca3dc3da5e36e10976f34bfcb598:
>>>     Open 8.1 development tree (2023-04-20 10:05:25 +0100)
>>> are available in the Git repository at:
>>>     https://gitlab.com/juan.quintela/qemu.git
>>> tags/migration-20230420-pull-request
>>> for you to fetch changes up to
>>> cdf07846e6fe07a2e20c93eed5902114dc1d3dcf:
>>>     migration: Pass migrate_caps_check() the old and new caps
>>> (2023-04-20 15:10:58 +0200)
>>> ----------------------------------------------------------------
>>> Migration Pull request
>>> This series includes everything reviewed for migration:
>>> - fix for disk stop/start (eric)
>>> - detect filesystem of hostmem (peter)
>>> - rename qatomic_mb_read (paolo)
>>> - whitespace cleanup (李皆俊)
>>>     I hope copy and paste work for the name O:-)
>>> - atomic_counters series (juan)
>>> - two first patches of capabilities (juan)
>>> Please apply,
>>
>> Fails CI:
>> https://gitlab.com/qemu-project/qemu/-/jobs/4159279870#L2896
>>
>> /usr/lib/gcc-cross/mipsel-linux-gnu/10/../../../../mipsel-linux-gnu/bin/ld:
>> libcommon.fa.p/migration_migration.c.o: undefined reference to symbol
>> '__atomic_load_8@@LIBATOMIC_1.0'
> 
> Hi Richard
> 
>> First of all, I have no doubt that you know better than me in this
>> regard (*).
> 
>> That said, it looks like one case of "my toolchain is better
>> than yours":
> 
> $ ls qemu-system-mips
> qemu-system-mips        qemu-system-mips64el.p/ qemu-system-mipsel.p/
> qemu-system-mips64      qemu-system-mips64.p/   qemu-system-mips.p/
> qemu-system-mips64el    qemu-system-mipsel
> 
> This is Fedora37 with updates.

I'm sure it's not true that "my toolchain is better", because mips32 simply does not have 
the ability.  (And of course mips64 does, but that's a different test.)

I'll note that mips32 and armv6 (that is, *not* debian's armv7 based armhf distro) are the 
only hosts we have that don't have an atomic 8-byte operation.


> There are two possibilities here that come to mind, in order of
> probability:
> - myself with:
> 
> -    if (ram_counters.dirty_pages_rate && transferred > 10000) {
> +    if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
> +        transferred > 10000) {

I think it's this one...

> - paolo:
> 
>   PostcopyState  postcopy_state_get(void)
>   {
> -    return qatomic_mb_read(&incoming_postcopy_state);
> +    return qatomic_load_acquire(&incoming_postcopy_state);

... because this one was already atomic, with different barriers.

> And why did I use qatomic_*__nocheck() instead of the proper operations?
> Because, reading this:
> 
> #define qatomic_read__nocheck(ptr) \
>      __atomic_load_n(ptr, __ATOMIC_RELAXED)
> 
> #define qatomic_read(ptr)                              \
>      ({                                                 \
>      qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
>      qatomic_read__nocheck(ptr);                        \
>      })
> 
> #define qatomic_set__nocheck(ptr, i) \
>      __atomic_store_n(ptr, i, __ATOMIC_RELAXED)
> 
> #define qatomic_set(ptr, i)  do {                      \
>      qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
>      qatomic_set__nocheck(ptr, i);                      \
> } while(0)
> 
> I was completely sure that we would never hit the qemu_build_assert().
> 
> I know, I know.

:-)

> And now that I have explained myself, what is the correct way of doing
> this?
> 
> I declared the value as:
> 
> +    aligned_uint64_t dirty_bytes_last_sync;
> -    int64_t remaining;
> 
> I just want to make sure that *all* ram_counters are atomic and then I
> can use them from any thread.  All the counters that use Stat64 already
> are.  But for these two to work, I would need to have a way to set
> a value.
> 
> And once we are here, I would like to have:
> 
> stat64_inc(): just adds 1.  I know, I can create a macro.
> 
> and
> 
> stat64_reset(): as its name says, it resets the value to zero.
> 
> I still miss a couple of stats in migration, where I need to reset them
> to zero from time to time:

How critical are the statistics?  Are they integral to the algorithm or are they merely 
for diagnostics and user display?  What happens if they're not atomic and we do race?

If we really need atomicity, then the only answer is a mutex or spinlock.

> ./ram.c:380:    uint64_t bytes_xfer_prev;
> ./ram.c:747:    rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
> ./ram.c:1183:        stat64_get(&ram_counters.transferred) - rs->bytes_xfer_prev;
> ./ram.c:1247:        rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
> 
> You can claim that this operation always happens on the migration
> thread, but I have found that it is more difficult to document which
> ones are atomic and which are not than to make all of them atomic.
> These variables are get/set once a second, so performance is not one
> of the issues.

For access once per second, it sounds like a spinlock would be fine.
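
For example, a minimal sketch using QEMU's QemuSpin primitives (the
names are illustrative, not code from the series):

#include "qemu/osdep.h"
#include "qemu/thread.h"

/* A 64-bit counter guarded by a spinlock; works on hosts without
 * native 8-byte atomics. */
typedef struct {
    QemuSpin lock;
    uint64_t value;
} SpinCounter;

static void spin_counter_init(SpinCounter *c)
{
    qemu_spin_init(&c->lock);
    c->value = 0;
}

static void spin_counter_set(SpinCounter *c, uint64_t value)
{
    qemu_spin_lock(&c->lock);
    c->value = value;
    qemu_spin_unlock(&c->lock);
}

static uint64_t spin_counter_get(SpinCounter *c)
{
    uint64_t value;

    qemu_spin_lock(&c->lock);
    value = c->value;
    qemu_spin_unlock(&c->lock);
    return value;
}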


r~

Re: [PULL 00/20] Migration 20230420 patches
Posted by Juan Quintela 1 year ago
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 4/22/23 10:21, Juan Quintela wrote:
>> Richard Henderson <richard.henderson@linaro.org> wrote:
>>> On 4/20/23 14:17, Juan Quintela wrote:

>> Hi Richard
>> First of all, I have no doubt that you know better than me in this
>> regard (*).
>> That said, it looks like one case of "my toolchain is better
>> than
>> yours":

Quotes were here for a reason O:-)

>> $ ls qemu-system-mips
>> qemu-system-mips        qemu-system-mips64el.p/ qemu-system-mipsel.p/
>> qemu-system-mips64      qemu-system-mips64.p/   qemu-system-mips.p/
>> qemu-system-mips64el    qemu-system-mipsel
>> This is Fedora37 with updates.
>
> I'm sure it's not true that "my toolchain is better", because mips32
> simply does not have the ability.  (And of course mips64 does, but
> that's a different test.)

It was a kind of apology, to say that I had really compiled mipsel.  I
compile everything that has a cross-compiler in Fedora:

TARGET_DIRS=aarch64-softmmu alpha-softmmu arm-softmmu avr-softmmu
cris-softmmu hppa-softmmu i386-softmmu loongarch64-softmmu m68k-softmmu
microblazeel-softmmu microblaze-softmmu mips64el-softmmu mips64-softmmu
mipsel-softmmu mips-softmmu nios2-softmmu or1k-softmmu ppc64-softmmu
ppc-softmmu riscv32-softmmu riscv64-softmmu rx-softmmu s390x-softmmu
sh4eb-softmmu sh4-softmmu sparc64-softmmu sparc-softmmu tricore-softmmu
x86_64-softmmu xtensaeb-softmmu xtensa-softmmu aarch64_be-linux-user
aarch64-linux-user alpha-linux-user armeb-linux-user arm-linux-user
cris-linux-user hexagon-linux-user hppa-linux-user i386-linux-user
loongarch64-linux-user m68k-linux-user microblazeel-linux-user
microblaze-linux-user mips64el-linux-user mips64-linux-user
mipsel-linux-user mips-linux-user mipsn32el-linux-user
mipsn32-linux-user nios2-linux-user or1k-linux-user ppc64le-linux-user
ppc64-linux-user ppc-linux-user riscv32-linux-user riscv64-linux-user
s390x-linux-user sh4eb-linux-user sh4-linux-user sparc32plus-linux-user
sparc64-linux-user sparc-linux-user x86_64-linux-user
xtensaeb-linux-user xtensa-linux-user

And I still get these "build" failures.

> I'll note that mips32 and armv6 (that is, *not* debian's armv7 based
> armhf distro) are the only hosts we have that don't have an atomic
> 8-byte operation.

This is the kind of trouble where I don't know what to do.  I am pretty
sure that nobody is going to migrate a host that has so much RAM that it
needs a 64bit counter on those two architectures (or any 32-bit
architecture, for what it's worth).

A couple of minutes after sending the 1st email, I considered sending
another one saying "my toolchain lies better than yours".

I moved to the atomic operations that do the build check and ran make again:

$ rm -f qemu-system-mips*
$ time make

[....]

[2/5] Linking target qemu-system-mipsel
[3/5] Linking target qemu-system-mips
[4/5] Linking target qemu-system-mips64el
[5/5] Linking target qemu-system-mips64

So clearly my toolchain is lying O:-)

>> There are two possibilities here that come to mind, in order of
>> probability:
>> - myself with:
>> -    if (ram_counters.dirty_pages_rate && transferred > 10000) {
>> +    if (qatomic_read__nocheck(&ram_counters.dirty_pages_rate) &&
>> +        transferred > 10000) {
>
> I think it's this one...

O:-)

>> And why did I use qatomic_*__nocheck() instead of the proper operations?
>> Because, reading this:
>> #define qatomic_read__nocheck(ptr) \
>>      __atomic_load_n(ptr, __ATOMIC_RELAXED)
>> #define qatomic_read(ptr)                              \
>>      ({                                                 \
>>      qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
>>      qatomic_read__nocheck(ptr);                        \
>>      })
>> #define qatomic_set__nocheck(ptr, i) \
>>      __atomic_store_n(ptr, i, __ATOMIC_RELAXED)
>> #define qatomic_set(ptr, i)  do {                      \
>>      qemu_build_assert(sizeof(*ptr) <= ATOMIC_REG_SIZE); \
>>      qatomic_set__nocheck(ptr, i);                      \
>> } while(0)
>> I was completely sure that we would never hit the qemu_build_assert().
>> I know, I know.
>
> :-)
>
>> And now that I have explained myself, what is the correct way of doing
>> this?
>> I declared the value as:
>> +    aligned_uint64_t dirty_bytes_last_sync;
>> -    int64_t remaining;
>> I just want to make sure that *all* ram_counters are atomic and then I
>> can use them from any thread.  All the counters that use Stat64 already
>> are.  But for these two to work, I would need to have a way to set
>> a value.
>> And once we are here, I would like to have:
>> stat64_inc(): just adds 1.  I know, I can create a macro.
>> and
>> stat64_reset(): as its name says, it resets the value to zero.
>> I still miss a couple of stats in migration, where I need to reset
>> them
>> to zero from time to time:
>
> How critical are the statistics?  Are they integral to the algorithm
> or are they merely for diagnostics and user display?  What happens if
> they're not atomic and we do race?
>
> If we really need atomicity, then the only answer is a mutex or spinlock.

I think we can extend the Stat64 operations with at least a
stat64_reset() operation.  What I don't want is for half of the counters
to be updated with one spinlock and the other half with atomic
operations; that makes things difficult to explain.

If I have the stat64_reset() operation, then stat64_set() becomes
stat64_reset() + stat64_add().  I put a wrapper and call it a day.  As
said, this one is not so speed critical.
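
Something like this hypothetical wrapper, assuming the stat64_reset()
primitive existed (a reader racing with it may briefly observe zero,
which is acceptable for once-a-second statistics):

/* Hypothetical: stat64_set() built from a (not yet existing)
 * stat64_reset() plus the existing stat64_add(). */
static inline void stat64_set(Stat64 *s, uint64_t value)
{
    stat64_reset(s);
    stat64_add(s, value);
}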

Yes, other counters are speed critical, updated once for each
transmitted page.  But others are only updated every time that we try to
finish migration or on each iteration.  The two giving trouble are of
the latter kind.

>> ./ram.c:380:    uint64_t bytes_xfer_prev;
>> ./ram.c:747:    rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
>> ./ram.c:1183:        stat64_get(&ram_counters.transferred) - rs->bytes_xfer_prev;
>> ./ram.c:1247:        rs->bytes_xfer_prev = stat64_get(&ram_counters.transferred);
>> You can claim that this operation always happens on the migration
>> thread, but I have found that it is more difficult to document which
>> ones are atomic and which are not than to make all of them atomic.
>> These variables are get/set once a second, so performance is not one
>> of the issues.
>
> For access once per second, it sounds like a spinlock would be fine.

Orthogonality is more important to me than speed here.  I will wait to
see if someone (hint, hint) comes up with an implementation of
stat64_clear() that also works for non-ATOMIC64 machines O:-)

Later, Juan.
Re: [PULL 00/20] Migration 20230420 patches
Posted by Juan Quintela 1 year ago
Juan Quintela <quintela@redhat.com> wrote:
> Richard Henderson <richard.henderson@linaro.org> wrote:
>> On 4/22/23 10:21, Juan Quintela wrote:
>>> Richard Henderson <richard.henderson@linaro.org> wrote:
>>>> On 4/20/23 14:17, Juan Quintela wrote:

>> I'll note that mips32 and armv6 (that is, *not* debian's armv7 based
>> armhf distro) are the only hosts we have that don't have an atomic
>> 8-byte operation.
>
> This is the kind of trouble where I don't know what to do.  I am pretty
> sure that nobody is going to migrate a host that has so much RAM that it
> needs a 64bit counter on those two architectures (or any 32-bit
> architecture, for what it's worth).
>
> A couple of minutes after sending the 1st email, I considered sending
> another one saying "my toolchain lies better than yours".
>
> I moved to the atomic operations that do the build check and ran make again:
>
> $ rm -f qemu-system-mips*
> $ time make
>
> [....]
>
> [2/5] Linking target qemu-system-mipsel
> [3/5] Linking target qemu-system-mips
> [4/5] Linking target qemu-system-mips64el
> [5/5] Linking target qemu-system-mips64
>
> So clearly my toolchain is lying O:-)

And here I am.
Wearing a brown paper bag on my head for a week.

These are emulators for mips.  Not cross-compiled to run on MIPS.

/me hides in the hills in shame.

Later, Juan.
Re: [PULL 00/20] Migration 20230420 patches
Posted by Richard Henderson 1 year ago
On 4/23/23 10:45, Juan Quintela wrote:
> This is the kind of trouble where I don't know what to do.  I am pretty
> sure that nobody is going to migrate a host that has so much RAM that it
> needs a 64bit counter on those two architectures (or any 32-bit
> architecture, for what it's worth).

Does it really need to be a 64-bit counter?  Should it be size_t instead?
Given that a 32-bit host can't represent more than, say, 2**20 pages
(4 GiB of address space at 4 KiB per page), we shouldn't need to count
them either.


r~