The following changes since commit 2b81c046252fbfb375ad30632362fc16e6e22bd5:

  Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging (2024-09-17 14:02:18 +0100)

are available in the Git repository at:

  https://gitlab.com/peterx/qemu.git tags/migration-20240917-pull-request

for you to fetch changes up to 4ce56229087860805877075ddb29dd44578365a9:

  migration/multifd: Fix rb->receivedmap cleanup race (2024-09-18 14:27:39 -0400)

----------------------------------------------------------------
Migration pull request for 9.2

- Fabiano's patch to move two tests to slow tests.
- Peter's patch to fix qatzip builds
- Stefan's multifd-zstd fix on unsigned diff comparisons
- Fea's bug fix to consistently use memattrs when map() address space
- Fabiano's bug fix on multifd race condition against receivedmap

----------------------------------------------------------------
Fabiano Rosas (3):
      tests/qtest/migration: Move a couple of slow tests under g_test_slow
      migration/savevm: Remove extra load cleanup calls
      migration/multifd: Fix rb->receivedmap cleanup race

Fea.Wang (1):
      softmmu/physmem.c: Keep transaction attribute in address_space_map()

Peter Xu (1):
      migration/multifd: Fix build for qatzip

Stefan Weil (1):
      migration/multifd: Fix loop conditions in multifd_zstd_send_prepare and multifd_zstd_recv

 migration/migration.c        |  5 +++++
 migration/multifd-qatzip.c   | 18 +++++++++---------
 migration/multifd-zstd.c     |  8 ++++----
 migration/savevm.c           |  8 ++++----
 system/physmem.c             |  2 +-
 tests/qtest/migration-test.c |  8 +++++---
 6 files changed, 28 insertions(+), 21 deletions(-)

-- 
2.45.0
On Wed, 18 Sept 2024 at 19:31, Peter Xu <peterx@redhat.com> wrote:
>
> The following changes since commit 2b81c046252fbfb375ad30632362fc16e6e22bd5:
>
>   Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging (2024-09-17 14:02:18 +0100)
>
> are available in the Git repository at:
>
>   https://gitlab.com/peterx/qemu.git tags/migration-20240917-pull-request
>
> for you to fetch changes up to 4ce56229087860805877075ddb29dd44578365a9:
>
>   migration/multifd: Fix rb->receivedmap cleanup race (2024-09-18 14:27:39 -0400)
>
> ----------------------------------------------------------------
> Migration pull request for 9.2
>
> - Fabiano's patch to move two tests to slow tests.
> - Peter's patch to fix qatzip builds
> - Stefan's multifd-zstd fix on unsigned diff comparisons
> - Fea's bug fix to consistently use memattrs when map() address space
> - Fabiano's bug fix on multifd race condition against receivedmap
>
> ----------------------------------------------------------------

Applied, thanks.

Thanks for looking at the issues with the migration tests.
This run went through first time without my needing to retry any
jobs, so fingers crossed that we have at least improved the reliability.
(I have a feeling there's still something funny with the k8s runners,
but that's not migration-test specific, it's just that test tends
to be the longest running and so most likely to be affected.)

Please update the changelog at https://wiki.qemu.org/ChangeLog/9.2
for any user-visible changes.

-- PMM
On Thu, Sep 19, 2024 at 10:08:25AM +0100, Peter Maydell wrote:
> Thanks for looking at the issues with the migration tests.
> This run went through first time without my needing to retry any
> jobs, so fingers crossed that we have at least improved the reliability.
> (I have a feeling there's still something funny with the k8s runners,
> but that's not migration-test specific, it's just that test tends
> to be the longest running and so most likely to be affected.)

Kudos all go to Fabiano for debugging the hard problem.

And yes, please let either of us know if it fails again, we can either keep
looking, or still can disable it when necessary (if it takes long to debug).

Thanks,

-- 
Peter Xu
On Thu, 19 Sept 2024 at 12:59, Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Sep 19, 2024 at 10:08:25AM +0100, Peter Maydell wrote:
> > Thanks for looking at the issues with the migration tests.
> > This run went through first time without my needing to retry any
> > jobs, so fingers crossed that we have at least improved the reliability.
> > (I have a feeling there's still something funny with the k8s runners,
> > but that's not migration-test specific, it's just that test tends
> > to be the longest running and so most likely to be affected.)
>
> Kudos all go to Fabiano for debugging the hard problem.
>
> And yes, please let either of us know if it fails again, we can either keep
> looking, or still can disable it when necessary (if it takes long to debug).

On the subject of potential races in the migration code,
there's a couple of outstanding Coverity issues that might
be worth looking at. If they're false-positives let me know
and I can reclassify them in Coverity.

CID 1527402: In migrate_fd_cleanup() Coverity thinks there's
a race because we read s->to_dst_file in the "if (s->to_dst_file)"
check without holding the qemu_file_lock. This might be a
false-positive because the race Coverity identifies happens
if two threads both call migrate_fd_cleanup() at the same
time, which is probably not permitted. (But OTOH taking a
mutex gets you for free any necessary memory barriers...)

CID 1527413: In postcopy_pause_incoming() we read
mis->postcopy_qemufile_dst without holding the
postcopy_prio_thread_mutex which we use to protect the write
to that field, so Coverity thinks there's a race if two
threads call this function at once.

(The only other migration Coverity issue is CID 1560071,
which is the "better to use pstrcpy()" not-really-a-bug
we discussed in another thread.)

thanks
-- PMM
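For readers less familiar with the pattern CID 1527402 is complaining about, here is a minimal, self-contained sketch of the "read a shared pointer only under the lock that guards writes to it" idiom. It uses plain pthreads and stand-in names (FakeMigrationState, fake_fd_cleanup), not QEMU's actual MigrationState/migrate_fd_cleanup() code, so treat it purely as an illustration of the fix being discussed:

/*
 * Illustrative only: stand-in names, not QEMU's actual code.  The point is
 * the pattern: check and detach the shared pointer under the same mutex
 * that protects writers, then release the resource outside the lock.
 */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t file_lock;   /* plays the role of s->qemu_file_lock */
    FILE *to_dst_file;           /* plays the role of s->to_dst_file    */
} FakeMigrationState;

static void fake_fd_cleanup(FakeMigrationState *s)
{
    FILE *tmp = NULL;

    pthread_mutex_lock(&s->file_lock);
    if (s->to_dst_file) {        /* the check Coverity wants under the lock */
        tmp = s->to_dst_file;
        s->to_dst_file = NULL;   /* no other thread sees a stale pointer */
    }
    pthread_mutex_unlock(&s->file_lock);

    if (tmp) {
        fclose(tmp);             /* release outside the critical section */
    }
}

int main(void)
{
    FakeMigrationState s = { .to_dst_file = tmpfile() };

    pthread_mutex_init(&s.file_lock, NULL);
    fake_fd_cleanup(&s);
    fake_fd_cleanup(&s);         /* a second call is a harmless no-op */
    pthread_mutex_destroy(&s.file_lock);
    return 0;
}

Detaching the pointer inside the critical section and releasing the resource outside it keeps the lock hold time short, and it also makes a repeated or concurrent cleanup call a harmless no-op, which is exactly the property the Coverity report is asking us to prove.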
Peter Maydell <peter.maydell@linaro.org> writes:

> On Thu, 19 Sept 2024 at 12:59, Peter Xu <peterx@redhat.com> wrote:
>>
>> On Thu, Sep 19, 2024 at 10:08:25AM +0100, Peter Maydell wrote:
>> > Thanks for looking at the issues with the migration tests.
>> > This run went through first time without my needing to retry any
>> > jobs, so fingers crossed that we have at least improved the reliability.
>> > (I have a feeling there's still something funny with the k8s runners,
>> > but that's not migration-test specific, it's just that test tends
>> > to be the longest running and so most likely to be affected.)
>>
>> Kudos all go to Fabiano for debugging the hard problem.
>>
>> And yes, please let either of us know if it fails again, we can either keep
>> looking, or still can disable it when necessary (if it takes long to debug).
>
> On the subject of potential races in the migration code,
> there's a couple of outstanding Coverity issues that might
> be worth looking at. If they're false-positives let me know
> and I can reclassify them in Coverity.
>
> CID 1527402: In migrate_fd_cleanup() Coverity thinks there's
> a race because we read s->to_dst_file in the "if (s->to_dst_file)"
> check without holding the qemu_file_lock. This might be a
> false-positive because the race Coverity identifies happens
> if two threads both call migrate_fd_cleanup() at the same
> time, which is probably not permitted. (But OTOH taking a
> mutex gets you for free any necessary memory barriers...)

Yes, we shouldn't rely on mental gymnastics to prove that there's no
concurrent access.

@peterx, that RH bug you showed me could very well be caused by this
race, except that I don't see how fd_cleanup could race with
itself. Just having the lock would probably save us time even thinking
about it.

>
> CID 1527413: In postcopy_pause_incoming() we read
> mis->postcopy_qemufile_dst without holding the
> postcopy_prio_thread_mutex which we use to protect the write
> to that field, so Coverity thinks there's a race if two
> threads call this function at once.

At first sight, it seems like a real problem. We did a good pass on
these races on the source side, but the destination side hasn't been
investigated yet.

Unfortunately, these QEMUFile races are not trivial to fix due to
several design pain points, such as:

- the QEMUFile pointer validity being sometimes used to imply no error
  has happened before;

- the various shutdown() calls that serve both as a way to kick a read()
  that's stuck, but also to cause some other part of the code to realise
  there has been an error (due to the point above);

- the yank feature which has weird semantics regarding whether it
  operates on an iochannel or qemufile;

- migrate_fd_cancel() that _can_ run concurrently with anything else;

- the need to ensure the other end of migration also reacts to
  error/cancel on this side;

>
> (The only other migration Coverity issue is CID 1560071,
> which is the "better to use pstrcpy()" not-really-a-bug
> we discussed in another thread.)
>
> thanks
> -- PMM
On Thu, Sep 19, 2024 at 11:05:28AM -0300, Fabiano Rosas wrote:
> Peter Maydell <peter.maydell@linaro.org> writes:
>
> > On Thu, 19 Sept 2024 at 12:59, Peter Xu <peterx@redhat.com> wrote:
> >>
> >> On Thu, Sep 19, 2024 at 10:08:25AM +0100, Peter Maydell wrote:
> >> > Thanks for looking at the issues with the migration tests.
> >> > This run went through first time without my needing to retry any
> >> > jobs, so fingers crossed that we have at least improved the reliability.
> >> > (I have a feeling there's still something funny with the k8s runners,
> >> > but that's not migration-test specific, it's just that test tends
> >> > to be the longest running and so most likely to be affected.)
> >>
> >> Kudos all go to Fabiano for debugging the hard problem.
> >>
> >> And yes, please let either of us know if it fails again, we can either keep
> >> looking, or still can disable it when necessary (if it takes long to debug).
> >
> > On the subject of potential races in the migration code,
> > there's a couple of outstanding Coverity issues that might
> > be worth looking at. If they're false-positives let me know
> > and I can reclassify them in Coverity.
> >
> > CID 1527402: In migrate_fd_cleanup() Coverity thinks there's
> > a race because we read s->to_dst_file in the "if (s->to_dst_file)"
> > check without holding the qemu_file_lock. This might be a
> > false-positive because the race Coverity identifies happens
> > if two threads both call migrate_fd_cleanup() at the same
> > time, which is probably not permitted. (But OTOH taking a
> > mutex gets you for free any necessary memory barriers...)
>
> Yes, we shouldn't rely on mental gymnastics to prove that there's no
> concurrent access.
>
> @peterx, that RH bug you showed me could very well be caused by this
> race, except that I don't see how fd_cleanup could race with
> itself. Just having the lock would probably save us time even thinking
> about it.

I can send a patch for this one.

> >
> > CID 1527413: In postcopy_pause_incoming() we read
> > mis->postcopy_qemufile_dst without holding the
> > postcopy_prio_thread_mutex which we use to protect the write
> > to that field, so Coverity thinks there's a race if two
> > threads call this function at once.
>
> At first sight, it seems like a real problem. We did a good pass on
> these races on the source side, but the destination side hasn't been
> investigated yet.

I think it's probably safe and no real race could happen.

The rationale is that during postcopy (aka, when the ram load thread is
running), postcopy_qemufile_dst can only be modified by one thread, which
is the ram listen thread itself. The other consumer of it is the fast load
thread, but that thread should only read postcopy_qemufile_dst, not update
it.

IOW, here the "if (postcopy_qemufile_dst)" check is not about "whether
anyone else could have released it"; instead it's about "whether preempt
mode is enabled, and whether we have the fast channel at all". When it's
set, it means preempt is enabled, and it will never become NULL in that
case.

Here I _think_ we could replace it with a migrate_postcopy_preempt() check
safely, because AFAICT the channel must have been created when preempt is
enabled. However the tricky part is the next line, which does the
shutdown():

    qemu_file_shutdown(mis->postcopy_qemufile_dst);

That's unfortunately not replaceable, and so far it's the only way (and a
pretty traditional way... as you pointed out below..) that we can kick a
qemufile read() out.

In this specific case, the fast thread normally holds the mutex while it
read()s on this qemufile via one qemu_get_be64() in ram_load_postcopy(),
and then we kick it out with the shutdown(). IOW both facts hold for the
fast load thread: (1) the mutex is held, and (2) it is blocked at
qemu_get_be64(). So there's no way we can take the mutex before the
shutdown(), or we deadlock.. It would be easier if we had another way to
kick the qemu_get_be64(), e.g. if it could be a generic poll() then we
could have it poll on both the qemufile and another eventfd. But we're
stuck with the qemufile API here..

So in short: I don't have a good and simple solution for this one.. but I
am 99.9999% sure it's safe.

> Unfortunately, these QEMUFile races are not trivial to fix due to
> several design pain points, such as:
>
> - the QEMUFile pointer validity being sometimes used to imply no error
>   has happened before;
>
> - the various shutdown() calls that serve both as a way to kick a read()
>   that's stuck, but also to cause some other part of the code to realise
>   there has been an error (due to the point above);
>
> - the yank feature which has weird semantics regarding whether it
>   operates on an iochannel or qemufile;
>
> - migrate_fd_cancel() that _can_ run concurrently with anything else;
>
> - the need to ensure the other end of migration also reacts to
>   error/cancel on this side;

Right. Ultimately it's about relying on shutdown() to kick qemufile ops
out, as of now.. and that _may_ not always happen under locks. The other
example is qmp_yank() on a migration channel, which invokes the same
iochannel shutdown() via migration_yank_iochannel().

> > (The only other migration Coverity issue is CID 1560071,
> > which is the "better to use pstrcpy()" not-really-a-bug
> > we discussed in another thread.)
> >
> > thanks
> > -- PMM

-- 
Peter Xu
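To make the deadlock argument above concrete, the following stand-alone sketch (plain pthreads plus a socketpair, with made-up names, not QEMU's actual postcopy code) reproduces the shape of the problem: one thread holds a mutex while blocked in read(), the way the fast load thread blocks inside qemu_get_be64() while holding postcopy_prio_thread_mutex, so the pause path has to shutdown() the channel first to kick the read loose before the mutex can ever be taken:

/*
 * Illustrative only: made-up names, not QEMU's postcopy code.  One thread
 * (the "fast load thread") holds a mutex while blocked in read(); the
 * "pause path" must shutdown() the channel to unblock it before the mutex
 * can be taken, otherwise the two would deadlock.
 */
#include <pthread.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static pthread_mutex_t prio_mutex = PTHREAD_MUTEX_INITIALIZER;
static int chan[2];    /* chan[0]: fast-load end, chan[1]: peer end */

static void *fast_load_thread(void *arg)
{
    char buf[8];

    (void)arg;
    pthread_mutex_lock(&prio_mutex);
    /* Blocks here holding the mutex, like qemu_get_be64() in ram_load_postcopy() */
    ssize_t n = read(chan[0], buf, sizeof(buf));
    printf("fast load thread: read returned %zd, dropping mutex\n", n);
    pthread_mutex_unlock(&prio_mutex);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    socketpair(AF_UNIX, SOCK_STREAM, 0, chan);
    pthread_create(&tid, NULL, fast_load_thread, NULL);
    sleep(1);   /* let the thread block inside read() while holding the mutex */

    /*
     * Pause path: locking prio_mutex here would deadlock, because the
     * reader only releases it once read() returns.  shutdown() is the
     * kick that makes the blocked read() return (with EOF).
     */
    shutdown(chan[0], SHUT_RDWR);
    pthread_join(tid, NULL);

    pthread_mutex_lock(&prio_mutex);   /* only now is this safe */
    pthread_mutex_unlock(&prio_mutex);
    printf("pause path: proceeded after kicking the blocked read\n");
    return 0;
}

Build with cc -pthread. If the pause path tried to take the mutex before calling shutdown(), it would wait on the reader while the reader waits on data, and neither would ever make progress; that is why the shutdown() has to come first and why the unlocked pointer read is hard to avoid in postcopy_pause_incoming().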
On Thu, Sep 19, 2024 at 12:29:43PM -0400, Peter Xu wrote:
> > > CID 1527402: In migrate_fd_cleanup() Coverity thinks there's
> > > a race because we read s->to_dst_file in the "if (s->to_dst_file)"
> > > check without holding the qemu_file_lock. This might be a
> > > false-positive because the race Coverity identifies happens
> > > if two threads both call migrate_fd_cleanup() at the same
> > > time, which is probably not permitted. (But OTOH taking a
> > > mutex gets you for free any necessary memory barriers...)
> >
> > Yes, we shouldn't rely on mental gymnastics to prove that there's no
> > concurrent access.
> >
> > @peterx, that RH bug you showed me could very well be caused by this
> > race, except that I don't see how fd_cleanup could race with
> > itself. Just having the lock would probably save us time even thinking
> > about it.
>
> I can send a patch for this one.

Oh btw, I think that may not be the same issue.. I did observe one memory
order issue that only happens on aarch64 when looking at that bug, and I
_feel_ like there can be more. Bandan (after his long PTO) from our team
will keep looking. Since that reproduces fairly consistently (IIRC..), we
do have a chance to figure it out, sooner or later.

-- 
Peter Xu