From: Hyman Huang <yong.huang@smartx.com>
To: qemu-devel@nongnu.org
Cc: Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster,
    David Hildenbrand, Philippe Mathieu-Daudé, Paolo Bonzini,
    yong.huang@smartx.com
Subject: [PATCH v1 2/7] migration: Refine util functions to support background sync
Date: Mon, 16 Sep 2024 00:08:45 +0800

Supply the migration_bitmap_sync function with the background
argument.
Introduce the sync_mode global variable to track the sync mode and
support background sync while keeping backward compatibility.

Signed-off-by: Hyman Huang
---
 include/exec/ram_addr.h | 107 +++++++++++++++++++++++++++++++++++++---
 migration/ram.c         |  53 ++++++++++++++++----
 2 files changed, 144 insertions(+), 16 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 891c44cf2d..d0d123ac60 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -472,17 +472,68 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start,
     cpu_physical_memory_test_and_clear_dirty(start, length, DIRTY_MEMORY_CODE);
 }
 
+static void ramblock_clear_iter_bmap(RAMBlock *rb,
+                                     ram_addr_t start,
+                                     ram_addr_t length)
+{
+    ram_addr_t addr;
+    unsigned long *bmap = rb->bmap;
+    unsigned long *shadow_bmap = rb->shadow_bmap;
+    unsigned long *iter_bmap = rb->iter_bmap;
+
+    for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
+        long k = (start + addr) >> TARGET_PAGE_BITS;
+        if (test_bit(k, shadow_bmap) && !test_bit(k, bmap)) {
+            /* Page has been sent, clear the iter bmap */
+            clear_bit(k, iter_bmap);
+        }
+    }
+}
+
+static void ramblock_update_iter_bmap(RAMBlock *rb,
+                                      ram_addr_t start,
+                                      ram_addr_t length)
+{
+    ram_addr_t addr;
+    unsigned long *bmap = rb->bmap;
+    unsigned long *iter_bmap = rb->iter_bmap;
+
+    for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
+        long k = (start + addr) >> TARGET_PAGE_BITS;
+        if (test_bit(k, iter_bmap)) {
+            if (!test_bit(k, bmap)) {
+                set_bit(k, bmap);
+                rb->iter_dirty_pages++;
+            }
+        }
+    }
+}
 
 /* Called with RCU critical section */
 static inline
 uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t start,
-                                               ram_addr_t length)
+                                               ram_addr_t length,
+                                               unsigned int flag)
 {
     ram_addr_t addr;
     unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
     uint64_t num_dirty = 0;
     unsigned long *dest = rb->bmap;
+    unsigned long *shadow_bmap = rb->shadow_bmap;
+    unsigned long *iter_bmap = rb->iter_bmap;
+
+    assert(flag && !(flag & (~RAMBLOCK_SYN_MASK)));
+
+    /*
+     * We must remove the sent dirty page from the iter_bmap in order to
+     * minimize redundant page transfers if background sync has appeared
+     * during this iteration.
+     */
+    if (rb->background_sync_shown_up &&
+        (flag & (RAMBLOCK_SYN_MODERN_ITER | RAMBLOCK_SYN_MODERN_BACKGROUND))) {
+        ramblock_clear_iter_bmap(rb, start, length);
+    }
 
     /* start address and length is aligned at the start of a word? */
     if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) ==
@@ -503,8 +554,20 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
             if (src[idx][offset]) {
                 unsigned long bits = qatomic_xchg(&src[idx][offset], 0);
                 unsigned long new_dirty;
+                if (flag & (RAMBLOCK_SYN_MODERN_ITER |
+                            RAMBLOCK_SYN_MODERN_BACKGROUND)) {
+                    /* Back-up bmap for the next iteration */
+                    iter_bmap[k] |= bits;
+                    if (flag == RAMBLOCK_SYN_MODERN_BACKGROUND) {
+                        /* Back-up bmap to detect pages has been sent */
+                        shadow_bmap[k] = dest[k];
+                    }
+                }
                 new_dirty = ~dest[k];
-                dest[k] |= bits;
+                if (flag == RAMBLOCK_SYN_LEGACY_ITER) {
+                    dest[k] |= bits;
+                }
+
                 new_dirty &= bits;
                 num_dirty += ctpopl(new_dirty);
             }
@@ -534,18 +597,50 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
         ram_addr_t offset = rb->offset;
 
         for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
-            if (cpu_physical_memory_test_and_clear_dirty(
+            bool dirty = false;
+            long k = (start + addr) >> TARGET_PAGE_BITS;
+            if (flag == RAMBLOCK_SYN_MODERN_BACKGROUND) {
+                if (test_bit(k, dest)) {
+                    /* Back-up bmap to detect pages has been sent */
+                    set_bit(k, shadow_bmap);
+                }
+            }
+
+            dirty = cpu_physical_memory_test_and_clear_dirty(
                     start + addr + offset,
                     TARGET_PAGE_SIZE,
-                    DIRTY_MEMORY_MIGRATION)) {
-                long k = (start + addr) >> TARGET_PAGE_BITS;
-                if (!test_and_set_bit(k, dest)) {
+                    DIRTY_MEMORY_MIGRATION);
+
+            if (flag == RAMBLOCK_SYN_LEGACY_ITER) {
+                if (dirty && !test_and_set_bit(k, dest)) {
                     num_dirty++;
                 }
+            } else {
+                if (dirty) {
+                    if (!test_bit(k, dest)) {
+                        num_dirty++;
+                    }
+                    /* Back-up bmap for the next iteration */
+                    set_bit(k, iter_bmap);
+                }
             }
         }
     }
 
+    /*
+     * We have to re-fetch dirty pages from the iter_bmap one by one.
+     * It's possible that not all of the dirty pages that meant to
+     * send in the current iteration are included in the bitmap
+     * that the current sync retrieved from the KVM.
+     */
+    if (flag == RAMBLOCK_SYN_MODERN_ITER) {
+        ramblock_update_iter_bmap(rb, start, length);
+    }
+
+    if (flag == RAMBLOCK_SYN_MODERN_BACKGROUND) {
+        rb->background_sync_shown_up = true;
+    }
+
     return num_dirty;
 }
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index f29faa82d6..e205806a5f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -112,6 +112,8 @@
 
 XBZRLECacheStats xbzrle_counters;
 
+static RAMBlockSynMode sync_mode = RAMBLOCK_SYN_LEGACY;
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* The migration channel used for a specific host page */
@@ -912,13 +914,42 @@ bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start)
     return false;
 }
 
+static void ramblock_reset_iter_stats(RAMBlock *rb)
+{
+    bitmap_clear(rb->shadow_bmap, 0, rb->used_length >> TARGET_PAGE_BITS);
+    bitmap_clear(rb->iter_bmap, 0, rb->used_length >> TARGET_PAGE_BITS);
+    rb->iter_dirty_pages = 0;
+    rb->background_sync_shown_up = false;
+}
+
 /* Called with RCU critical section */
-static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
+static void ramblock_sync_dirty_bitmap(RAMState *rs,
+                                       RAMBlock *rb,
+                                       bool background)
 {
-    uint64_t new_dirty_pages =
-        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length);
+    uint64_t new_dirty_pages;
+    unsigned int flag = RAMBLOCK_SYN_LEGACY_ITER;
+
+    if (sync_mode == RAMBLOCK_SYN_MODERN) {
+        if (background) {
+            flag = RAMBLOCK_SYN_MODERN_BACKGROUND;
+        } else {
+            flag = RAMBLOCK_SYN_MODERN_ITER;
+        }
+    }
+
+    new_dirty_pages =
+        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length, flag);
+
+    if (flag & (RAMBLOCK_SYN_LEGACY_ITER | RAMBLOCK_SYN_MODERN_ITER)) {
+        if (flag == RAMBLOCK_SYN_LEGACY_ITER) {
+            rs->migration_dirty_pages += new_dirty_pages;
+        } else {
+            rs->migration_dirty_pages += rb->iter_dirty_pages;
+            ramblock_reset_iter_stats(rb);
+        }
+    }
 
-    rs->migration_dirty_pages += new_dirty_pages;
     rs->num_dirty_pages_period += new_dirty_pages;
 }
 
@@ -1041,7 +1072,9 @@ static void migration_trigger_throttle(RAMState *rs)
     }
 }
 
-static void migration_bitmap_sync(RAMState *rs, bool last_stage)
+static void migration_bitmap_sync(RAMState *rs,
+                                  bool last_stage,
+                                  bool background)
 {
     RAMBlock *block;
     int64_t end_time;
@@ -1058,7 +1091,7 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage)
     WITH_QEMU_LOCK_GUARD(&rs->bitmap_mutex) {
         WITH_RCU_READ_LOCK_GUARD() {
             RAMBLOCK_FOREACH_NOT_IGNORED(block) {
-                ramblock_sync_dirty_bitmap(rs, block);
+                ramblock_sync_dirty_bitmap(rs, block, background);
             }
             stat64_set(&mig_stats.dirty_bytes_last_sync, ram_bytes_remaining());
         }
@@ -1101,7 +1134,7 @@ static void migration_bitmap_sync_precopy(RAMState *rs, bool last_stage)
         local_err = NULL;
     }
 
-    migration_bitmap_sync(rs, last_stage);
+    migration_bitmap_sync(rs, last_stage, false);
 
     if (precopy_notify(PRECOPY_NOTIFY_AFTER_BITMAP_SYNC, &local_err)) {
         error_report_err(local_err);
@@ -2594,7 +2627,7 @@ void ram_postcopy_send_discard_bitmap(MigrationState *ms)
     RCU_READ_LOCK_GUARD();
 
     /* This should be our last sync, the src is now paused */
-    migration_bitmap_sync(rs, false);
+    migration_bitmap_sync(rs, false, false);
 
     /* Easiest way to make sure we don't resume in the middle of a host-page */
     rs->pss[RAM_CHANNEL_PRECOPY].last_sent_block = NULL;
@@ -3581,7 +3614,7 @@ void colo_incoming_start_dirty_log(void)
     memory_global_dirty_log_sync(false);
     WITH_RCU_READ_LOCK_GUARD() {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
-            ramblock_sync_dirty_bitmap(ram_state, block);
+            ramblock_sync_dirty_bitmap(ram_state, block, false);
             /* Discard this dirty bitmap record */
             bitmap_zero(block->bmap, block->max_length >> TARGET_PAGE_BITS);
         }
@@ -3862,7 +3895,7 @@ void colo_flush_ram_cache(void)
    qemu_mutex_lock(&ram_state->bitmap_mutex);
    WITH_RCU_READ_LOCK_GUARD() {
        RAMBLOCK_FOREACH_NOT_IGNORED(block) {
-            ramblock_sync_dirty_bitmap(ram_state, block);
+            ramblock_sync_dirty_bitmap(ram_state, block, false);
        }
    }
 
-- 
2.39.1