From: Lai Jiangshan
Date: Mon, 16 Apr 2018 23:00:11 +0800
Message-Id: <20180416150011.55916-1-jiangshanlai@gmail.com>
X-Mailer: git-send-email 2.15.1 (Apple Git-101)
Subject: [Qemu-devel] [PATCH V5] migration: add capability to bypass the shared memory

1) What's this

When the migration capability 'bypass-shared-memory' is set, shared
memory is bypassed during migration. This is the key building block for
several advanced features around QEMU, such as qemu-local-migration,
qemu-live-update, extremely-fast-save-restore, vm-template,
vm-fast-live-clone, yet-another-post-copy-migration, and so on.

The idea behind this capability, and the advanced features built on top
of it, is that part of the memory management is split out of QEMU, so
that other toolkits such as libvirt, kata-containers
(https://github.com/kata-containers), runv
(https://github.com/hyperhq/runv/), or several cooperating QEMU
processes can access that memory directly, manage it, and provide
features on top of it.

2) Status in the real world

HyperHQ (http://hyper.sh, http://hypercontainer.io/) introduced the
vm-template (vm-fast-live-clone) feature to hyper containers several
years ago and it has worked well since
(see https://github.com/hyperhq/runv/pull/297). With vm-template,
containers (VMs) can be started in 130ms and each container (VM) saves
about 80M of memory, so hyper containers are as fast and as dense as
ordinary containers.

The kata-containers project (https://github.com/kata-containers), which
was launched by Hyper, Intel and friends and which descends from runv
(and clear-containers), should have this feature enabled.
Unfortunately, due to code conflicts between runv and clear-containers,
the feature is temporarily disabled; it is being brought back by the
Hyper and Intel teams.

3) How to use it and bring up the advanced features

With the current QEMU command line, shared memory has to be configured
via a memory backend object.

a) feature: qemu-local-migration, qemu-live-update

Put the mem-path on tmpfs and set share=on for it when starting the VM,
for example:

   -object \
   memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/memory,share=on \
   -numa node,nodeid=0,cpus=0-7,memdev=mem

When you want to migrate the VM locally (for example after fixing a
security bug in the QEMU binary), start a new QEMU with the same
command line plus -incoming, then migrate the VM from the old QEMU to
the new one with the migration capability 'bypass-shared-memory' set.
The migration transfers the device state *ONLY*; the memory stays in
the original tmpfs-backed file.
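For illustration only (the unix socket path below is an arbitrary
example, and this assumes the HMP monitor; the QMP equivalent of the
capability command is migrate-set-capabilities), such a local migration
could be driven roughly like this:

   # start the new qemu with the same command line plus:
   #   -incoming unix:/tmp/qemu-local-migrate.sock
   # then, on the old qemu's monitor:
   (qemu) migrate_set_capability bypass-shared-memory on
   (qemu) migrate unix:/tmp/qemu-local-migrate.sock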
b) feature: extremely-fast-save-restore

The same as above, but the mem-path is on a persistent file system.

c) feature: vm-template, vm-fast-live-clone

The template VM is started as in a), paused when the guest reaches the
template point (for example, when the guest application is ready), and
then saved. (The QEMU process of the template can be killed at this
point, because only the memory file and the device state file, both in
tmpfs, are needed.) Then one or more VMs can be launched from the
template VM state. The new VMs are started without "share=on", so they
all share the initial memory from the memory file and save a lot of
memory. All the new VMs start from the template point, so the guest
application can get to work quickly. (A command sketch for this
workflow is given after section d below.)

A VM booted from a template VM can't become a template again; if you
need this unusual chained-template feature, you can write a
cloneable-tmpfs kernel module for it.

The libvirt toolkit can't manage vm-template currently; in
hyperhq/runv we use a QEMU wrapper script to do it. I hope someone adds
a "libvirt managed template" feature to libvirt.

d) feature: yet-another-post-copy-migration

This is a possible feature; no toolkit does it well today. Using an nbd
server/client on the memory file is just about workable but
inconvenient. A special tmpfs feature might be needed to complete this
properly. Nobody needs yet another post-copy migration method right
now, but it becomes possible should someone want it.
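As an illustration of the vm-template flow in c) (the state-file path
and the exec: transport are arbitrary examples, not part of this
patch), the template could be saved and a clone started roughly like
this:

   # on the template vm's monitor, once the guest reaches the template point:
   (qemu) stop
   (qemu) migrate_set_capability bypass-shared-memory on
   (qemu) migrate "exec:cat > /dev/shm/template-state"

   # start a clone: same memory file, but without share=on, with the
   # saved device state as the incoming stream:
   qemu-system-x86_64 ... \
      -object memory-backend-file,id=mem,size=128M,mem-path=/dev/shm/memory \
      -numa node,nodeid=0,cpus=0-7,memdev=mem \
      -incoming "exec:cat /dev/shm/template-state"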
Cc: Juan Quintela
Cc: "Dr. David Alan Gilbert"
Cc: Eric Blake
Cc: Markus Armbruster
Cc: Samuel Ortiz
Cc: Sebastien Boeuf
Cc: James O. D. Hunt
Cc: Xu Wang
Cc: Peng Tao
Cc: Xiao Guangrong
Cc: Xiao Guangrong
Signed-off-by: Lai Jiangshan
---
Changes in V5:
  check capability conflict in migrate_caps_check()

Changes in V4:
  fixes checkpatch.pl errors

Changes in V3:
  rebased on upstream master
  update the available version of the capability to v2.13

Changes in V2:
  rebased on 2.11.1

 migration/migration.c | 22 ++++++++++++++++++++++
 migration/migration.h |  1 +
 migration/ram.c       | 27 ++++++++++++++++++---------
 qapi/migration.json   |  6 +++++-
 4 files changed, 46 insertions(+), 10 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 52a5092add..110b40f6d4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -736,6 +736,19 @@ static bool migrate_caps_check(bool *cap_list,
             return false;
         }
 
+        if (cap_list[MIGRATION_CAPABILITY_BYPASS_SHARED_MEMORY]) {
+            /* Bypass and postcopy are quite conflicting ways
+             * to get memory in the destination.  And there
+             * is not code to discriminate the differences and
+             * handle the conflicts currently.  It should be possible
+             * to fix, but it is generally useless when both ways
+             * are used together.
+             */
+            error_setg(errp, "Bypass is not currently compatible "
+                       "with postcopy");
+            return false;
+        }
+
         /* This check is reasonably expensive, so only when it's being
          * set the first time, also it's only the destination that needs
          * special support.
@@ -1509,6 +1522,15 @@ bool migrate_release_ram(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_RELEASE_RAM];
 }
 
+bool migrate_bypass_shared_memory(void)
+{
+    MigrationState *s;
+
+    s = migrate_get_current();
+
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_BYPASS_SHARED_MEMORY];
+}
+
 bool migrate_postcopy_ram(void)
 {
     MigrationState *s;
diff --git a/migration/migration.h b/migration/migration.h
index 8d2f320c48..cfd2513ef0 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -206,6 +206,7 @@ MigrationState *migrate_get_current(void);
 
 bool migrate_postcopy(void);
 
+bool migrate_bypass_shared_memory(void);
 bool migrate_release_ram(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
diff --git a/migration/ram.c b/migration/ram.c
index 0e90efa092..bca170c386 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -780,6 +780,11 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
     unsigned long *bitmap = rb->bmap;
     unsigned long next;
 
+    /* when this ramblock is requested bypassing */
+    if (!bitmap) {
+        return size;
+    }
+
     if (rs->ram_bulk_stage && start > 0) {
         next = start + 1;
     } else {
@@ -850,7 +855,9 @@ static void migration_bitmap_sync(RAMState *rs)
     qemu_mutex_lock(&rs->bitmap_mutex);
     rcu_read_lock();
     RAMBLOCK_FOREACH(block) {
-        migration_bitmap_sync_range(rs, block, 0, block->used_length);
+        if (!migrate_bypass_shared_memory() || !qemu_ram_is_shared(block)) {
+            migration_bitmap_sync_range(rs, block, 0, block->used_length);
+        }
     }
     rcu_read_unlock();
     qemu_mutex_unlock(&rs->bitmap_mutex);
@@ -2132,18 +2139,12 @@ static int ram_state_init(RAMState **rsp)
     qemu_mutex_init(&(*rsp)->src_page_req_mutex);
     QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
-    /*
-     * Count the total number of pages used by ram blocks not including any
-     * gaps due to alignment or unplugs.
-     */
-    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
-
     ram_state_reset(*rsp);
 
     return 0;
 }
 
-static void ram_list_init_bitmaps(void)
+static void ram_list_init_bitmaps(RAMState *rs)
 {
     RAMBlock *block;
     unsigned long pages;
@@ -2151,9 +2152,17 @@ static void ram_list_init_bitmaps(void)
     /* Skip setting bitmap if there is no RAM */
     if (ram_bytes_total()) {
         QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+            if (migrate_bypass_shared_memory() && qemu_ram_is_shared(block)) {
+                continue;
+            }
             pages = block->max_length >> TARGET_PAGE_BITS;
             block->bmap = bitmap_new(pages);
             bitmap_set(block->bmap, 0, pages);
+            /*
+             * Count the total number of pages used by ram blocks not
+             * including any gaps due to alignment or unplugs.
+             */
+            rs->migration_dirty_pages += pages;
             if (migrate_postcopy_ram()) {
                 block->unsentmap = bitmap_new(pages);
                 bitmap_set(block->unsentmap, 0, pages);
@@ -2169,7 +2178,7 @@ static void ram_init_bitmaps(RAMState *rs)
     qemu_mutex_lock_ramlist();
     rcu_read_lock();
 
-    ram_list_init_bitmaps();
+    ram_list_init_bitmaps(rs);
     memory_global_dirty_log_start();
     migration_bitmap_sync(rs);
 
diff --git a/qapi/migration.json b/qapi/migration.json
index 9d0bf82cf4..45326480bd 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -357,13 +357,17 @@
 # @dirty-bitmaps: If enabled, QEMU will migrate named dirty bitmaps.
 #                 (since 2.12)
 #
+# @bypass-shared-memory: the shared memory region will be bypassed on migration.
+#                        This feature allows the memory region to be reused by new qemu(s)
+#                        or be migrated separately. (since 2.13)
+#
 # Since: 1.2
 ##
 { 'enum': 'MigrationCapability',
   'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks',
            'compress', 'events', 'postcopy-ram', 'x-colo', 'release-ram',
            'block', 'return-path', 'pause-before-switchover', 'x-multifd',
-           'dirty-bitmaps' ] }
+           'dirty-bitmaps', 'bypass-shared-memory' ] }
 
 ##
 # @MigrationCapabilityStatus:
-- 
2.15.1 (Apple Git-101)