From nobody Sat May 4 22:21:51 2024
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Subject: [PATCH v16 1/3] mem_sharing: fix sharability check during fork reset
Date: Tue, 21 Apr 2020 10:47:23 -0700
Message-Id: <8eb756357cb6d9222ed7ec4c0af58473160361a1.1587490511.git.tamas.lengyel@intel.com>
Cc: Tamas K Lengyel, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich,
    Roger Pau Monné
Content-Type: text/plain; charset="utf-8"

When resetting a VM fork we ought to only remove pages that were allocated
for the fork during its execution and whose contents were copied over from
the parent. This can be determined by checking whether the page is sharable,
as special pages used by the fork for other purposes will not pass this test.
Unfortunately, during the fork reset loop we only partially check whether
that's the case. A page's type may indicate it is sharable (it passes
p2m_is_sharable), but that is not a sufficient check by itself. All checks
that are normally performed before a page is converted to the sharable type
need to be performed to avoid removing pages from the p2m that may be used
for other purposes. For example, currently the reset loop also removes the
vcpu info pages from the p2m, potentially putting the guest into infinite
page-fault loops.

For this we extend the existing nominate_page and page_make_sharable
functions to perform a validation-only run without actually converting the
page.

Signed-off-by: Tamas K Lengyel
---
 xen/arch/x86/mm/mem_sharing.c | 79 ++++++++++++++++++++++-------------
 1 file changed, 50 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e572e9e39d..d8ed660abb 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -633,31 +633,35 @@ unsigned int mem_sharing_get_nr_shared_mfns(void)
 /* Functions that change a page's type and ownership */
 static int page_make_sharable(struct domain *d,
                               struct page_info *page,
-                              int expected_refcnt)
+                              int expected_refcnt,
+                              bool validate_only)
 {
-    bool_t drop_dom_ref;
+    int rc;
+    bool drop_dom_ref = false;
 
-    spin_lock(&d->page_alloc_lock);
+    /* caller already has the lock when validating only */
+    if ( !validate_only )
+        spin_lock(&d->page_alloc_lock);
 
     if ( d->is_dying )
     {
-        spin_unlock(&d->page_alloc_lock);
-        return -EBUSY;
+        rc = -EBUSY;
+        goto out;
     }
 
     /* Change page type and count atomically */
     if ( !get_page_and_type(page, d, PGT_shared_page) )
     {
-        spin_unlock(&d->page_alloc_lock);
-        return -EINVAL;
+        rc = -EINVAL;
+        goto out;
     }
 
     /* Check it wasn't already sharable and undo if it was */
     if ( (page->u.inuse.type_info & PGT_count_mask) != 1 )
     {
-        spin_unlock(&d->page_alloc_lock);
         put_page_and_type(page);
-        return -EEXIST;
+        rc = -EEXIST;
+        goto out;
     }
 
     /*
@@ -666,20 +670,31 @@ static int page_make_sharable(struct domain *d,
      */
     if ( page->count_info != (PGC_allocated | (2 + expected_refcnt)) )
     {
-        spin_unlock(&d->page_alloc_lock);
         /* Return type count back to zero */
         put_page_and_type(page);
-        return -E2BIG;
+        rc = -E2BIG;
+        goto out;
+    }
+
+    rc = 0;
+
+    if ( validate_only )
+    {
+        put_page_and_type(page);
+        goto out;
     }
 
     page_set_owner(page, dom_cow);
     drop_dom_ref = !domain_adjust_tot_pages(d, -1);
     page_list_del(page, &d->page_list);
-    spin_unlock(&d->page_alloc_lock);
 
+out:
+    if ( !validate_only )
+        spin_unlock(&d->page_alloc_lock);
     if ( drop_dom_ref )
         put_domain(d);
-    return 0;
+
+    return rc;
 }
 
 static int page_make_private(struct domain *d, struct page_info *page)
@@ -809,8 +824,8 @@ static int debug_gref(struct domain *d, grant_ref_t ref)
     return debug_gfn(d, gfn);
 }
 
-static int nominate_page(struct domain *d, gfn_t gfn,
-                         int expected_refcnt, shr_handle_t *phandle)
+static int nominate_page(struct domain *d, gfn_t gfn, int expected_refcnt,
+                         bool validate_only, shr_handle_t *phandle)
 {
     struct p2m_domain *hp2m = p2m_get_hostp2m(d);
     p2m_type_t p2mt;
@@ -879,8 +894,8 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     }
 
     /* Try to convert the mfn to the sharable type */
-    ret = page_make_sharable(d, page, expected_refcnt);
-    if ( ret )
+    ret = page_make_sharable(d, page, expected_refcnt, validate_only);
+    if ( ret || validate_only )
         goto out;
 
     /*
@@ -1392,13 +1407,13 @@ static int range_share(struct domain *d, struct domain *cd,
          * We only break out if we run out of memory as individual pages may
          * legitimately be unsharable and we just want to skip over those.
          */
-        rc = nominate_page(d, _gfn(start), 0, &sh);
+        rc = nominate_page(d, _gfn(start), 0, false, &sh);
        if ( rc == -ENOMEM )
            break;

        if ( !rc )
        {
-            rc = nominate_page(cd, _gfn(start), 0, &ch);
+            rc = nominate_page(cd, _gfn(start), 0, false, &ch);
            if ( rc == -ENOMEM )
                break;

@@ -1476,7 +1491,7 @@ int mem_sharing_fork_page(struct domain *d, gfn_t gfn, bool unsharing)
         /* For read-only accesses we just add a shared entry to the physmap */
         while ( parent )
         {
-            if ( !(rc = nominate_page(parent, gfn, 0, &handle)) )
+            if ( !(rc = nominate_page(parent, gfn, 0, false, &handle)) )
                 break;
 
             parent = parent->parent;
@@ -1773,16 +1788,22 @@ static int mem_sharing_fork_reset(struct domain *d, struct domain *pd)
     spin_lock_recursive(&d->page_alloc_lock);
     page_list_for_each_safe(page, tmp, &d->page_list)
     {
-        p2m_type_t p2mt;
-        p2m_access_t p2ma;
+        shr_handle_t sh;
         mfn_t mfn = page_to_mfn(page);
         gfn_t gfn = mfn_to_gfn(d, mfn);
 
-        mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &p2mt, &p2ma,
-                                    0, NULL, false);
-
-        /* only reset pages that are sharable */
-        if ( !p2m_is_sharable(p2mt) )
+        /*
+         * We only want to remove pages from the fork here that were copied
+         * from the parent but could be potentially re-populated using memory
+         * sharing after the reset. These pages all must be regular pages with
+         * no extra reference held to them, thus it should be possible to make
+         * them sharable. Unfortunately the p2m_is_sharable check is not
+         * sufficient to test this as it doesn't check the page's reference
+         * count. We thus check whether the page is convertible to the shared
+         * type using nominate_page. In case the page is already shared (i.e.
+         * a share handle is returned) then we don't remove it.
+         */
+        if ( (rc = nominate_page(d, gfn, 0, true, &sh)) || sh )
             continue;
 
         /* take an extra reference or just skip if can't for whatever reason */
@@ -1836,7 +1857,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
     {
         shr_handle_t handle;
 
-        rc = nominate_page(d, _gfn(mso.u.nominate.u.gfn), 0, &handle);
+        rc = nominate_page(d, _gfn(mso.u.nominate.u.gfn), 0, false, &handle);
         mso.u.nominate.handle = handle;
     }
     break;
@@ -1851,7 +1872,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         if ( rc < 0 )
             goto out;
 
-        rc = nominate_page(d, gfn, 3, &handle);
+        rc = nominate_page(d, gfn, 3, false, &handle);
         mso.u.nominate.handle = handle;
     }
     break;
-- 
2.20.1
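(The control flow added above is easier to see outside of the diff context.
The stand-alone C sketch below is illustrative only: the struct, helper names
and error codes are invented and are not part of the patch. It mirrors how
page_make_sharable() now runs every conversion check but backs out the type
reference it took when the caller only asked for validation, which is what
lets mem_sharing_fork_reset() probe a page with nominate_page(..., true, ...)
without perturbing it.)

#include <stdbool.h>
#include <stdio.h>

struct fake_page {
    int type_count;   /* stands in for the PGT type count        */
    int ref_count;    /* stands in for the PGC reference count   */
    bool shared;      /* set only when a real conversion happens */
};

static int make_sharable(struct fake_page *pg, int expected_refcnt,
                         bool validate_only)
{
    int rc;

    /* "get_page_and_type": take a type reference before checking further. */
    pg->type_count++;

    /* Same checks a real conversion performs. */
    if ( pg->type_count != 1 )
        rc = -1;                              /* already typed/shared */
    else if ( pg->ref_count != expected_refcnt )
        rc = -2;                              /* unexpected extra references */
    else
        rc = 0;

    /* On failure, or when only validating, undo the reference we took. */
    if ( rc || validate_only )
    {
        pg->type_count--;
        return rc;
    }

    pg->shared = true;                        /* commit the conversion */
    return 0;
}

int main(void)
{
    struct fake_page pg = { .type_count = 0, .ref_count = 1 };

    /* Validation succeeds but leaves the page untouched... */
    printf("validate: %d shared=%d\n", make_sharable(&pg, 1, true), pg.shared);
    /* ...so a later real conversion can still go ahead. */
    printf("convert:  %d shared=%d\n", make_sharable(&pg, 1, false), pg.shared);
    return 0;
}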
From nobody Sat May 4 22:21:51 2024
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Subject: [PATCH v16 2/3] mem_sharing: allow forking domain with IOMMU enabled
Date: Tue, 21 Apr 2020 10:47:24 -0700
Cc: Tamas K Lengyel, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    Stefano Stabellini, Jan Beulich, Julien Grall, Roger Pau Monné
Content-Type: text/plain; charset="utf-8"

The memory sharing subsystem by default doesn't allow a domain to share
memory if it has an IOMMU active, for obvious security reasons. However,
when fuzzing a VM fork, the same security restrictions don't necessarily
apply. While it makes no sense to try to create a full fork of a VM that has
an IOMMU attached, as only one domain can own the pass-through device at a
time, creating a shallow fork without a device model is still very useful
for fuzzing kernel-mode drivers. By allowing the parent VM to initialize the
kernel-mode driver with a real pass-through device, the driver can enter a
state more suitable for fuzzing. Some of these initialization steps are
quite complex and are easier to perform when a real device is present. After
the initialization, shallow forks can be utilized for fuzzing code segments
in the device driver that don't directly interact with the device.
Signed-off-by: Tamas K Lengyel
Reviewed-by: Roger Pau Monné
---
v16: Minor fixes based on feedback
---
 xen/arch/x86/mm/mem_sharing.c | 20 +++++++++++++-------
 xen/include/public/memory.h  |  4 +++-
 2 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index d8ed660abb..e690d2fa13 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1445,7 +1445,8 @@ static int range_share(struct domain *d, struct domain *cd,
     return rc;
 }
 
-static inline int mem_sharing_control(struct domain *d, bool enable)
+static inline int mem_sharing_control(struct domain *d, bool enable,
+                                      uint16_t flags)
 {
     if ( enable )
     {
@@ -1455,7 +1456,8 @@ static inline int mem_sharing_control(struct domain *d, bool enable)
         if ( unlikely(!hap_enabled(d)) )
             return -ENODEV;
 
-        if ( unlikely(is_iommu_enabled(d)) )
+        if ( unlikely(is_iommu_enabled(d) &&
+                      !(flags & XENMEM_FORK_WITH_IOMMU_ALLOWED)) )
             return -EXDEV;
     }
 
@@ -1848,7 +1850,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
     if ( rc )
         goto out;
 
-    if ( !mem_sharing_enabled(d) && (rc = mem_sharing_control(d, true)) )
+    if ( !mem_sharing_enabled(d) &&
+         (rc = mem_sharing_control(d, true, 0)) )
         return rc;
 
     switch ( mso.op )
@@ -2086,7 +2089,9 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         struct domain *pd;
 
         rc = -EINVAL;
-        if ( mso.u.fork.pad[0] || mso.u.fork.pad[1] || mso.u.fork.pad[2] )
+        if ( mso.u.fork.pad )
+            goto out;
+        if ( mso.u.fork.flags & ~XENMEM_FORK_WITH_IOMMU_ALLOWED )
             goto out;
 
         rc = rcu_lock_live_remote_domain_by_id(mso.u.fork.parent_domain,
@@ -2101,7 +2106,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
             goto out;
         }
 
-        if ( !mem_sharing_enabled(pd) && (rc = mem_sharing_control(pd, true)) )
+        if ( !mem_sharing_enabled(pd) &&
+             (rc = mem_sharing_control(pd, true, mso.u.fork.flags)) )
         {
             rcu_unlock_domain(pd);
             goto out;
@@ -2122,7 +2128,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         struct domain *pd;
 
         rc = -EINVAL;
-        if ( mso.u.fork.pad[0] || mso.u.fork.pad[1] || mso.u.fork.pad[2] )
+        if ( mso.u.fork.pad || mso.u.fork.flags )
             goto out;
 
         rc = -ENOSYS;
@@ -2159,7 +2165,7 @@ int mem_sharing_domctl(struct domain *d, struct xen_domctl_mem_sharing_op *mec)
     switch ( mec->op )
     {
     case XEN_DOMCTL_MEM_SHARING_CONTROL:
-        rc = mem_sharing_control(d, mec->u.enable);
+        rc = mem_sharing_control(d, mec->u.enable, 0);
         break;
 
     default:
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index d36d64b8dc..e56800357d 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -536,7 +536,9 @@ struct xen_mem_sharing_op {
         } debug;
         struct mem_sharing_op_fork {      /* OP_FORK */
             domid_t parent_domain;        /* IN: parent's domain id */
-            uint16_t pad[3];              /* Must be set to 0 */
+#define XENMEM_FORK_WITH_IOMMU_ALLOWED (1u << 0)
+            uint16_t flags;               /* IN: optional settings */
+            uint32_t pad;                 /* Must be set to 0 */
         } fork;
     } u;
 };
-- 
2.20.1
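(To see how the new flag is meant to be consumed, the short sketch below shows
a fuzzing harness opting in to forking an IOMMU-enabled parent. It uses the
xc_memshr_fork() wrapper added by patch 3 of this series; the function name
and domain IDs are placeholders, and the snippet assumes a Xen development
environment providing xenctrl.h. The xl fork-vm code in patch 3 wires the
same flag through its --allow-iommu option.)

#include <stdbool.h>
#include <stdint.h>
#include <xenctrl.h>

/*
 * Fork 'parent' into the pre-created empty domain 'fork'.  Without the
 * opt-in, mem_sharing_control() in the hypervisor rejects parents that have
 * an IOMMU active with -EXDEV; passing allow_with_iommu = true sets
 * XENMEM_FORK_WITH_IOMMU_ALLOWED and relaxes that check.  Only sensible when
 * the forks never touch the pass-through device.
 */
static int fork_for_driver_fuzzing(xc_interface *xch, uint32_t parent,
                                   uint32_t fork, bool parent_has_iommu)
{
    return xc_memshr_fork(xch, parent, fork,
                          /* allow_with_iommu = */ parent_has_iommu);
}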
From nobody Sat May 4 22:21:51 2024
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Subject: [PATCH v16 3/3] xen/tools: VM forking toolstack side
Date: Tue, 21 Apr 2020 10:47:25 -0700
Message-Id: <1f411b499c98393786a91baf3b8ac0e0d5c6b109.1587490511.git.tamas.lengyel@intel.com>
Cc: Anthony PERARD, Ian Jackson, Tamas K Lengyel, Wei Liu
Content-Type: text/plain; charset="utf-8"

Add the necessary bits to implement the "xl fork-vm" command.
The command allows t= he user to specify how to launch the device model allowing for a late-launch m= odel in which the user can execute the fork without the device model and decide = to only later launch it. Signed-off-by: Tamas K Lengyel --- docs/man/xl.1.pod.in | 44 +++++ tools/libxc/include/xenctrl.h | 14 ++ tools/libxc/xc_memshr.c | 26 +++ tools/libxl/libxl.h | 12 ++ tools/libxl/libxl_create.c | 361 +++++++++++++++++++--------------- tools/libxl/libxl_dm.c | 2 +- tools/libxl/libxl_dom.c | 43 +++- tools/libxl/libxl_internal.h | 7 + tools/libxl/libxl_types.idl | 1 + tools/libxl/libxl_x86.c | 42 ++++ tools/xl/Makefile | 2 +- tools/xl/xl.h | 5 + tools/xl/xl_cmdtable.c | 15 ++ tools/xl/xl_forkvm.c | 149 ++++++++++++++ tools/xl/xl_vmcontrol.c | 14 ++ 15 files changed, 571 insertions(+), 166 deletions(-) create mode 100644 tools/xl/xl_forkvm.c diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in index 09339282e6..59c03c6427 100644 --- a/docs/man/xl.1.pod.in +++ b/docs/man/xl.1.pod.in @@ -708,6 +708,50 @@ above). =20 =3Dback =20 +=3Ditem B [I] I + +Create a fork of a running VM. The domain will be paused after the operat= ion +and remains paused while forks of it exist. Experimental and x86 only. +Forks can only be made of domains with HAP enabled and on Intel hardware. = The +parent domain must be created with the xl toolstack and its configuration = must +not manually define max_grant_frames, max_maptrack_frames or max_event_cha= nnels. + +B + +=3Dover 4 + +=3Ditem B<-p> + +Leave the fork paused after creating it. + +=3Ditem B<--launch-dm> + +Specify whether the device model (QEMU) should be launched for the fork. L= ate +launch allows to start the device model for an already running fork. + +=3Ditem B<-C> + +The config file to use when launching the device model. Currently require= d when +launching the device model. Most config settings MUST match the parent do= main +exactly, only change VM name, disk path and network configurations. + +=3Ditem B<-Q> + +The path to the qemu save file to use when launching the device model. Cu= rrently +required when launching the device model. + +=3Ditem B<--fork-reset> + +Perform a reset operation of an already running fork. Note that resetting= may +be less performant then creating a new fork depending on how much memory t= he +fork has deduplicated during its runtime. + +=3Ditem B<--max-vcpus> + +Specify the max-vcpus matching the parent domain when not launching the dm. + +=3Dback + =3Ditem B [I] =20 Display the number of shared pages for a specified domain. If no domain is diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h index 5f25c5a6d4..0a6ff93229 100644 --- a/tools/libxc/include/xenctrl.h +++ b/tools/libxc/include/xenctrl.h @@ -2232,6 +2232,20 @@ int xc_memshr_range_share(xc_interface *xch, uint64_t first_gfn, uint64_t last_gfn); =20 +int xc_memshr_fork(xc_interface *xch, + uint32_t source_domain, + uint32_t client_domain, + bool allow_with_iommu); + +/* + * Note: this function is only intended to be used on short-lived forks th= at + * haven't yet aquired a lot of memory. In case the fork has a lot of memo= ry + * it is likely more performant to create a new fork with xc_memshr_fork. + * + * With VMs that have a lot of memory this call may block for a long time. + */ +int xc_memshr_fork_reset(xc_interface *xch, uint32_t forked_domain); + /* Debug calls: return the number of pages referencing the shared frame ba= cking * the input argument. Should be one or greater. 
* diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c index 97e2e6a8d9..2300cc7075 100644 --- a/tools/libxc/xc_memshr.c +++ b/tools/libxc/xc_memshr.c @@ -239,6 +239,32 @@ int xc_memshr_debug_gref(xc_interface *xch, return xc_memshr_memop(xch, domid, &mso); } =20 +int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid, + bool allow_with_iommu) +{ + xen_mem_sharing_op_t mso; + + memset(&mso, 0, sizeof(mso)); + + mso.op =3D XENMEM_sharing_op_fork; + mso.u.fork.parent_domain =3D pdomid; + + if ( allow_with_iommu ) + mso.u.fork.flags |=3D XENMEM_FORK_WITH_IOMMU_ALLOWED; + + return xc_memshr_memop(xch, domid, &mso); +} + +int xc_memshr_fork_reset(xc_interface *xch, uint32_t domid) +{ + xen_mem_sharing_op_t mso; + + memset(&mso, 0, sizeof(mso)); + mso.op =3D XENMEM_sharing_op_fork_reset; + + return xc_memshr_memop(xch, domid, &mso); +} + int xc_memshr_audit(xc_interface *xch) { xen_mem_sharing_op_t mso; diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index 71709dc585..d8da347d4e 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -2666,6 +2666,18 @@ int libxl_psr_get_hw_info(libxl_ctx *ctx, libxl_psr_= feat_type type, unsigned int lvl, unsigned int *nr, libxl_psr_hw_info **info); void libxl_psr_hw_info_list_free(libxl_psr_hw_info *list, unsigned int nr); + +int libxl_domain_fork_vm(libxl_ctx *ctx, uint32_t pdomid, uint32_t max_vcp= us, uint32_t *domid, + bool allow_with_iommu) + LIBXL_EXTERNAL_CALLERS_ONLY; + +int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_con= fig, + uint32_t domid, + const libxl_asyncprogress_how *aop_console= _how) + LIBXL_EXTERNAL_CALLERS_ONLY; + +int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid) + LIBXL_EXTERNAL_CALLERS_ONLY; #endif =20 /* misc */ diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c index e7cb2dbc2b..5705b6e3a5 100644 --- a/tools/libxl/libxl_create.c +++ b/tools/libxl/libxl_create.c @@ -538,12 +538,12 @@ out: return ret; } =20 -int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config, - libxl__domain_build_state *state, - uint32_t *domid, bool soft_reset) +static int libxl__domain_make_xs_entries(libxl__gc *gc, libxl_domain_confi= g *d_config, + libxl__domain_build_state *state, + uint32_t domid) { libxl_ctx *ctx =3D libxl__gc_owner(gc); - int ret, rc, nb_vm; + int rc, nb_vm; const char *dom_type; char *uuid_string; char *dom_path, *vm_path, *libxl_path; @@ -555,9 +555,6 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_conf= ig *d_config, =20 /* convenience aliases */ libxl_domain_create_info *info =3D &d_config->c_info; - libxl_domain_build_info *b_info =3D &d_config->b_info; - - assert(soft_reset || *domid =3D=3D INVALID_DOMID); =20 uuid_string =3D libxl__uuid2string(gc, info->uuid); if (!uuid_string) { @@ -565,137 +562,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_co= nfig *d_config, goto out; } =20 - if (!soft_reset) { - struct xen_domctl_createdomain create =3D { - .ssidref =3D info->ssidref, - .max_vcpus =3D b_info->max_vcpus, - .max_evtchn_port =3D b_info->event_channels, - .max_grant_frames =3D b_info->max_grant_frames, - .max_maptrack_frames =3D b_info->max_maptrack_frames, - }; - - if (info->type !=3D LIBXL_DOMAIN_TYPE_PV) { - create.flags |=3D XEN_DOMCTL_CDF_hvm; - create.flags |=3D - libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0; - create.flags |=3D - libxl_defbool_val(info->oos) ? 
0 : XEN_DOMCTL_CDF_oos_off; - } - - assert(info->passthrough !=3D LIBXL_PASSTHROUGH_DEFAULT); - LOG(DETAIL, "passthrough: %s", - libxl_passthrough_to_string(info->passthrough)); - - if (info->passthrough !=3D LIBXL_PASSTHROUGH_DISABLED) - create.flags |=3D XEN_DOMCTL_CDF_iommu; - - if (info->passthrough =3D=3D LIBXL_PASSTHROUGH_SYNC_PT) - create.iommu_opts |=3D XEN_DOMCTL_IOMMU_no_sharept; - - /* Ultimately, handle is an array of 16 uint8_t, same as uuid */ - libxl_uuid_copy(ctx, (libxl_uuid *)&create.handle, &info->uuid); - - ret =3D libxl__arch_domain_prepare_config(gc, d_config, &create); - if (ret < 0) { - LOGED(ERROR, *domid, "fail to get domain config"); - rc =3D ERROR_FAIL; - goto out; - } - - for (;;) { - uint32_t local_domid; - bool recent; - - if (info->domid =3D=3D RANDOM_DOMID) { - uint16_t v; - - ret =3D libxl__random_bytes(gc, (void *)&v, sizeof(v)); - if (ret < 0) - break; - - v &=3D DOMID_MASK; - if (!libxl_domid_valid_guest(v)) - continue; - - local_domid =3D v; - } else { - local_domid =3D info->domid; /* May not be valid */ - } - - ret =3D xc_domain_create(ctx->xch, &local_domid, &create); - if (ret < 0) { - /* - * If we generated a random domid and creation failed - * because that domid already exists then simply try - * again. - */ - if (errno =3D=3D EEXIST && info->domid =3D=3D RANDOM_DOMID) - continue; - - LOGED(ERROR, local_domid, "domain creation fail"); - rc =3D ERROR_FAIL; - goto out; - } - - /* A new domain now exists */ - *domid =3D local_domid; - - rc =3D libxl__is_domid_recent(gc, local_domid, &recent); - if (rc) - goto out; - - /* The domid is not recent, so we're done */ - if (!recent) - break; - - /* - * If the domid was specified then there's no point in - * trying again. - */ - if (libxl_domid_valid_guest(info->domid)) { - LOGED(ERROR, local_domid, "domain id recently used"); - rc =3D ERROR_FAIL; - goto out; - } - - /* - * The domain is recent and so cannot be used. Clear domid - * here since, if xc_domain_destroy() fails below there is - * little point calling it again in the error path. - */ - *domid =3D INVALID_DOMID; - - ret =3D xc_domain_destroy(ctx->xch, local_domid); - if (ret < 0) { - LOGED(ERROR, local_domid, "domain destroy fail"); - rc =3D ERROR_FAIL; - goto out; - } - - /* The domain was successfully destroyed, so we can try again = */ - } - - rc =3D libxl__arch_domain_save_config(gc, d_config, state, &create= ); - if (rc < 0) - goto out; - } - - /* - * If soft_reset is set the the domid will have been valid on entry. - * If it was not set then xc_domain_create() should have assigned a - * valid value. Either way, if we reach this point, domid should be - * valid. 
- */ - assert(libxl_domid_valid_guest(*domid)); - - ret =3D xc_cpupool_movedomain(ctx->xch, info->poolid, *domid); - if (ret < 0) { - LOGED(ERROR, *domid, "domain move fail"); - rc =3D ERROR_FAIL; - goto out; - } - - dom_path =3D libxl__xs_get_dompath(gc, *domid); + dom_path =3D libxl__xs_get_dompath(gc, domid); if (!dom_path) { rc =3D ERROR_FAIL; goto out; @@ -703,12 +570,12 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_co= nfig *d_config, =20 vm_path =3D GCSPRINTF("/vm/%s", uuid_string); if (!vm_path) { - LOGD(ERROR, *domid, "cannot allocate create paths"); + LOGD(ERROR, domid, "cannot allocate create paths"); rc =3D ERROR_FAIL; goto out; } =20 - libxl_path =3D libxl__xs_libxl_path(gc, *domid); + libxl_path =3D libxl__xs_libxl_path(gc, domid); if (!libxl_path) { rc =3D ERROR_FAIL; goto out; @@ -719,10 +586,10 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_co= nfig *d_config, =20 roperm[0].id =3D 0; roperm[0].perms =3D XS_PERM_NONE; - roperm[1].id =3D *domid; + roperm[1].id =3D domid; roperm[1].perms =3D XS_PERM_READ; =20 - rwperm[0].id =3D *domid; + rwperm[0].id =3D domid; rwperm[0].perms =3D XS_PERM_NONE; =20 retry_transaction: @@ -740,7 +607,7 @@ retry_transaction: noperm, ARRAY_SIZE(noperm)); =20 xs_write(ctx->xsh, t, GCSPRINTF("%s/vm", dom_path), vm_path, strlen(vm= _path)); - rc =3D libxl__domain_rename(gc, *domid, 0, info->name, t); + rc =3D libxl__domain_rename(gc, domid, 0, info->name, t); if (rc) goto out; =20 @@ -830,7 +697,7 @@ retry_transaction: =20 vm_list =3D libxl_list_vm(ctx, &nb_vm); if (!vm_list) { - LOGD(ERROR, *domid, "cannot get number of running guests"); + LOGD(ERROR, domid, "cannot get number of running guests"); rc =3D ERROR_FAIL; goto out; } @@ -854,7 +721,7 @@ retry_transaction: t =3D 0; goto retry_transaction; } - LOGED(ERROR, *domid, "domain creation ""xenstore transaction commi= t failed"); + LOGED(ERROR, domid, "domain creation ""xenstore transaction commit= failed"); rc =3D ERROR_FAIL; goto out; } @@ -866,6 +733,155 @@ retry_transaction: return rc; } =20 +int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config, + libxl__domain_build_state *state, + uint32_t *domid, bool soft_reset) +{ + libxl_ctx *ctx =3D libxl__gc_owner(gc); + int ret, rc; + + /* convenience aliases */ + libxl_domain_create_info *info =3D &d_config->c_info; + libxl_domain_build_info *b_info =3D &d_config->b_info; + + assert(soft_reset || *domid =3D=3D INVALID_DOMID); + + if (!soft_reset) { + struct xen_domctl_createdomain create =3D { + .ssidref =3D info->ssidref, + .max_vcpus =3D b_info->max_vcpus, + .max_evtchn_port =3D b_info->event_channels, + .max_grant_frames =3D b_info->max_grant_frames, + .max_maptrack_frames =3D b_info->max_maptrack_frames, + }; + + if (info->type !=3D LIBXL_DOMAIN_TYPE_PV) { + create.flags |=3D XEN_DOMCTL_CDF_hvm; + create.flags |=3D + libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0; + create.flags |=3D + libxl_defbool_val(info->oos) ? 
0 : XEN_DOMCTL_CDF_oos_off; + } + + assert(info->passthrough !=3D LIBXL_PASSTHROUGH_DEFAULT); + LOG(DETAIL, "passthrough: %s", + libxl_passthrough_to_string(info->passthrough)); + + if (info->passthrough !=3D LIBXL_PASSTHROUGH_DISABLED) + create.flags |=3D XEN_DOMCTL_CDF_iommu; + + if (info->passthrough =3D=3D LIBXL_PASSTHROUGH_SYNC_PT) + create.iommu_opts |=3D XEN_DOMCTL_IOMMU_no_sharept; + + /* Ultimately, handle is an array of 16 uint8_t, same as uuid */ + libxl_uuid_copy(ctx, (libxl_uuid *)&create.handle, &info->uuid); + + ret =3D libxl__arch_domain_prepare_config(gc, d_config, &create); + if (ret < 0) { + LOGED(ERROR, *domid, "fail to get domain config"); + rc =3D ERROR_FAIL; + goto out; + } + + for (;;) { + uint32_t local_domid; + bool recent; + + if (info->domid =3D=3D RANDOM_DOMID) { + uint16_t v; + + ret =3D libxl__random_bytes(gc, (void *)&v, sizeof(v)); + if (ret < 0) + break; + + v &=3D DOMID_MASK; + if (!libxl_domid_valid_guest(v)) + continue; + + local_domid =3D v; + } else { + local_domid =3D info->domid; /* May not be valid */ + } + + ret =3D xc_domain_create(ctx->xch, &local_domid, &create); + if (ret < 0) { + /* + * If we generated a random domid and creation failed + * because that domid already exists then simply try + * again. + */ + if (errno =3D=3D EEXIST && info->domid =3D=3D RANDOM_DOMID) + continue; + + LOGED(ERROR, local_domid, "domain creation fail"); + rc =3D ERROR_FAIL; + goto out; + } + + /* A new domain now exists */ + *domid =3D local_domid; + + rc =3D libxl__is_domid_recent(gc, local_domid, &recent); + if (rc) + goto out; + + /* The domid is not recent, so we're done */ + if (!recent) + break; + + /* + * If the domid was specified then there's no point in + * trying again. + */ + if (libxl_domid_valid_guest(info->domid)) { + LOGED(ERROR, local_domid, "domain id recently used"); + rc =3D ERROR_FAIL; + goto out; + } + + /* + * The domain is recent and so cannot be used. Clear domid + * here since, if xc_domain_destroy() fails below there is + * little point calling it again in the error path. + */ + *domid =3D INVALID_DOMID; + + ret =3D xc_domain_destroy(ctx->xch, local_domid); + if (ret < 0) { + LOGED(ERROR, local_domid, "domain destroy fail"); + rc =3D ERROR_FAIL; + goto out; + } + + /* The domain was successfully destroyed, so we can try again = */ + } + + rc =3D libxl__arch_domain_save_config(gc, d_config, state, &create= ); + if (rc < 0) + goto out; + } + + /* + * If soft_reset is set the the domid will have been valid on entry. + * If it was not set then xc_domain_create() should have assigned a + * valid value. Either way, if we reach this point, domid should be + * valid. 
+ */ + assert(libxl_domid_valid_guest(*domid)); + + ret =3D xc_cpupool_movedomain(ctx->xch, info->poolid, *domid); + if (ret < 0) { + LOGED(ERROR, *domid, "domain move fail"); + rc =3D ERROR_FAIL; + goto out; + } + + rc =3D libxl__domain_make_xs_entries(gc, d_config, state, *domid); + +out: + return rc; +} + static int store_libxl_entry(libxl__gc *gc, uint32_t domid, libxl_domain_build_info *b_info) { @@ -1191,16 +1207,32 @@ static void initiate_domain_create(libxl__egc *egc, ret =3D libxl__domain_config_setdefault(gc,d_config,domid); if (ret) goto error_out; =20 - ret =3D libxl__domain_make(gc, d_config, &dcs->build_state, &domid, - dcs->soft_reset); - if (ret) { - LOGD(ERROR, domid, "cannot make domain: %d", ret); + if ( !d_config->dm_restore_file ) + { + ret =3D libxl__domain_make(gc, d_config, &dcs->build_state, &domid, + dcs->soft_reset); dcs->guest_domid =3D domid; + + if (ret) { + LOGD(ERROR, domid, "cannot make domain: %d", ret); + ret =3D ERROR_FAIL; + goto error_out; + } + } else if ( dcs->guest_domid !=3D INVALID_DOMID ) { + domid =3D dcs->guest_domid; + + ret =3D libxl__domain_make_xs_entries(gc, d_config, &dcs->build_st= ate, domid); + if (ret) { + LOGD(ERROR, domid, "cannot make domain: %d", ret); + ret =3D ERROR_FAIL; + goto error_out; + } + } else { + LOGD(ERROR, domid, "cannot make domain"); ret =3D ERROR_FAIL; goto error_out; } =20 - dcs->guest_domid =3D domid; dcs->sdss.dm.guest_domid =3D 0; /* means we haven't spawned */ =20 /* post-4.13 todo: move these next bits of defaulting to @@ -1236,7 +1268,7 @@ static void initiate_domain_create(libxl__egc *egc, if (ret) goto error_out; =20 - if (restore_fd >=3D 0 || dcs->soft_reset) { + if (restore_fd >=3D 0 || dcs->soft_reset || d_config->dm_restore_file)= { LOGD(DEBUG, domid, "restoring, not running bootloader"); domcreate_bootloader_done(egc, &dcs->bl, 0); } else { @@ -1312,7 +1344,16 @@ static void domcreate_bootloader_done(libxl__egc *eg= c, dcs->sdss.dm.callback =3D domcreate_devmodel_started; dcs->sdss.callback =3D domcreate_devmodel_started; =20 - if (restore_fd < 0 && !dcs->soft_reset) { + if (restore_fd < 0 && !dcs->soft_reset && !d_config->dm_restore_file) { + rc =3D libxl__domain_build(gc, d_config, domid, state); + domcreate_rebuild_done(egc, dcs, rc); + return; + } + + if ( d_config->dm_restore_file ) { + dcs->srs.dcs =3D dcs; + dcs->srs.ao =3D ao; + state->forked_vm =3D true; rc =3D libxl__domain_build(gc, d_config, domid, state); domcreate_rebuild_done(egc, dcs, rc); return; @@ -1510,6 +1551,7 @@ static void domcreate_rebuild_done(libxl__egc *egc, /* convenience aliases */ const uint32_t domid =3D dcs->guest_domid; libxl_domain_config *const d_config =3D dcs->guest_config; + libxl__domain_build_state *const state =3D &dcs->build_state; =20 if (ret) { LOGD(ERROR, domid, "cannot (re-)build domain: %d", ret); @@ -1517,6 +1559,9 @@ static void domcreate_rebuild_done(libxl__egc *egc, goto error_out; } =20 + if ( d_config->dm_restore_file ) + state->saved_state =3D GCSPRINTF("%s", d_config->dm_restore_file); + store_libxl_entry(gc, domid, &d_config->b_info); =20 libxl__multidev_begin(ao, &dcs->multidev); @@ -1947,7 +1992,7 @@ static void domain_create_cb(libxl__egc *egc, libxl__domain_create_state *dcs, int rc, uint32_t domid); =20 -static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config, +int libxl__do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config, uint32_t *domid, int restore_fd, int send_back= _fd, const libxl_domain_restore_params *params, const libxl_asyncop_how *ao_how, @@ -1960,6 
+2005,8 @@ static int do_domain_create(libxl_ctx *ctx, libxl_dom= ain_config *d_config, GCNEW(cdcs); cdcs->dcs.ao =3D ao; cdcs->dcs.guest_config =3D d_config; + cdcs->dcs.guest_domid =3D *domid; + libxl_domain_config_init(&cdcs->dcs.guest_config_saved); libxl_domain_config_copy(ctx, &cdcs->dcs.guest_config_saved, d_config); cdcs->dcs.restore_fd =3D cdcs->dcs.libxc_fd =3D restore_fd; @@ -2204,8 +2251,8 @@ int libxl_domain_create_new(libxl_ctx *ctx, libxl_dom= ain_config *d_config, const libxl_asyncprogress_how *aop_console_how) { unset_disk_colo_restore(d_config); - return do_domain_create(ctx, d_config, domid, -1, -1, NULL, - ao_how, aop_console_how); + return libxl__do_domain_create(ctx, d_config, domid, -1, -1, NULL, + ao_how, aop_console_how); } =20 int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_con= fig, @@ -2221,8 +2268,8 @@ int libxl_domain_create_restore(libxl_ctx *ctx, libxl= _domain_config *d_config, unset_disk_colo_restore(d_config); } =20 - return do_domain_create(ctx, d_config, domid, restore_fd, send_back_fd, - params, ao_how, aop_console_how); + return libxl__do_domain_create(ctx, d_config, domid, restore_fd, send_= back_fd, + params, ao_how, aop_console_how); } =20 int libxl_domain_soft_reset(libxl_ctx *ctx, diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c index f4007bbe50..b615f1fc88 100644 --- a/tools/libxl/libxl_dm.c +++ b/tools/libxl/libxl_dm.c @@ -2803,7 +2803,7 @@ static void device_model_spawn_outcome(libxl__egc *eg= c, =20 libxl__domain_build_state *state =3D dmss->build_state; =20 - if (state->saved_state) { + if (state->saved_state && !state->forked_vm) { ret2 =3D unlink(state->saved_state); if (ret2) { LOGED(ERROR, dmss->guest_domid, "%s: failed to remove device-m= odel state %s", diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c index 71cb578923..3bc7117b99 100644 --- a/tools/libxl/libxl_dom.c +++ b/tools/libxl/libxl_dom.c @@ -249,9 +249,12 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid, libxl_domain_build_info *const info =3D &d_config->b_info; libxl_ctx *ctx =3D libxl__gc_owner(gc); char *xs_domid, *con_domid; - int rc; + int rc =3D 0; uint64_t size; =20 + if ( state->forked_vm ) + goto skip_fork; + if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) !=3D 0) { LOG(ERROR, "Couldn't set max vcpu count"); return ERROR_FAIL; @@ -362,7 +365,6 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid, } } =20 - rc =3D libxl__arch_extra_memory(gc, info, &size); if (rc < 0) { LOGE(ERROR, "Couldn't get arch extra constant memory size"); @@ -374,6 +376,11 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid, return ERROR_FAIL; } =20 + rc =3D libxl__arch_domain_create(gc, d_config, domid); + if ( rc ) + goto out; + +skip_fork: xs_domid =3D xs_read(ctx->xsh, XBT_NULL, "/tool/xenstored/domid", NULL= ); state->store_domid =3D xs_domid ? 
atoi(xs_domid) : 0; free(xs_domid); @@ -385,8 +392,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid, state->store_port =3D xc_evtchn_alloc_unbound(ctx->xch, domid, state->= store_domid); state->console_port =3D xc_evtchn_alloc_unbound(ctx->xch, domid, state= ->console_domid); =20 - rc =3D libxl__arch_domain_create(gc, d_config, domid); - +out: return rc; } =20 @@ -444,6 +450,9 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid, char **ents; int i, rc; =20 + if ( state->forked_vm ) + goto skip_fork; + if (info->num_vnuma_nodes && !info->num_vcpu_soft_affinity) { rc =3D set_vnuma_affinity(gc, domid, info); if (rc) @@ -466,6 +475,7 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid, } } =20 +skip_fork: ents =3D libxl__calloc(gc, 12 + (info->max_vcpus * 2) + 2, sizeof(char= *)); ents[0] =3D "memory/static-max"; ents[1] =3D GCSPRINTF("%"PRId64, info->max_memkb); @@ -728,14 +738,16 @@ static int hvm_build_set_params(xc_interface *handle,= uint32_t domid, libxl_domain_build_info *info, int store_evtchn, unsigned long *store_mfn, int console_evtchn, unsigned long *console= _mfn, - domid_t store_domid, domid_t console_domid) + domid_t store_domid, domid_t console_domid, + bool forked_vm) { struct hvm_info_table *va_hvm; uint8_t *va_map, sum; uint64_t str_mfn, cons_mfn; int i; =20 - if (info->type =3D=3D LIBXL_DOMAIN_TYPE_HVM) { + if ( info->type =3D=3D LIBXL_DOMAIN_TYPE_HVM && !forked_vm ) + { va_map =3D xc_map_foreign_range(handle, domid, XC_PAGE_SIZE, PROT_READ | PROT_WRITE, HVM_INFO_PFN); @@ -1051,6 +1063,23 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid, struct xc_dom_image *dom =3D NULL; bool device_model =3D info->type =3D=3D LIBXL_DOMAIN_TYPE_HVM ? true := false; =20 + if ( state->forked_vm ) + { + rc =3D hvm_build_set_params(ctx->xch, domid, info, state->store_po= rt, + &state->store_mfn, state->console_port, + &state->console_mfn, state->store_domid, + state->console_domid, state->forked_vm); + + if ( rc ) + return rc; + + return xc_dom_gnttab_seed(ctx->xch, domid, true, + state->console_mfn, + state->store_mfn, + state->console_domid, + state->store_domid); + } + xc_dom_loginit(ctx->xch); =20 /* @@ -1175,7 +1204,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid, rc =3D hvm_build_set_params(ctx->xch, domid, info, state->store_port, &state->store_mfn, state->console_port, &state->console_mfn, state->store_domid, - state->console_domid); + state->console_domid, false); if (rc !=3D 0) { LOG(ERROR, "hvm build set params failed"); goto out; diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h index 5f39e44cb9..d05ff31e83 100644 --- a/tools/libxl/libxl_internal.h +++ b/tools/libxl/libxl_internal.h @@ -1374,6 +1374,7 @@ typedef struct { =20 char *saved_state; int dm_monitor_fd; + bool forked_vm; =20 libxl__file_reference pv_kernel; libxl__file_reference pv_ramdisk; @@ -4818,6 +4819,12 @@ _hidden int libxl__domain_pvcontrol(libxl__egc *egc, /* Check whether a domid is recent */ int libxl__is_domid_recent(libxl__gc *gc, uint32_t domid, bool *recent); =20 +_hidden int libxl__do_domain_create(libxl_ctx *ctx, libxl_domain_config *d= _config, + uint32_t *domid, int restore_fd, int s= end_back_fd, + const libxl_domain_restore_params *par= ams, + const libxl_asyncop_how *ao_how, + const libxl_asyncprogress_how *aop_con= sole_how); + #endif =20 /* diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index f7c473be74..2bb5e6319e 100644 --- a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -958,6 +958,7 @@ libxl_domain_config 
=3D Struct("domain_config", [ ("on_watchdog", libxl_action_on_shutdown), ("on_crash", libxl_action_on_shutdown), ("on_soft_reset", libxl_action_on_shutdown), + ("dm_restore_file", string, {'const': True}), ], dir=3DDIR_IN) =20 libxl_diskinfo =3D Struct("diskinfo", [ diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c index f8bc828e62..7a0bd62fa7 100644 --- a/tools/libxl/libxl_x86.c +++ b/tools/libxl/libxl_x86.c @@ -2,6 +2,7 @@ #include "libxl_arch.h" =20 #include +#include =20 int libxl__arch_domain_prepare_config(libxl__gc *gc, libxl_domain_config *d_config, @@ -842,6 +843,47 @@ int libxl__arch_passthrough_mode_setdefault(libxl__gc = *gc, return rc; } =20 +/* + * The parent domain is expected to be created with default settings for + * - max_evtch_port + * - max_grant_frames + * - max_maptrack_frames + */ +int libxl_domain_fork_vm(libxl_ctx *ctx, uint32_t pdomid, uint32_t max_vcp= us, uint32_t *domid, + bool allow_with_iommu) +{ + int rc; + struct xen_domctl_createdomain create =3D {0}; + create.flags |=3D XEN_DOMCTL_CDF_hvm; + create.flags |=3D XEN_DOMCTL_CDF_hap; + create.flags |=3D XEN_DOMCTL_CDF_oos_off; + create.arch.emulation_flags =3D (XEN_X86_EMU_ALL & ~XEN_X86_EMU_VPCI); + create.ssidref =3D SECINITSID_DOMU; + create.max_vcpus =3D max_vcpus; + create.max_evtchn_port =3D 1023; + create.max_grant_frames =3D LIBXL_MAX_GRANT_FRAMES_DEFAULT; + create.max_maptrack_frames =3D LIBXL_MAX_MAPTRACK_FRAMES_DEFAULT; + + if ( (rc =3D xc_domain_create(ctx->xch, domid, &create)) ) + return rc; + + if ( (rc =3D xc_memshr_fork(ctx->xch, pdomid, *domid, allow_with_iommu= )) ) + xc_domain_destroy(ctx->xch, *domid); + + return rc; +} + +int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_con= fig, + uint32_t domid, + const libxl_asyncprogress_how *aop_console= _how) +{ + return libxl__do_domain_create(ctx, d_config, &domid, -1, -1, 0, 0, ao= p_console_how); +} + +int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid) +{ + return xc_memshr_fork_reset(ctx->xch, domid); +} =20 /* * Local variables: diff --git a/tools/xl/Makefile b/tools/xl/Makefile index af4912e67a..073222233b 100644 --- a/tools/xl/Makefile +++ b/tools/xl/Makefile @@ -15,7 +15,7 @@ LDFLAGS +=3D $(PTHREAD_LDFLAGS) CFLAGS_XL +=3D $(CFLAGS_libxenlight) CFLAGS_XL +=3D -Wshadow =20 -XL_OBJS-$(CONFIG_X86) =3D xl_psr.o +XL_OBJS-$(CONFIG_X86) =3D xl_psr.o xl_forkvm.o XL_OBJS =3D xl.o xl_cmdtable.o xl_sxp.o xl_utils.o $(XL_OBJS-y) XL_OBJS +=3D xl_parse.o xl_cpupool.o xl_flask.o XL_OBJS +=3D xl_vtpm.o xl_block.o xl_nic.o xl_usb.o diff --git a/tools/xl/xl.h b/tools/xl/xl.h index 06569c6c4a..1105c34b15 100644 --- a/tools/xl/xl.h +++ b/tools/xl/xl.h @@ -31,6 +31,7 @@ struct cmd_spec { }; =20 struct domain_create { + uint32_t ddomid; /* fork launch dm for this domid */ int debug; int daemonize; int monitor; /* handle guest reboots etc */ @@ -45,6 +46,7 @@ struct domain_create { const char *config_file; char *extra_config; /* extra config string */ const char *restore_file; + const char *dm_restore_file; char *colo_proxy_script; bool userspace_colo_proxy; int migrate_fd; /* -1 means none */ @@ -128,6 +130,8 @@ int main_pciassignable_remove(int argc, char **argv); int main_pciassignable_list(int argc, char **argv); #ifndef LIBXL_HAVE_NO_SUSPEND_RESUME int main_restore(int argc, char **argv); +int main_fork_launch_dm(int argc, char **argv); +int main_fork_reset(int argc, char **argv); int main_migrate_receive(int argc, char **argv); int main_save(int argc, char **argv); int main_migrate(int argc, char **argv); @@ 
-212,6 +216,7 @@ int main_psr_cat_cbm_set(int argc, char **argv); int main_psr_cat_show(int argc, char **argv); int main_psr_mba_set(int argc, char **argv); int main_psr_mba_show(int argc, char **argv); +int main_fork_vm(int argc, char **argv); #endif int main_qemu_monitor_command(int argc, char **argv); =20 diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c index 08335394e5..ef634abf32 100644 --- a/tools/xl/xl_cmdtable.c +++ b/tools/xl/xl_cmdtable.c @@ -187,6 +187,21 @@ struct cmd_spec cmd_table[] =3D { "Restore a domain from a saved state", "- for internal use only", }, +#if defined(__i386__) || defined(__x86_64__) + { "fork-vm", + &main_fork_vm, 0, 1, + "Fork a domain from the running parent domid. Experimental. Most con= fig settings must match parent.", + "[options] ", + "-h Print this help.\n" + "-C Use config file for VM fork.\n" + "-Q Use qemu save file for VM fork.\n" + "--launch-dm Launch device model (QEMU) for VM fork= .\n" + "--fork-reset Reset VM fork.\n" + "--max-vcpus Specify max-vcpus matching the parent = domain when not launching dm\n" + "-p Do not unpause fork VM after operation= .\n" + "-d Enable debug messages.\n" + }, +#endif #endif { "dump-core", &main_dump_core, 0, 1, diff --git a/tools/xl/xl_forkvm.c b/tools/xl/xl_forkvm.c new file mode 100644 index 0000000000..d40518fc59 --- /dev/null +++ b/tools/xl/xl_forkvm.c @@ -0,0 +1,149 @@ +/* + * Copyright 2020 Intel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as published + * by the Free Software Foundation; version 2.1 only. with the special + * exception on linking described in file LICENSE. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include "xl.h" +#include "xl_utils.h" +#include "xl_parse.h" + +int main_fork_vm(int argc, char **argv) +{ + int rc, debug =3D 0; + uint32_t domid_in =3D INVALID_DOMID, domid_out =3D INVALID_DOMID; + int launch_dm =3D 1; + bool reset =3D 0; + bool pause =3D 0; + bool allow_iommu =3D 0; + const char *config_file =3D NULL; + const char *dm_restore_file =3D NULL; + uint32_t max_vcpus =3D 0; + + int opt; + static struct option opts[] =3D { + {"launch-dm", 1, 0, 'l'}, + {"fork-reset", 0, 0, 'r'}, + {"max-vcpus", 1, 0, 'm'}, + {"allow-iommu", 1, 0, 'i'}, + COMMON_LONG_OPTS + }; + + SWITCH_FOREACH_OPT(opt, "phdC:Q:l:rm:i", opts, "fork-vm", 1) { + case 'd': + debug =3D 1; + break; + case 'p': + pause =3D 1; + break; + case 'm': + max_vcpus =3D atoi(optarg); + break; + case 'C': + config_file =3D optarg; + break; + case 'Q': + dm_restore_file =3D optarg; + break; + case 'l': + if ( !strcmp(optarg, "no") ) + launch_dm =3D 0; + if ( !strcmp(optarg, "yes") ) + launch_dm =3D 1; + if ( !strcmp(optarg, "late") ) + launch_dm =3D 2; + break; + case 'r': + reset =3D 1; + break; + case 'i': + allow_iommu =3D 1; + break; + default: + fprintf(stderr, "Unimplemented option(s)\n"); + return EXIT_FAILURE; + } + + if (argc-optind =3D=3D 1) { + domid_in =3D atoi(argv[optind]); + } else { + help("fork-vm"); + return EXIT_FAILURE; + } + + if (launch_dm && (!config_file || !dm_restore_file)) { + fprintf(stderr, "Currently you must provide both -C and -Q options= \n"); + return EXIT_FAILURE; + } + + if (reset) { + domid_out =3D domid_in; + if (libxl_domain_fork_reset(ctx, domid_in) =3D=3D EXIT_FAILURE) + return EXIT_FAILURE; + } + + if (launch_dm =3D=3D 2 || reset) { + domid_out =3D domid_in; + rc =3D EXIT_SUCCESS; + } else { + if ( !max_vcpus ) + { + fprintf(stderr, "Currently you must parent's max_vcpu for this= option\n"); + return EXIT_FAILURE; + } + + rc =3D libxl_domain_fork_vm(ctx, domid_in, max_vcpus, &domid_out, = allow_iommu); + } + + if (rc =3D=3D EXIT_SUCCESS) { + if ( launch_dm ) { + struct domain_create dom_info; + memset(&dom_info, 0, sizeof(dom_info)); + dom_info.ddomid =3D domid_out; + dom_info.dm_restore_file =3D dm_restore_file; + dom_info.debug =3D debug; + dom_info.paused =3D pause; + dom_info.config_file =3D config_file; + dom_info.migrate_fd =3D -1; + dom_info.send_back_fd =3D -1; + rc =3D create_domain(&dom_info) < 0 ? EXIT_FAILURE : EXIT_SUCC= ESS; + } else if ( !pause ) + rc =3D libxl_domain_unpause(ctx, domid_out, NULL); + } + + if (rc =3D=3D EXIT_SUCCESS) + fprintf(stderr, "fork-vm command successfully returned domid: %u\n= ", domid_out); + else if ( domid_out !=3D INVALID_DOMID ) + libxl_domain_destroy(ctx, domid_out, 0); + + return rc; +} + +/* + * Local variables: + * mode: C + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c index 17b4514c94..c64123d0a1 100644 --- a/tools/xl/xl_vmcontrol.c +++ b/tools/xl/xl_vmcontrol.c @@ -676,6 +676,12 @@ int create_domain(struct domain_create *dom_info) =20 int restoring =3D (restore_file || (migrate_fd >=3D 0)); =20 +#if defined(__i386__) || defined(__x86_64__) + /* VM forking */ + uint32_t ddomid =3D dom_info->ddomid; // launch dm for this domain iff= set + const char *dm_restore_file =3D dom_info->dm_restore_file; +#endif + libxl_domain_config_init(&d_config); =20 if (restoring) { @@ -928,6 +934,14 @@ start: * restore/migrate-receive it again. 
*/ restoring =3D 0; +#if defined(__i386__) || defined(__x86_64__) + } else if ( ddomid ) { + d_config.dm_restore_file =3D dm_restore_file; + ret =3D libxl_domain_fork_launch_dm(ctx, &d_config, ddomid, + autoconnect_console_how); + domid =3D ddomid; + ddomid =3D INVALID_DOMID; +#endif } else if (domid_soft_reset !=3D INVALID_DOMID) { /* Do soft reset. */ ret =3D libxl_domain_soft_reset(ctx, &d_config, domid_soft_reset, --=20 2.20.1
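(Taken together, the series targets a fork-and-reset fuzzing loop. The sketch
below shows the intended libxc-level usage; run_one_testcase() is a
placeholder for harness-specific logic, the domain IDs and iteration count
are examples, and the fork domain is assumed to have been created empty
beforehand, e.g. via libxl_domain_fork_vm() or xl fork-vm. It assumes a Xen
development environment providing xenctrl.h.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <xenctrl.h>

/* Placeholder for harness-specific work done inside the fork. */
static void run_one_testcase(uint32_t fork_domid)
{
    printf("ran a test case in domain %u\n", fork_domid);
}

int fuzz_with_fork(uint32_t parent_domid, uint32_t fork_domid, int iterations)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    int i, rc;

    if ( !xch )
        return -1;

    /* 'fork_domid' must already name an empty HVM/HAP domain. */
    rc = xc_memshr_fork(xch, parent_domid, fork_domid, false);

    for ( i = 0; !rc && i < iterations; i++ )
    {
        run_one_testcase(fork_domid);

        /*
         * Per the new xenctrl.h comment, resetting is only intended for
         * short-lived forks that haven't acquired much memory; otherwise
         * creating a fresh fork may be faster.
         */
        rc = xc_memshr_fork_reset(xch, fork_domid);
    }

    xc_interface_close(xch);
    return rc;
}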