From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Mon, 30 Mar 2020 08:02:09 -0700
Subject: [Xen-devel] [PATCH v13 2/3] x86/mem_sharing: reset a fork
Cc: Tamas K Lengyel, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    Stefano Stabellini, Jan Beulich, Julien Grall, Roger Pau Monné

Implement a hypercall that allows a fork to shed all memory that got
allocated for it during its execution and re-load its vCPU context from
the parent VM. This allows the forked VM to be reset to the same state
the parent VM is in, faster than creating a new fork would be.
Measurements show about a 2x speedup during normal fuzzing operations.
Performance may vary depending on how much memory got allocated for the
forked VM. If it has been completely deduplicated from the parent VM,
then creating a new fork would likely be more performant.

Signed-off-by: Tamas K Lengyel
Reviewed-by: Roger Pau Monné
---
v12: remove continuation & add comment back
     address style issues pointed out by Jan
---
 xen/arch/x86/mm/mem_sharing.c | 77 +++++++++++++++++++++++++++++++++++
 xen/include/public/memory.h   |  1 +
 2 files changed, 78 insertions(+)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index faa79011c3..cc09f9c84f 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1758,6 +1758,61 @@ static int fork(struct domain *cd, struct domain *d)
     return rc;
 }
 
+/*
+ * The fork reset operation is intended to be used on short-lived forks only.
+ * There is no hypercall continuation operation implemented for this reason.
+ * For forks that obtain a larger memory footprint it is likely going to be
+ * more performant to create a new fork instead of resetting an existing one.
+
+ * TODO: In case this hypercall would become useful on forks with larger memory
+ * footprints the hypercall continuation should be implemented (or if this
+ * feature needs to become "stable").
+ */
+static int mem_sharing_fork_reset(struct domain *d, struct domain *pd)
+{
+    int rc;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct page_info *page, *tmp;
+
+    domain_pause(d);
+
+    /* need recursive lock because we will free pages */
+    spin_lock_recursive(&d->page_alloc_lock);
+    page_list_for_each_safe(page, tmp, &d->page_list)
+    {
+        p2m_type_t p2mt;
+        p2m_access_t p2ma;
+        mfn_t mfn = page_to_mfn(page);
+        gfn_t gfn = mfn_to_gfn(d, mfn);
+
+        mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &p2mt, &p2ma,
+                                    0, NULL, false);
+
+        /* only reset pages that are sharable */
+        if ( !p2m_is_sharable(p2mt) )
+            continue;
+
+        /* take an extra reference or just skip if can't for whatever reason */
+        if ( !get_page(page, d) )
+            continue;
+
+        /* forked memory is 4k, not splitting large pages so this must work */
+        rc = p2m->set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
+                            p2m_invalid, p2m_access_rwx, -1);
+        ASSERT(!rc);
+
+        put_page_alloc_ref(page);
+        put_page(page);
+    }
+    spin_unlock_recursive(&d->page_alloc_lock);
+
+    rc = copy_settings(d, pd);
+
+    domain_unpause(d);
+
+    return rc;
+}
+
 int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
     int rc;
@@ -2048,6 +2103,28 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         break;
     }
 
+    case XENMEM_sharing_op_fork_reset:
+    {
+        struct domain *pd;
+
+        rc = -EINVAL;
+        if ( mso.u.fork.pad[0] || mso.u.fork.pad[1] || mso.u.fork.pad[2] )
+            goto out;
+
+        rc = -ENOSYS;
+        if ( !d->parent )
+            goto out;
+
+        rc = rcu_lock_live_remote_domain_by_id(d->parent->domain_id, &pd);
+        if ( rc )
+            goto out;
+
+        rc = mem_sharing_fork_reset(d, pd);
+
+        rcu_unlock_domain(pd);
+        break;
+    }
+
     default:
         rc = -ENOSYS;
         break;
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 5ee4e0da12..d36d64b8dc 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -483,6 +483,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
 #define XENMEM_sharing_op_audit             7
 #define XENMEM_sharing_op_range_share       8
 #define XENMEM_sharing_op_fork              9
+#define XENMEM_sharing_op_fork_reset        10
 
 #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
 #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
-- 
2.20.1
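
For context, below is a minimal caller-side sketch of how the new
XENMEM_sharing_op_fork_reset subop might be driven between fuzzing
iterations. It is an illustration only: issue_memshr_op() is a
hypothetical stand-in for whatever mechanism the toolstack uses to issue
XENMEM_sharing_op (the actual libxc plumbing belongs to the separate
toolstack patch in this series); only the xen_mem_sharing_op_t layout
used here comes from the public interface.

/*
 * Caller-side sketch (illustration only, not part of this patch).
 * issue_memshr_op() is a hypothetical transport helper standing in for
 * however the toolstack issues XENMEM_sharing_op.
 */
#include <string.h>

#include <xen/memory.h>   /* xen_mem_sharing_op_t, XENMEM_sharing_op_* */

int issue_memshr_op(xen_mem_sharing_op_t *mso);   /* hypothetical */

static int fork_reset_sketch(domid_t fork_domid)
{
    xen_mem_sharing_op_t mso;

    /*
     * Zero the whole structure: the hypervisor returns -EINVAL if any of
     * mso.u.fork.pad[] is non-zero.
     */
    memset(&mso, 0, sizeof(mso));

    mso.op = XENMEM_sharing_op_fork_reset;  /* subop added by this patch */
    mso.domain = fork_domid;                /* the fork itself, not the parent */

    /*
     * No parent id is needed: mem_sharing_memop() finds it via d->parent
     * and returns -ENOSYS if the target domain is not a fork.
     */
    return issue_memshr_op(&mso);
}

A fuzzing loop would create the fork once and then call something like the
above before feeding each new input, which is where the roughly 2x speedup
over creating a fresh fork comes from.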