From: Andrii Anisov <andrii.anisov@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Andrii Anisov, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
    Jan Beulich, Roger Pau Monné
Date: Thu, 30 May 2019 11:44:25 +0300
Message-Id: <1559205867-19597-2-git-send-email-andrii.anisov@gmail.com>
In-Reply-To: <1559205867-19597-1-git-send-email-andrii.anisov@gmail.com>
References: <1559205867-19597-1-git-send-email-andrii.anisov@gmail.com>
Subject: [Xen-devel] [RESEND PATCH v3] xen: introduce VCPUOP_register_runstate_phys_memory_area hypercall

From: Andrii Anisov

The existing interface for registering the runstate area by its virtual
address is prone to issues which became more obvious with KPTI enabled
in guests. Those issues stem from the fact that the guest can be
interrupted by the hypervisor at any time, and there is no guarantee
that the registered virtual address is translatable with the guest page
tables in use at that moment. Before KPTI such a situation was only
possible when the guest was caught in the middle of page-table
manipulation (e.g. superpage shattering); with KPTI it also arises
whenever the guest is running userspace, so it has a pretty high
probability. It was therefore agreed [1] to register the runstate area
by its guest physical address, so that its mapping is permanent from
the hypervisor's point of view.

The new hypercall employs the same vcpu_register_runstate_memory_area
structure for the interface, but requires the registered area not to
cross a page boundary.
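
For illustration only (this sketch is not part of the patch), a guest
could register its runstate area through the new interface roughly as
below. HYPERVISOR_vcpu_op and virt_to_phys are assumed from the guest
environment (e.g. a Linux kernel); the 64-byte alignment is one simple
way to satisfy the page-boundary requirement, since the structure is
smaller than 64 bytes and 64 divides PAGE_SIZE:

    #include <xen/interface/vcpu.h>

    /* Per-vCPU runstate area; the alignment guarantees it cannot
     * cross a page boundary. */
    static struct vcpu_runstate_info runstate
        __attribute__((__aligned__(64)));

    static int register_runstate_phys(unsigned int vcpu)
    {
        struct vcpu_register_runstate_memory_area area;

        /* The new hypercall interprets addr.p as a guest physical
         * address rather than a virtual one. */
        area.addr.p = virt_to_phys(&runstate);

        return HYPERVISOR_vcpu_op(VCPUOP_register_runstate_phys_memory_area,
                                  vcpu, &area);
    }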

[1] https://lists.xenproject.org/archives/html/xen-devel/2019-02/msg00416.html

Signed-off-by: Andrii Anisov
---
 xen/arch/arm/domain.c        |  58 ++++++++++++++++++---
 xen/arch/x86/domain.c        |  99 ++++++++++++++++++++++++++++++++---
 xen/arch/x86/x86_64/domain.c |  16 +++++-
 xen/common/domain.c          | 121 +++++++++++++++++++++++++++++++++++++++----
 xen/include/public/vcpu.h    |  15 ++++++
 xen/include/xen/sched.h      |  28 +++++++---
 6 files changed, 306 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ff330b3..ecedf1c 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -274,17 +274,15 @@ static void ctxt_switch_to(struct vcpu *n)
     virt_timer_restore(n);
 }
 
-/* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+static void update_runstate_by_gvaddr(struct vcpu *v)
 {
     void __user *guest_handle = NULL;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+    ASSERT(!guest_handle_is_null(runstate_guest_virt(v)));
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
+        guest_handle = &v->runstate_guest.virt.p->state_entry_time + 1;
         guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -292,7 +290,7 @@ static void update_runstate_area(struct vcpu *v)
         smp_wmb();
     }
 
-    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+    __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
 
     if ( guest_handle )
     {
@@ -303,6 +301,53 @@ static void update_runstate_area(struct vcpu *v)
     }
 }
 
+static void update_runstate_by_gpaddr(struct vcpu *v)
+{
+    struct vcpu_runstate_info *runstate =
+        (struct vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    memcpy(v->runstate_guest.phys, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+}
+
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+static void update_runstate_area(struct vcpu *v)
+{
+    if ( xchg(&v->runstate_in_use, 1) )
+        return;
+
+    switch ( v->runstate_guest_type )
+    {
+    case RUNSTATE_NONE:
+        break;
+
+    case RUNSTATE_VADDR:
+        update_runstate_by_gvaddr(v);
+        break;
+
+    case RUNSTATE_PADDR:
+        update_runstate_by_gpaddr(v);
+        break;
+    }
+
+    xchg(&v->runstate_in_use, 0);
+}
+
 static void schedule_tail(struct vcpu *prev)
 {
     ASSERT(prev != current);
@@ -998,6 +1043,7 @@ long do_arm_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) a
     {
     case VCPUOP_register_vcpu_info:
     case VCPUOP_register_runstate_memory_area:
+    case VCPUOP_register_runstate_phys_memory_area:
         return do_vcpu_op(cmd, vcpuid, arg);
     default:
         return -EINVAL;
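
The writer side above follows a seqlock-like discipline: set
XEN_RUNSTATE_UPDATE in state_entry_time, barrier, copy the data,
barrier, clear the flag. Sketched below (again, not part of the patch)
is the matching reader a guest would use; smp_rmb() is assumed from the
guest environment and pairs with the hypervisor's smp_wmb():

    /* Snapshot the shared runstate area consistently, assuming the
     * runstate_update_flag VM assist is enabled and s points at the
     * registered area. */
    static void runstate_snapshot(const volatile struct vcpu_runstate_info *s,
                                  struct vcpu_runstate_info *snap)
    {
        uint64_t t1, t2;

        do {
            t1 = s->state_entry_time;
            smp_rmb();
            *snap = *(const struct vcpu_runstate_info *)s;
            smp_rmb();
            t2 = s->state_entry_time;
            /* Retry while an update is in flight or one completed
             * between the two reads. */
        } while ( (t1 & XEN_RUNSTATE_UPDATE) || t1 != t2 );

        snap->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
    }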

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ac960dd..fe71776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1566,22 +1566,21 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
 }
 
 /* Update per-VCPU guest runstate shared memory area (if registered). */
-bool update_runstate_area(struct vcpu *v)
+static bool update_runstate_by_gvaddr(struct vcpu *v)
 {
     bool rc;
     struct guest_memory_policy policy = { .nested_guest_mode = false };
     void __user *guest_handle = NULL;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return true;
+    ASSERT(!guest_handle_is_null(runstate_guest_virt(v)));
 
     update_guest_memory_policy(v, &policy);
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
         guest_handle = has_32bit_shinfo(v->domain)
-            ? &v->runstate_guest.compat.p->state_entry_time + 1
-            : &v->runstate_guest.native.p->state_entry_time + 1;
+            ? &v->runstate_guest.virt.compat.p->state_entry_time + 1
+            : &v->runstate_guest.virt.native.p->state_entry_time + 1;
         guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -1594,11 +1593,11 @@ bool update_runstate_area(struct vcpu *v)
         struct compat_vcpu_runstate_info info;
 
         XLAT_vcpu_runstate_info(&info, &v->runstate);
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->runstate_guest.virt.compat, &info, 1);
         rc = true;
     }
     else
-        rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
+        rc = __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1) !=
             sizeof(v->runstate);
 
     if ( guest_handle )
@@ -1614,6 +1613,90 @@ bool update_runstate_area(struct vcpu *v)
     return rc;
 }
 
+static bool update_runstate_by_gpaddr_native(struct vcpu *v)
+{
+    struct vcpu_runstate_info *runstate =
+        (struct vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    memcpy(v->runstate_guest.phys, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+
+    return true;
+}
+
+static bool update_runstate_by_gpaddr_compat(struct vcpu *v)
+{
+    struct compat_vcpu_runstate_info *runstate =
+        (struct compat_vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    {
+        struct compat_vcpu_runstate_info info;
+        XLAT_vcpu_runstate_info(&info, &v->runstate);
+        memcpy(v->runstate_guest.phys, &info, sizeof(info));
+    }
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+
+    return true;
+}
+
+bool update_runstate_area(struct vcpu *v)
+{
+    bool rc = true;
+
+    if ( xchg(&v->runstate_in_use, 1) )
+        return rc;
+
+    switch ( v->runstate_guest_type )
+    {
+    case RUNSTATE_NONE:
+        break;
+
+    case RUNSTATE_VADDR:
+        rc = update_runstate_by_gvaddr(v);
+        break;
+
+    case RUNSTATE_PADDR:
+        if ( has_32bit_shinfo(v->domain) )
+            rc = update_runstate_by_gpaddr_compat(v);
+        else
+            rc = update_runstate_by_gpaddr_native(v);
+        break;
+    }
+
+    xchg(&v->runstate_in_use, 0);
+    return rc;
+}
+
 static void _update_runstate_area(struct vcpu *v)
 {
     if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
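
The runstate_in_use flag is used in two different ways: the update
paths (called from context switch) merely try-lock and skip the update
if they lose, while the registration and teardown paths must win and
therefore spin. Schematically (a sketch, relying only on xchg()
returning the previous value; function names here are hypothetical):

    /* Context-switch path: never spin, just skip a contended update. */
    static void try_update(struct vcpu *v)
    {
        if ( xchg(&v->runstate_in_use, 1) )    /* old value 1: held */
            return;                            /* lose the race: skip */
        /* ... write the runstate area ... */
        xchg(&v->runstate_in_use, 0);          /* release */
    }

    /* (Re-)registration/teardown path: must take ownership, so spin. */
    static void locked_discard(struct vcpu *v)
    {
        while ( xchg(&v->runstate_in_use, 1) )
            cpu_relax();
        /* ... discard the old area, install the new one ... */
        xchg(&v->runstate_in_use, 0);
    }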

diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc..85d0072 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -12,6 +12,8 @@ CHECK_vcpu_get_physid;
 #undef xen_vcpu_get_physid
 
+extern void discard_runstate_area(struct vcpu *v);
+
 int
 arch_compat_vcpu_op(
     int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
@@ -35,8 +37,16 @@ arch_compat_vcpu_op(
             !compat_handle_okay(area.addr.h, 1) )
             break;
 
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+
         rc = 0;
-        guest_from_compat_handle(v->runstate_guest.compat, area.addr.h);
+
+        guest_from_compat_handle(v->runstate_guest.virt.compat,
+                                 area.addr.h);
+
+        v->runstate_guest_type = RUNSTATE_VADDR;
 
         if ( v == current )
         {
@@ -49,7 +59,9 @@ arch_compat_vcpu_op(
             vcpu_runstate_get(v, &runstate);
             XLAT_vcpu_runstate_info(&info, &runstate);
         }
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->runstate_guest.virt.compat, &info, 1);
+
+        xchg(&v->runstate_in_use, 0);
 
         break;
     }
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 90c6607..d276b87 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -698,6 +698,74 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
     return 0;
 }
 
+static void unmap_runstate_area(struct vcpu *v)
+{
+    mfn_t mfn;
+
+    if ( !v->runstate_guest.phys )
+        return;
+
+    mfn = domain_page_map_to_mfn(v->runstate_guest.phys);
+
+    unmap_domain_page_global((void *)
+                             ((unsigned long)v->runstate_guest.phys &
+                              PAGE_MASK));
+
+    v->runstate_guest.phys = NULL;
+    put_page_and_type(mfn_to_page(mfn));
+}
+
+static int map_runstate_area(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area *area)
+{
+    unsigned long offset = area->addr.p & ~PAGE_MASK;
+    gfn_t gfn = gaddr_to_gfn(area->addr.p);
+    struct domain *d = v->domain;
+    void *mapping;
+    struct page_info *page;
+    size_t size = sizeof(struct vcpu_runstate_info);
+
+    if ( offset > (PAGE_SIZE - size) )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn_x(gfn), NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    mapping = __map_domain_page_global(page);
+
+    if ( mapping == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    v->runstate_guest.phys = mapping + offset;
+
+    return 0;
+}
+
+void discard_runstate_area(struct vcpu *v)
+{
+    if ( v->runstate_guest_type == RUNSTATE_PADDR )
+        unmap_runstate_area(v);
+
+    v->runstate_guest_type = RUNSTATE_NONE;
+}
+
+static void discard_runstate_area_locked(struct vcpu *v)
+{
+    while ( xchg(&v->runstate_in_use, 1) );
+    discard_runstate_area(v);
+    xchg(&v->runstate_in_use, 0);
+}
+
 int domain_kill(struct domain *d)
 {
     int rc = 0;
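
map_runstate_area() enforces the no-page-crossing rule purely
arithmetically: with offset = gaddr & ~PAGE_MASK, the area
[offset, offset + size) stays within one page exactly when
offset <= PAGE_SIZE - size. A standalone check of that arithmetic
(assuming 4 KiB pages and a 48-byte vcpu_runstate_info, the usual
64-bit layout):

    #include <assert.h>
    #include <stddef.h>

    #define PAGE_SIZE 4096UL

    /* Mirrors the boundary check in map_runstate_area(). */
    static int fits_in_one_page(unsigned long gaddr, size_t size)
    {
        unsigned long offset = gaddr & (PAGE_SIZE - 1);

        return offset <= PAGE_SIZE - size;
    }

    int main(void)
    {
        const size_t size = 48;  /* assumed sizeof(vcpu_runstate_info) */

        assert(fits_in_one_page(0x1000, size));  /* page start: fits */
        assert(fits_in_one_page(0x1fd0, size));  /* ends exactly at 0x2000 */
        assert(!fits_in_one_page(0x1fd1, size)); /* crosses into next page */
        return 0;
    }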

@@ -734,7 +802,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            discard_runstate_area_locked(v);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings
          * have to be put before we call put_domain. */
@@ -1188,7 +1259,7 @@ int domain_soft_reset(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        set_xen_guest_handle(runstate_guest(v), NULL);
+        discard_runstate_area_locked(v);
         unmap_vcpu_info(v);
     }
 
@@ -1518,18 +1589,50 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
 
         rc = 0;
-        runstate_guest(v) = area.addr.h;
 
-        if ( v == current )
-        {
-            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
-        }
-        else
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+
+        if ( !guest_handle_is_null(area.addr.h) )
         {
-            vcpu_runstate_get(v, &runstate);
-            __copy_to_guest(runstate_guest(v), &runstate, 1);
+            runstate_guest_virt(v) = area.addr.h;
+            v->runstate_guest_type = RUNSTATE_VADDR;
+
+            if ( v == current )
+            {
+                __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
+            }
+            else
+            {
+                vcpu_runstate_get(v, &runstate);
+                __copy_to_guest(runstate_guest_virt(v), &runstate, 1);
+            }
         }
 
+        xchg(&v->runstate_in_use, 0);
+
+        break;
+    }
+
+    case VCPUOP_register_runstate_phys_memory_area:
+    {
+        struct vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area, arg, 1) )
+            break;
+
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+
+        rc = map_runstate_area(v, &area);
+        if ( !rc )
+            v->runstate_guest_type = RUNSTATE_PADDR;
+
+        xchg(&v->runstate_in_use, 0);
+
         break;
     }
 
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 3623af9..d7da4a3 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -235,6 +235,21 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+/*
+ * Register a shared memory area from which the guest may obtain its own
+ * runstate information without needing to execute a hypercall.
+ * Notes:
+ *  1. The registered address must be a guest physical address.
+ *  2. The registered runstate area must not cross a page boundary.
+ *  3. Only one shared area may be registered per VCPU. The shared area is
+ *     updated by the hypervisor each time the VCPU is scheduled. Thus
+ *     runstate.state will always be RUNSTATE_running and
+ *     runstate.state_entry_time will indicate the system time at which the
+ *     VCPU was last scheduled to run.
+ * @extra_arg == pointer to vcpu_register_runstate_memory_area structure.
+ */
+#define VCPUOP_register_runstate_phys_memory_area 14
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
 /*
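
Once registered, the area lets a guest derive scheduling statistics
without further hypercalls; for example, time stolen by the hypervisor
is commonly accounted as everything spent runnable-but-not-running or
offline. A small sketch over a consistent snapshot (field and constant
names taken from the public vcpu.h; the snapshot is obtained as shown
earlier):

    /* Stolen time in ns, derived from a consistent runstate snapshot. */
    static uint64_t stolen_ns(const struct vcpu_runstate_info *snap)
    {
        return snap->time[RUNSTATE_runnable] + snap->time[RUNSTATE_offline];
    }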

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2201fac..6c8de8f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -163,17 +163,31 @@ struct vcpu
     void            *sched_priv;    /* scheduler-specific data */
 
     struct vcpu_runstate_info runstate;
+
+    enum {
+        RUNSTATE_NONE = 0,
+        RUNSTATE_PADDR = 1,
+        RUNSTATE_VADDR = 2,
+    } runstate_guest_type;
+
+    unsigned long runstate_in_use;
+
+    union
+    {
 #ifndef CONFIG_COMPAT
-# define runstate_guest(v) ((v)->runstate_guest)
-    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt)
+        XEN_GUEST_HANDLE(vcpu_runstate_info_t) virt; /* guest address */
 #else
-# define runstate_guest(v) ((v)->runstate_guest.native)
-    union {
-        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
-        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
-    } runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt.native)
+        union {
+            XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
+            XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
+        } virt; /* guest address */
 #endif
 
+        void *phys;
+    } runstate_guest;
+
     /* last time when vCPU is scheduled out */
     uint64_t last_run_time;
 
-- 
2.7.4