From: Andrii Anisov
To: xen-devel@lists.xenproject.org
Date: Fri, 24 May 2019 21:12:56 +0300
Message-Id: <1558721577-13958-3-git-send-email-andrii.anisov@gmail.com>
In-Reply-To: <1558721577-13958-1-git-send-email-andrii.anisov@gmail.com>
References: <1558721577-13958-1-git-send-email-andrii.anisov@gmail.com>
Subject: [Xen-devel] [PATCH v3] xen: introduce VCPUOP_register_runstate_phys_memory_area hypercall
Cc: Stefano Stabellini, Andrii Anisov, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich, Wei Liu, Roger Pau Monné

From: Andrii Anisov

The existing interface for registering the runstate area by its virtual
address is prone to issues which became more obvious with KPTI enablement
in guests. The root of those issues is that the guest can be interrupted
by the hypervisor at any time, and there is no guarantee that the
registered virtual address is translatable with the guest's currently
active page tables. Before KPTI, such a situation was only possible if
the guest was caught in the middle of page-table manipulation (e.g.
superpage shattering). With KPTI it also happens whenever the guest is
running userspace, so it occurs with fairly high probability. Therefore
it was agreed to register the runstate area by the guest's physical
address, so that its mapping is permanent from the hypervisor's point of
view [1].

The new hypercall employs the same vcpu_register_runstate_memory_area
structure for its interface, but requires the registered area not to
cross a page boundary.

[1] https://lists.xenproject.org/archives/html/xen-devel/2019-02/msg00416.html
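For illustration only (not part of this patch): a guest kernel could
register its runstate area through the new interface roughly as follows.
HYPERVISOR_vcpu_op() and virt_to_phys() stand in for whatever hypercall
and address-translation primitives the guest provides; only the hypercall
number and the addr.p field come from this patch.

    /* Runstate area; assumed to be allocated so that it does not cross a
     * page boundary (the structure is well under a page in size). */
    static struct vcpu_runstate_info runstate_area __attribute__((aligned(64)));

    static int register_runstate_phys(unsigned int vcpu)
    {
        struct vcpu_register_runstate_memory_area area;

        /* The new hypercall interprets addr.p as a guest physical address. */
        area.addr.p = virt_to_phys(&runstate_area);

        return HYPERVISOR_vcpu_op(VCPUOP_register_runstate_phys_memory_area,
                                  vcpu, &area);
    }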
Signed-off-by: Andrii Anisov
---
 xen/arch/arm/domain.c        |  58 ++++++++++++++++++---
 xen/arch/x86/domain.c        |  99 ++++++++++++++++++++++++++++++++---
 xen/arch/x86/x86_64/domain.c |  16 +++++-
 xen/common/domain.c          | 121 ++++++++++++++++++++++++++++++++++++----
 xen/include/public/vcpu.h    |  15 ++++++
 xen/include/xen/sched.h      |  28 +++++++---
 6 files changed, 306 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ff330b3..ecedf1c 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -274,17 +274,15 @@ static void ctxt_switch_to(struct vcpu *n)
     virt_timer_restore(n);
 }
 
-/* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+static void update_runstate_by_gvaddr(struct vcpu *v)
 {
     void __user *guest_handle = NULL;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+    ASSERT(!guest_handle_is_null(runstate_guest_virt(v)));
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
+        guest_handle = &v->runstate_guest.virt.p->state_entry_time + 1;
         guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -292,7 +290,7 @@ static void update_runstate_area(struct vcpu *v)
         smp_wmb();
     }
 
-    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+    __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
 
     if ( guest_handle )
     {
@@ -303,6 +301,53 @@ static void update_runstate_area(struct vcpu *v)
     }
 }
 
+static void update_runstate_by_gpaddr(struct vcpu *v)
+{
+    struct vcpu_runstate_info *runstate =
+        (struct vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    memcpy(v->runstate_guest.phys, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+}
+
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+static void update_runstate_area(struct vcpu *v)
+{
+    if ( xchg(&v->runstate_in_use, 1) )
+        return;
+
+    switch ( v->runstate_guest_type )
+    {
+    case RUNSTATE_NONE:
+        break;
+
+    case RUNSTATE_VADDR:
+        update_runstate_by_gvaddr(v);
+        break;
+
+    case RUNSTATE_PADDR:
+        update_runstate_by_gpaddr(v);
+        break;
+    }
+
+    xchg(&v->runstate_in_use, 0);
+}
+
 static void schedule_tail(struct vcpu *prev)
 {
     ASSERT(prev != current);
@@ -998,6 +1043,7 @@ long do_arm_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) a
 {
     case VCPUOP_register_vcpu_info:
     case VCPUOP_register_runstate_memory_area:
+    case VCPUOP_register_runstate_phys_memory_area:
         return do_vcpu_op(cmd, vcpuid, arg);
     default:
         return -EINVAL;
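A note on the XEN_RUNSTATE_UPDATE handshake used above: the hypervisor
sets the flag in the shared area, writes the new runstate, then clears
the flag, with write barriers in between. A guest consumer can therefore
take a consistent snapshot by retrying while the flag is set or the entry
time moved underneath it. A minimal reader sketch (illustrative; the
function name and barrier helpers are assumptions, not part of this
patch):

    static void runstate_snapshot(const volatile struct vcpu_runstate_info *shared,
                                  struct vcpu_runstate_info *snap)
    {
        uint64_t entry_time;

        do {
            entry_time = shared->state_entry_time;
            smp_rmb();
            /* Copy the whole area, then confirm it was stable. */
            memcpy(snap, (const void *)shared, sizeof(*snap));
            smp_rmb();
        } while ( (snap->state_entry_time & XEN_RUNSTATE_UPDATE) ||
                  snap->state_entry_time != entry_time );
    }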
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ac960dd..fe71776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1566,22 +1566,21 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
 }
 
 /* Update per-VCPU guest runstate shared memory area (if registered). */
-bool update_runstate_area(struct vcpu *v)
+static bool update_runstate_by_gvaddr(struct vcpu *v)
 {
     bool rc;
     struct guest_memory_policy policy = { .nested_guest_mode = false };
     void __user *guest_handle = NULL;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return true;
+    ASSERT(!guest_handle_is_null(runstate_guest_virt(v)));
 
     update_guest_memory_policy(v, &policy);
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
         guest_handle = has_32bit_shinfo(v->domain)
-            ? &v->runstate_guest.compat.p->state_entry_time + 1
-            : &v->runstate_guest.native.p->state_entry_time + 1;
+            ? &v->runstate_guest.virt.compat.p->state_entry_time + 1
+            : &v->runstate_guest.virt.native.p->state_entry_time + 1;
         guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -1594,11 +1593,11 @@ bool update_runstate_area(struct vcpu *v)
         struct compat_vcpu_runstate_info info;
 
         XLAT_vcpu_runstate_info(&info, &v->runstate);
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->runstate_guest.virt.compat, &info, 1);
         rc = true;
     }
     else
-        rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
+        rc = __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1) !=
             sizeof(v->runstate);
 
     if ( guest_handle )
@@ -1614,6 +1613,92 @@ bool update_runstate_area(struct vcpu *v)
     return rc;
 }
 
+static bool update_runstate_by_gpaddr_native(struct vcpu *v)
+{
+    struct vcpu_runstate_info *runstate =
+        (struct vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    memcpy(v->runstate_guest.phys, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+
+    return true;
+}
+
+static bool update_runstate_by_gpaddr_compat(struct vcpu *v)
+{
+    struct compat_vcpu_runstate_info *runstate =
+        (struct compat_vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    /* This function is only reached for 32-bit-shinfo guests, so the
+     * translated copy is unconditional. */
+    {
+        struct compat_vcpu_runstate_info info;
+
+        XLAT_vcpu_runstate_info(&info, &v->runstate);
+        memcpy(v->runstate_guest.phys, &info, sizeof(info));
+    }
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+
+    return true;
+}
+
+bool update_runstate_area(struct vcpu *v)
+{
+    bool rc = true;
+
+    if ( xchg(&v->runstate_in_use, 1) )
+        return rc;
+
+    switch ( v->runstate_guest_type )
+    {
+    case RUNSTATE_NONE:
+        break;
+
+    case RUNSTATE_VADDR:
+        rc = update_runstate_by_gvaddr(v);
+        break;
+
+    case RUNSTATE_PADDR:
+        if ( has_32bit_shinfo(v->domain) )
+            rc = update_runstate_by_gpaddr_compat(v);
+        else
+            rc = update_runstate_by_gpaddr_native(v);
+        break;
+    }
+
+    xchg(&v->runstate_in_use, 0);
+
+    return rc;
+}
+
 static void _update_runstate_area(struct vcpu *v)
 {
     if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
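The runstate_in_use flag used in update_runstate_area above (and in the
registration paths below) amounts to a tiny try-lock: the context-switch
path skips the update rather than spin, while registration and teardown
must wait for exclusive ownership. The intended discipline, written out
as helpers (illustrative only; the patch open-codes the xchg() calls):

    /* Context-switch path: never block; skip this update if busy. */
    static bool runstate_area_try_lock(struct vcpu *v)
    {
        return !xchg(&v->runstate_in_use, 1);
    }

    /* Registration/teardown paths: wait until any updater is done. */
    static void runstate_area_lock(struct vcpu *v)
    {
        while ( xchg(&v->runstate_in_use, 1) )
            cpu_relax();
    }

    static void runstate_area_unlock(struct vcpu *v)
    {
        xchg(&v->runstate_in_use, 0);
    }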
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc..85d0072 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -12,6 +12,8 @@ CHECK_vcpu_get_physid;
 #undef xen_vcpu_get_physid
 
+extern void discard_runstate_area(struct vcpu *v);
+
 int
 arch_compat_vcpu_op(
     int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
@@ -35,8 +37,16 @@ arch_compat_vcpu_op(
             !compat_handle_okay(area.addr.h, 1) )
             break;
 
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+
         rc = 0;
-        guest_from_compat_handle(v->runstate_guest.compat, area.addr.h);
+
+        guest_from_compat_handle(v->runstate_guest.virt.compat,
+                                 area.addr.h);
+
+        v->runstate_guest_type = RUNSTATE_VADDR;
 
         if ( v == current )
         {
@@ -49,7 +59,9 @@ arch_compat_vcpu_op(
             vcpu_runstate_get(v, &runstate);
             XLAT_vcpu_runstate_info(&info, &runstate);
         }
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->runstate_guest.virt.compat, &info, 1);
+
+        xchg(&v->runstate_in_use, 0);
 
         break;
     }
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 90c6607..d276b87 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -698,6 +698,74 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
     return 0;
 }
 
+static void unmap_runstate_area(struct vcpu *v)
+{
+    mfn_t mfn;
+
+    if ( !v->runstate_guest.phys )
+        return;
+
+    mfn = domain_page_map_to_mfn(v->runstate_guest.phys);
+
+    unmap_domain_page_global((void *)
+                             ((unsigned long)v->runstate_guest.phys &
+                              PAGE_MASK));
+
+    v->runstate_guest.phys = NULL;
+    put_page_and_type(mfn_to_page(mfn));
+}
+
+static int map_runstate_area(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area *area)
+{
+    unsigned long offset = area->addr.p & ~PAGE_MASK;
+    gfn_t gfn = gaddr_to_gfn(area->addr.p);
+    struct domain *d = v->domain;
+    void *mapping;
+    struct page_info *page;
+    size_t size = sizeof(struct vcpu_runstate_info);
+
+    if ( offset > (PAGE_SIZE - size) )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn_x(gfn), NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    mapping = __map_domain_page_global(page);
+
+    if ( mapping == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    v->runstate_guest.phys = mapping + offset;
+
+    return 0;
+}
+
+void discard_runstate_area(struct vcpu *v)
+{
+    if ( v->runstate_guest_type == RUNSTATE_PADDR )
+        unmap_runstate_area(v);
+
+    v->runstate_guest_type = RUNSTATE_NONE;
+}
+
+static void discard_runstate_area_locked(struct vcpu *v)
+{
+    while ( xchg(&v->runstate_in_use, 1) );
+    discard_runstate_area(v);
+    xchg(&v->runstate_in_use, 0);
+}
+
 int domain_kill(struct domain *d)
 {
     int rc = 0;
@@ -734,7 +802,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            discard_runstate_area_locked(v);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings
          * have to be put before we call put_domain. */
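The offset check in map_runstate_area encodes the no-page-crossing rule
from the interface comment. Assuming 4 KiB pages and the 48-byte native
vcpu_runstate_info layout, an area starting at page offset 4048 ends
exactly at the page boundary and is accepted, while one starting at 4072
would spill 24 bytes into the next page and is rejected. A standalone
self-check of that arithmetic (illustrative; both constants are
assumptions about the target configuration):

    #include <assert.h>
    #include <stddef.h>

    #define PAGE_SIZE 4096UL

    /* Mirrors: if ( offset > (PAGE_SIZE - size) ) return -EINVAL; */
    static int crosses_page(unsigned long offset, size_t size)
    {
        return offset > (PAGE_SIZE - size);
    }

    int main(void)
    {
        assert(!crosses_page(4048, 48)); /* ends at 4096: allowed  */
        assert(crosses_page(4072, 48));  /* ends at 4120: rejected */
        return 0;
    }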
@@ -1188,7 +1259,7 @@ int domain_soft_reset(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        set_xen_guest_handle(runstate_guest(v), NULL);
+        discard_runstate_area_locked(v);
         unmap_vcpu_info(v);
     }
 
@@ -1518,18 +1589,50 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
 
         rc = 0;
-        runstate_guest(v) = area.addr.h;
 
-        if ( v == current )
-        {
-            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
-        }
-        else
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+
+        if ( !guest_handle_is_null(area.addr.h) )
         {
-            vcpu_runstate_get(v, &runstate);
-            __copy_to_guest(runstate_guest(v), &runstate, 1);
+            runstate_guest_virt(v) = area.addr.h;
+            v->runstate_guest_type = RUNSTATE_VADDR;
+
+            if ( v == current )
+            {
+                __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
+            }
+            else
+            {
+                vcpu_runstate_get(v, &runstate);
+                __copy_to_guest(runstate_guest_virt(v), &runstate, 1);
+            }
         }
 
+        xchg(&v->runstate_in_use, 0);
+
+        break;
+    }
+
+    case VCPUOP_register_runstate_phys_memory_area:
+    {
+        struct vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area, arg, 1) )
+            break;
+
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+
+        rc = map_runstate_area(v, &area);
+        if ( !rc )
+            v->runstate_guest_type = RUNSTATE_PADDR;
+
+        xchg(&v->runstate_in_use, 0);
+
+        break;
+    }
 
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 3623af9..d7da4a3 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -235,6 +235,21 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+/*
+ * Register a shared memory area from which the guest may obtain its own
+ * runstate information without needing to execute a hypercall.
+ * Notes:
+ *  1. The registered address must be a guest physical address.
+ *  2. The registered runstate area must not cross a page boundary.
+ *  3. Only one shared area may be registered per VCPU. The shared area is
+ *     updated by the hypervisor each time the VCPU is scheduled. Thus
+ *     runstate.state will always be RUNSTATE_running and
+ *     runstate.state_entry_time will indicate the system time at which the
+ *     VCPU was last scheduled to run.
+ * @extra_arg == pointer to vcpu_register_runstate_memory_area structure.
+ */
+#define VCPUOP_register_runstate_phys_memory_area 14
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
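For reference, the existing public structure that the new hypercall
reuses, as declared earlier in xen/include/public/vcpu.h (shown here for
context only; consult the actual header of the Xen version at hand):

    struct vcpu_register_runstate_memory_area {
        union {
            XEN_GUEST_HANDLE(vcpu_runstate_info_t) h;
            struct vcpu_runstate_info *v;
            uint64_t p; /* the new hypercall reads this member as a
                         * guest physical address */
        } addr;
    };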
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2201fac..6c8de8f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -163,17 +163,31 @@ struct vcpu
     void            *sched_priv;    /* scheduler-specific data */
 
     struct vcpu_runstate_info runstate;
+
+    enum {
+        RUNSTATE_NONE = 0,
+        RUNSTATE_PADDR = 1,
+        RUNSTATE_VADDR = 2,
+    } runstate_guest_type;
+
+    unsigned long runstate_in_use;
+
+    union
+    {
 #ifndef CONFIG_COMPAT
-# define runstate_guest(v) ((v)->runstate_guest)
-    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt)
+        XEN_GUEST_HANDLE(vcpu_runstate_info_t) virt; /* guest address */
 #else
-# define runstate_guest(v) ((v)->runstate_guest.native)
-    union {
-        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
-        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
-    } runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt.native)
+        union {
+            XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
+            XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
+        } virt; /* guest address */
 #endif
 
+        void *phys;
+    } runstate_guest;
+
     /* last time when vCPU is scheduled out */
     uint64_t last_run_time;
 
-- 
2.7.4