From: Andrii Anisov <andrii.anisov@gmail.com>
Date: Thu, 30 May 2019 11:44:27 +0300
Message-Id: <1559205867-19597-4-git-send-email-andrii.anisov@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1559205867-19597-1-git-send-email-andrii.anisov@gmail.com>
References: <1559205867-19597-1-git-send-email-andrii.anisov@gmail.com>
Subject: [Xen-devel] [RESEND PATCH RFC 2] [DO NOT APPLY] introduce VCPUOP_register_runstate_phys_memory_area hypercall
Cc: Stefano Stabellini, Andrii Anisov, Wei Liu, Konrad Rzeszutek Wilk,
 George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
 Jan Beulich, xen-devel@lists.xenproject.org

From: Andrii Anisov <andrii.anisov@gmail.com>

An RFC version of runstate registration by guest physical address.
The runstate area is mapped on each update, and that single mapping is
used for all accesses within the update.

Signed-off-by: Andrii Anisov <andrii.anisov@gmail.com>
---
 xen/arch/arm/domain.c     |  63 ++++++++++++++++++++++++++---
 xen/common/domain.c       | 101 ++++++++++++++++++++++++++++++++++++++++++++--
 xen/include/public/vcpu.h |  15 +++++++
 xen/include/xen/sched.h   |  28 +++++++++----
 4 files changed, 190 insertions(+), 17 deletions(-)
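For context, registering the area from a guest would look roughly like the
sketch below. This is an illustration only, not part of the patch:
hypercall_vcpu_op() and virt_to_phys() are hypothetical stand-ins for the
guest kernel's hypercall wrapper and virtual-to-physical conversion, and
VCPUOP_register_runstate_phys_memory_area only exists in headers carrying
this patch.

    /* Guest-side registration sketch; assumes the Xen public headers
     * (xen/xen.h, xen/vcpu.h) with this patch applied are on the
     * include path. */
    #include <stdint.h>
    #include <xen/xen.h>
    #include <xen/vcpu.h>

    extern long hypercall_vcpu_op(int cmd, unsigned int vcpuid, void *arg);
    extern uint64_t virt_to_phys(const void *va);

    /* 64-byte alignment keeps the ~48-byte structure inside a single
     * page, satisfying the "must not cross a page boundary" rule. */
    static struct vcpu_runstate_info runstate_area
        __attribute__((aligned(64)));

    static long register_runstate_by_phys(unsigned int vcpu)
    {
        struct vcpu_register_runstate_memory_area area;

        /* The new hypercall takes a guest *physical* address. */
        area.addr.p = virt_to_phys(&runstate_area);

        return hypercall_vcpu_op(VCPUOP_register_runstate_phys_memory_area,
                                 vcpu, &area);
    }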
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index a9f7ff5..04c4cff 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -274,17 +274,15 @@ static void ctxt_switch_to(struct vcpu *n)
     virt_timer_restore(n);
 }
 
-/* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+static void update_runstate_by_gvaddr(struct vcpu *v)
 {
     void __user *guest_handle = NULL;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+    ASSERT(!guest_handle_is_null(runstate_guest_virt(v)));
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
+        guest_handle = &v->runstate_guest.virt.p->state_entry_time + 1;
         guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -292,7 +290,7 @@ static void update_runstate_area(struct vcpu *v)
         smp_wmb();
     }
 
-    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+    __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
 
     if ( guest_handle )
     {
@@ -303,6 +301,58 @@ static void update_runstate_area(struct vcpu *v)
     }
 }
 
+extern int map_runstate_area(struct vcpu *v, struct vcpu_runstate_info **area);
+extern void unmap_runstate_area(struct vcpu_runstate_info *area);
+
+static void update_runstate_by_gpaddr(struct vcpu *v)
+{
+    struct vcpu_runstate_info *runstate;
+
+    if ( map_runstate_area(v, &runstate) )
+        return;
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    memcpy(runstate, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+
+    unmap_runstate_area(runstate);
+}
+
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+static void update_runstate_area(struct vcpu *v)
+{
+    if ( xchg(&v->runstate_in_use, 1) )
+        return;
+
+    switch ( v->runstate_guest_type )
+    {
+    case RUNSTATE_NONE:
+        break;
+
+    case RUNSTATE_VADDR:
+        update_runstate_by_gvaddr(v);
+        break;
+
+    case RUNSTATE_PADDR:
+        update_runstate_by_gpaddr(v);
+        break;
+    }
+
+    xchg(&v->runstate_in_use, 0);
+}
+
 static void schedule_tail(struct vcpu *prev)
 {
     ctxt_switch_from(prev);
@@ -998,6 +1048,7 @@ long do_arm_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) a
     {
     case VCPUOP_register_vcpu_info:
     case VCPUOP_register_runstate_memory_area:
+    case VCPUOP_register_runstate_phys_memory_area:
        return do_vcpu_op(cmd, vcpuid, arg);
     default:
         return -EINVAL;
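A note on the synchronization above: update_runstate_area() takes the
runstate_in_use flag as a try-lock and simply skips the update when it is
busy, whereas the registration and teardown paths (in common/domain.c
below) must spin until they own the flag. The two idioms, sketched with
C11 atomics purely for illustration (Xen uses its own xchg() primitive):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_ulong runstate_in_use;

    /* Context-switch path: never block; skip this update when busy. */
    static bool runstate_try_begin(void)
    {
        /* Previous value 0 means we now own the flag. */
        return atomic_exchange(&runstate_in_use, 1) == 0;
    }

    /* Registration/teardown path: spin until the flag is owned. */
    static void runstate_acquire(void)
    {
        while ( atomic_exchange(&runstate_in_use, 1) )
            ;  /* previous value was 1: someone else holds it, retry */
    }

    static void runstate_release(void)
    {
        atomic_exchange(&runstate_in_use, 0);
    }

The asymmetry is deliberate: the context-switch path must never wait,
while registration is rare and can afford to spin.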
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 32bca8d..f167a68 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -700,6 +700,68 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
     return 0;
 }
 
+void unmap_runstate_area(struct vcpu_runstate_info *area)
+{
+    mfn_t mfn;
+
+    ASSERT(area != NULL);
+
+    mfn = _mfn(domain_page_map_to_mfn(area));
+
+    unmap_domain_page_global((void *)
+                             ((unsigned long)area &
+                              PAGE_MASK));
+
+    put_page_and_type(mfn_to_page(mfn));
+}
+
+int map_runstate_area(struct vcpu *v, struct vcpu_runstate_info **area)
+{
+    unsigned long offset = v->runstate_guest.phys & ~PAGE_MASK;
+    gfn_t gfn = gaddr_to_gfn(v->runstate_guest.phys);
+    struct domain *d = v->domain;
+    void *mapping;
+    struct page_info *page;
+    size_t size = sizeof(struct vcpu_runstate_info);
+
+    if ( offset > (PAGE_SIZE - size) )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn_x(gfn), NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    mapping = __map_domain_page_global(page);
+
+    if ( mapping == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    *area = mapping + offset;
+
+    return 0;
+}
+
+static void discard_runstate_area(struct vcpu *v)
+{
+    v->runstate_guest_type = RUNSTATE_NONE;
+}
+
+static void discard_runstate_area_locked(struct vcpu *v)
+{
+    while ( xchg(&v->runstate_in_use, 1) );
+    discard_runstate_area(v);
+    xchg(&v->runstate_in_use, 0);
+}
+
 int domain_kill(struct domain *d)
 {
     int rc = 0;
@@ -738,7 +800,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            discard_runstate_area_locked(v);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings
          * have to be put before we call put_domain. */
@@ -1192,7 +1257,7 @@ int domain_soft_reset(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        set_xen_guest_handle(runstate_guest(v), NULL);
+        discard_runstate_area_locked(v);
         unmap_vcpu_info(v);
     }
 
@@ -1520,18 +1585,46 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
 
         rc = 0;
-        runstate_guest(v) = area.addr.h;
+
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+
+        runstate_guest_virt(v) = area.addr.h;
+        v->runstate_guest_type = RUNSTATE_VADDR;
 
         if ( v == current )
         {
-            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+            __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
         }
         else
        {
             vcpu_runstate_get(v, &runstate);
-            __copy_to_guest(runstate_guest(v), &runstate, 1);
+            __copy_to_guest(runstate_guest_virt(v), &runstate, 1);
         }
 
+        xchg(&v->runstate_in_use, 0);
+
+        break;
+    }
+
+    case VCPUOP_register_runstate_phys_memory_area:
+    {
+        struct vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area, arg, 1) )
+            break;
+
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+        v->runstate_guest.phys = area.addr.p;
+        v->runstate_guest_type = RUNSTATE_PADDR;
+
+        xchg(&v->runstate_in_use, 0);
+        rc = 0;
+        break;
+    }
 
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 3623af9..d7da4a3 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -235,6 +235,21 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+/*
+ * Register a shared memory area from which the guest may obtain its own
+ * runstate information without needing to execute a hypercall.
+ * Notes:
+ *  1. The registered address must be a guest physical address.
+ *  2. The registered runstate area must not cross a page boundary.
+ *  3. Only one shared area may be registered per VCPU. The shared area is
+ *     updated by the hypervisor each time the VCPU is scheduled. Thus
+ *     runstate.state will always be RUNSTATE_running and
+ *     runstate.state_entry_time will indicate the system time at which the
+ *     VCPU was last scheduled to run.
+ * @extra_arg == pointer to vcpu_register_runstate_memory_area structure.
+ */
+#define VCPUOP_register_runstate_phys_memory_area 14
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
 /*
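The XEN_RUNSTATE_UPDATE bit referenced above gives the guest a
seqlock-style way to read the area consistently. A minimal guest-side
sketch, assuming the vcpu_runstate_info layout from the public ABI and a
guest that enabled VM_ASSIST for runstate_update_flag; rmb() is a
placeholder for the guest's read barrier:

    #include <stdint.h>

    /* Layout from the public ABI (xen/include/public/vcpu.h). */
    struct vcpu_runstate_info {
        int      state;
        uint64_t state_entry_time;   /* bit 63 = XEN_RUNSTATE_UPDATE */
        uint64_t time[4];            /* ns spent in each RUNSTATE_*  */
    };
    #define XEN_RUNSTATE_UPDATE (1ULL << 63)
    #define RUNSTATE_running    0

    #define rmb() __atomic_thread_fence(__ATOMIC_ACQUIRE)  /* placeholder */

    /* Retry until a read that did not overlap a hypervisor update. */
    static uint64_t running_time_ns(volatile struct vcpu_runstate_info *r)
    {
        uint64_t t, e1, e2;

        do {
            e1 = r->state_entry_time;
            rmb();
            t = r->time[RUNSTATE_running];
            rmb();
            e2 = r->state_entry_time;
        } while ( (e1 & XEN_RUNSTATE_UPDATE) || e1 != e2 );

        return t;
    }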
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index edee52d..8ac597b 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -163,17 +163,31 @@ struct vcpu
     void            *sched_priv;    /* scheduler-specific data */
 
     struct vcpu_runstate_info runstate;
+
+    enum {
+        RUNSTATE_NONE = 0,
+        RUNSTATE_PADDR = 1,
+        RUNSTATE_VADDR = 2,
+    } runstate_guest_type;
+
+    unsigned long runstate_in_use;
+
+    union
+    {
 #ifndef CONFIG_COMPAT
-# define runstate_guest(v) ((v)->runstate_guest)
-    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt)
+        XEN_GUEST_HANDLE(vcpu_runstate_info_t) virt; /* guest address */
 #else
-# define runstate_guest(v) ((v)->runstate_guest.native)
-    union {
-        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
-        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
-    } runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt.native)
+        union {
+            XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
+            XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
+        } virt; /* guest address */
 #endif
 
+        paddr_t phys;
+    } runstate_guest;
+
     /* last time when vCPU is scheduled out */
     uint64_t last_run_time;
 
-- 
2.7.4
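One observation on the sched.h change: runstate_guest becomes a tagged
union, with runstate_guest_type selecting which member is valid. A
hypothetical helper (not in the patch) makes the intended access
discipline explicit:

    /* Hypothetical hypervisor-side helper: consult the tag before
     * touching either member of v->runstate_guest. */
    static bool vcpu_runstate_area_registered(const struct vcpu *v)
    {
        switch ( v->runstate_guest_type )
        {
        case RUNSTATE_VADDR:
            return !guest_handle_is_null(runstate_guest_virt(v));
        case RUNSTATE_PADDR:
            return true;  /* a physical address was latched at registration */
        case RUNSTATE_NONE:
        default:
            return false;
        }
    }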