From: Andrii Anisov <andrii.anisov@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Andrii Anisov, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich, xen-devel@lists.xenproject.org
Date: Thu, 30 May 2019 11:44:26 +0300
Subject: [Xen-devel] [RESEND PATCH RFC 1] [DO NOT APPLY] introduce VCPUOP_register_runstate_phys_memory_area hypercall
Message-Id: <1559205867-19597-3-git-send-email-andrii.anisov@gmail.com>
In-Reply-To: <1559205867-19597-1-git-send-email-andrii.anisov@gmail.com>
References: <1559205867-19597-1-git-send-email-andrii.anisov@gmail.com>
List-Id: Xen developer discussion
X-Mailer: git-send-email 2.7.4

From: Andrii Anisov <andrii.anisov@gmail.com>

An RFC version of runstate registration by guest physical address. Runstate area access is implemented with a mapping on each access, as the old interface did.
Signed-off-by: Andrii Anisov <andrii.anisov@gmail.com>
---
 xen/arch/arm/domain.c     | 63 ++++++++++++++++++++++++++++++++++++++++++-----
 xen/common/domain.c       | 51 +++++++++++++++++++++++++++++++++++---
 xen/include/public/vcpu.h | 15 +++++++++++
 xen/include/xen/sched.h   | 28 +++++++++++++++------
 4 files changed, 140 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index a9f7ff5..610957a 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -274,17 +274,15 @@ static void ctxt_switch_to(struct vcpu *n)
     virt_timer_restore(n);
 }
 
-/* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+static void update_runstate_by_gvaddr(struct vcpu *v)
 {
     void __user *guest_handle = NULL;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+    ASSERT(!guest_handle_is_null(runstate_guest_virt(v)));
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
+        guest_handle = &v->runstate_guest.virt.p->state_entry_time + 1;
         guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -292,7 +290,7 @@ static void update_runstate_area(struct vcpu *v)
         smp_wmb();
     }
 
-    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+    __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
 
     if ( guest_handle )
     {
@@ -303,6 +301,58 @@ static void update_runstate_area(struct vcpu *v)
     }
 }
 
+extern int map_runstate_area(struct vcpu *v, struct vcpu_runstate_info **area);
+extern void unmap_runstate_area(struct vcpu_runstate_info *area);
+
+static void update_runstate_by_gpaddr(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    paddr_t gpaddr = 0;
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        gpaddr = v->runstate_guest.phys +
+                 offsetof(struct vcpu_runstate_info, state_entry_time) +
+                 sizeof(uint64_t) - 1;
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+        copy_to_guest_phys_flush_dcache(d, gpaddr,
+                                        (void *)(&v->runstate.state_entry_time + 1) - 1,
+                                        1);
+        smp_wmb();
+    }
+
+    copy_to_guest_phys_flush_dcache(d, v->runstate_guest.phys, &v->runstate,
+                                    sizeof(struct vcpu_runstate_info));
+
+    if ( gpaddr )
+    {
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        copy_to_guest_phys_flush_dcache(d, gpaddr,
+                                        (void *)(&v->runstate.state_entry_time + 1) - 1,
+                                        1);
+    }
+}
+
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+static void update_runstate_area(struct vcpu *v)
+{
+    if ( xchg(&v->runstate_in_use, 1) )
+        return;
+
+    switch ( v->runstate_guest_type )
+    {
+    case RUNSTATE_NONE:
+        break;
+
+    case RUNSTATE_VADDR:
+        update_runstate_by_gvaddr(v);
+        break;
+
+    case RUNSTATE_PADDR:
+        update_runstate_by_gpaddr(v);
+        break;
+    }
+
+    xchg(&v->runstate_in_use, 0);
+}
+
 static void schedule_tail(struct vcpu *prev)
 {
     ctxt_switch_from(prev);
@@ -998,6 +1048,7 @@ long do_arm_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
     {
     case VCPUOP_register_vcpu_info:
     case VCPUOP_register_runstate_memory_area:
+    case VCPUOP_register_runstate_phys_memory_area:
         return do_vcpu_op(cmd, vcpuid, arg);
     default:
         return -EINVAL;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 32bca8d..b58d6dd 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -700,6 +700,18 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
     return 0;
 }
 
+static void discard_runstate_area(struct vcpu *v)
+{
+    v->runstate_guest_type = RUNSTATE_NONE;
+}
+
+static void discard_runstate_area_locked(struct vcpu *v)
+{
+    while ( xchg(&v->runstate_in_use, 1) );
+    discard_runstate_area(v);
+    xchg(&v->runstate_in_use, 0);
+}
+
 int domain_kill(struct domain *d)
 {
     int rc = 0;
@@ -738,7 +750,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            discard_runstate_area_locked(v);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings
          * have to be put before we call put_domain. */
@@ -1192,7 +1207,7 @@ int domain_soft_reset(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        set_xen_guest_handle(runstate_guest(v), NULL);
+        discard_runstate_area_locked(v);
         unmap_vcpu_info(v);
     }
 
@@ -1520,18 +1535,46 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
 
         rc = 0;
-        runstate_guest(v) = area.addr.h;
+
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+
+        runstate_guest_virt(v) = area.addr.h;
+        v->runstate_guest_type = RUNSTATE_VADDR;
 
         if ( v == current )
         {
-            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+            __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
         }
         else
         {
             vcpu_runstate_get(v, &runstate);
-            __copy_to_guest(runstate_guest(v), &runstate, 1);
+            __copy_to_guest(runstate_guest_virt(v), &runstate, 1);
         }
 
+        xchg(&v->runstate_in_use, 0);
+
+        break;
+    }
+
+    case VCPUOP_register_runstate_phys_memory_area:
+    {
+        struct vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area, arg, 1) )
+            break;
+
+        while ( xchg(&v->runstate_in_use, 1) );
+
+        discard_runstate_area(v);
+        v->runstate_guest.phys = area.addr.p;
+        v->runstate_guest_type = RUNSTATE_PADDR;
+
+        xchg(&v->runstate_in_use, 0);
+        rc = 0;
+        break;
     }
 
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 3623af9..d7da4a3 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -235,6 +235,21 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+/*
+ * Register a shared memory area from which the guest may obtain its own
+ * runstate information without needing to execute a hypercall.
+ * Notes:
+ *  1. The registered address must be a guest physical address.
+ *  2. The registered runstate area must not cross a page boundary.
+ *  3. Only one shared area may be registered per VCPU. The shared area is
+ *     updated by the hypervisor each time the VCPU is scheduled. Thus
+ *     runstate.state will always be RUNSTATE_running and
+ *     runstate.state_entry_time will indicate the system time at which the
+ *     VCPU was last scheduled to run.
+ * @extra_arg == pointer to vcpu_register_runstate_memory_area structure.
+ */
+#define VCPUOP_register_runstate_phys_memory_area 14
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
 /*
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index edee52d..8ac597b 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -163,17 +163,31 @@ struct vcpu
     void            *sched_priv;    /* scheduler-specific data */
 
     struct vcpu_runstate_info runstate;
+
+    enum {
+        RUNSTATE_NONE  = 0,
+        RUNSTATE_PADDR = 1,
+        RUNSTATE_VADDR = 2,
+    } runstate_guest_type;
+
+    unsigned long runstate_in_use;
+
+    union
+    {
 #ifndef CONFIG_COMPAT
-# define runstate_guest(v) ((v)->runstate_guest)
-    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt)
+        XEN_GUEST_HANDLE(vcpu_runstate_info_t) virt; /* guest address */
 #else
-# define runstate_guest(v) ((v)->runstate_guest.native)
-    union {
-        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
-        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
-    } runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt.native)
+        union {
+            XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
+            XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
+        } virt; /* guest address */
 #endif
 
+        paddr_t phys;
+    } runstate_guest;
+
     /* last time when vCPU is scheduled out */
     uint64_t last_run_time;
 
-- 
2.7.4