From nobody Wed May 8 23:10:08 2024
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Thu, 30 May 2019 17:18:15 +0300
Subject: [Xen-devel] [PATCH 1/9] tools/libxc: Consistent usage of xc_vm_event_* interface
Cc: Petre Pircalabu, Ian Jackson, Wei Liu

Modified xc_mem_paging_enable to use xc_vm_event_enable directly, and moved the ring_page handling from the client (xenpaging) into libxc. Restricted xc_vm_event_control usage to the simplest domctls, which do not expect any return values, and changed xc_vm_event_enable to call do_domctl directly. Removed xc_memshr_ring_enable/disable and xc_memshr_domain_resume.
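The key behavioural change for clients: xc_mem_paging_enable now returns the mapped ring page (NULL on failure, with errno set) instead of an int, so xenpaging no longer maps the ring itself. A minimal sketch of the new client-side pattern, using a stub in place of libxc — the stub, its port value, and the error messages are illustrative, not the real implementation:

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/*
 * Stub standing in for the reworked libxc call: on success it returns
 * the mapped ring page and writes the event channel port; on failure it
 * returns NULL with errno set.  (The real xc_mem_paging_enable maps a
 * guest page; the port value here is illustrative.)
 */
static void *stub_mem_paging_enable(uint32_t domain_id, uint32_t *port)
{
    (void)domain_id;
    if ( !port )
    {
        errno = EINVAL;
        return NULL;
    }
    *port = 42;
    return calloc(1, PAGE_SIZE);   /* stands in for the mapped ring page */
}

/* Client-side pattern after the patch: test the pointer, not an rc. */
static void *setup_paging_ring(uint32_t domain_id, uint32_t *port)
{
    void *ring_page = stub_mem_paging_enable(domain_id, port);

    if ( ring_page == NULL )
    {
        switch ( errno )
        {
        case EBUSY:
            fprintf(stderr, "vm_event ring is already enabled\n");
            break;
        default:
            fprintf(stderr, "enable failed: errno %d\n", errno);
            break;
        }
    }
    return ring_page;
}
```

The EBUSY branch mirrors the errno switch xenpaging already performs after a failed enable.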
Signed-off-by: Petre Pircalabu --- tools/libxc/include/xenctrl.h | 49 +-------------------------------- tools/libxc/xc_mem_paging.c | 23 +++++----------- tools/libxc/xc_memshr.c | 34 ----------------------- tools/libxc/xc_monitor.c | 31 +++++++++++++++++---- tools/libxc/xc_private.h | 2 +- tools/libxc/xc_vm_event.c | 64 ++++++++++++++++-----------------------= ---- tools/xenpaging/xenpaging.c | 42 +++------------------------- 7 files changed, 62 insertions(+), 183 deletions(-) diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h index 538007a..28fdbc0 100644 --- a/tools/libxc/include/xenctrl.h +++ b/tools/libxc/include/xenctrl.h @@ -1949,7 +1949,7 @@ int xc_altp2m_change_gfn(xc_interface *handle, uint32= _t domid, * Hardware-Assisted Paging (i.e. Intel EPT, AMD NPT). Moreover, AMD NPT * support is considered experimental. */ -int xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id, uint32_t *= port); +void *xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id, uint32_t= *port); int xc_mem_paging_disable(xc_interface *xch, uint32_t domain_id); int xc_mem_paging_resume(xc_interface *xch, uint32_t domain_id); int xc_mem_paging_nominate(xc_interface *xch, uint32_t domain_id, @@ -2082,53 +2082,6 @@ int xc_memshr_control(xc_interface *xch, uint32_t domid, int enable); =20 -/* Create a communication ring in which the hypervisor will place ENOMEM - * notifications. - * - * ENOMEM happens when unsharing pages: a Copy-on-Write duplicate needs to= be - * allocated, and thus the out-of-memory error occurr. - * - * For complete examples on how to plumb a notification ring, look into - * xenpaging or xen-access. - * - * On receipt of a notification, the helper should ensure there is memory - * available to the domain before retrying. - * - * If a domain encounters an ENOMEM condition when sharing and this ring - * has not been set up, the hypervisor will crash the domain. 
- * - * Fails with: - * EINVAL if port is NULL - * EINVAL if the sharing ring has already been enabled - * ENOSYS if no guest gfn has been specified to host the ring via an hvm = param - * EINVAL if the gfn for the ring has not been populated - * ENOENT if the gfn for the ring is paged out, or cannot be unshared - * EINVAL if the gfn for the ring cannot be written to - * EINVAL if the domain is dying - * ENOSPC if an event channel cannot be allocated for the ring - * ENOMEM if memory cannot be allocated for internal data structures - * EINVAL or EACCESS if the request is denied by the security policy - */ - -int xc_memshr_ring_enable(xc_interface *xch,=20 - uint32_t domid, - uint32_t *port); -/* Disable the ring for ENOMEM communication. - * May fail with EINVAL if the ring was not enabled in the first place. - */ -int xc_memshr_ring_disable(xc_interface *xch,=20 - uint32_t domid); - -/* - * Calls below return EINVAL if sharing has not been enabled for the domain - * Calls below return EINVAL if the domain is dying - */ -/* Once a reponse to an ENOMEM notification is prepared, the tool can - * notify the hypervisor to re-schedule the faulting vcpu of the domain wi= th an - * event channel kick and/or this call. */ -int xc_memshr_domain_resume(xc_interface *xch, - uint32_t domid); - /* Select a page for sharing.=20 * * A 64 bit opaque handle will be stored in handle. 
The hypervisor ensures diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c index a067706..08468fb 100644 --- a/tools/libxc/xc_mem_paging.c +++ b/tools/libxc/xc_mem_paging.c @@ -37,35 +37,26 @@ static int xc_mem_paging_memop(xc_interface *xch, uint3= 2_t domain_id, return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo)); } =20 -int xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id, - uint32_t *port) +void *xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id, + uint32_t *port) { - if ( !port ) - { - errno =3D EINVAL; - return -1; - } - - return xc_vm_event_control(xch, domain_id, - XEN_VM_EVENT_ENABLE, - XEN_DOMCTL_VM_EVENT_OP_PAGING, - port); + return xc_vm_event_enable(xch, domain_id, + XEN_DOMCTL_VM_EVENT_OP_PAGING, + port); } =20 int xc_mem_paging_disable(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, - XEN_DOMCTL_VM_EVENT_OP_PAGING, - NULL); + XEN_DOMCTL_VM_EVENT_OP_PAGING); } =20 int xc_mem_paging_resume(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_RESUME, - XEN_DOMCTL_VM_EVENT_OP_PAGING, - NULL); + XEN_DOMCTL_VM_EVENT_OP_PAGING); } =20 int xc_mem_paging_nominate(xc_interface *xch, uint32_t domain_id, uint64_t= gfn) diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c index d5e135e..06f613a 100644 --- a/tools/libxc/xc_memshr.c +++ b/tools/libxc/xc_memshr.c @@ -41,31 +41,6 @@ int xc_memshr_control(xc_interface *xch, return do_domctl(xch, &domctl); } =20 -int xc_memshr_ring_enable(xc_interface *xch,=20 - uint32_t domid, - uint32_t *port) -{ - if ( !port ) - { - errno =3D EINVAL; - return -1; - } - - return xc_vm_event_control(xch, domid, - XEN_VM_EVENT_ENABLE, - XEN_DOMCTL_VM_EVENT_OP_SHARING, - port); -} - -int xc_memshr_ring_disable(xc_interface *xch,=20 - uint32_t domid) -{ - return xc_vm_event_control(xch, domid, - XEN_VM_EVENT_DISABLE, - XEN_DOMCTL_VM_EVENT_OP_SHARING, - NULL); -} - static int 
xc_memshr_memop(xc_interface *xch, uint32_t domid, xen_mem_sharing_op_t *mso) { @@ -200,15 +175,6 @@ int xc_memshr_range_share(xc_interface *xch, return xc_memshr_memop(xch, source_domain, &mso); } =20 -int xc_memshr_domain_resume(xc_interface *xch, - uint32_t domid) -{ - return xc_vm_event_control(xch, domid, - XEN_VM_EVENT_RESUME, - XEN_DOMCTL_VM_EVENT_OP_SHARING, - NULL); -} - int xc_memshr_debug_gfn(xc_interface *xch, uint32_t domid, unsigned long gfn) diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c index 4ac823e..d190c29 100644 --- a/tools/libxc/xc_monitor.c +++ b/tools/libxc/xc_monitor.c @@ -24,24 +24,43 @@ =20 void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *p= ort) { - return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN, - port); + void *buffer; + int saved_errno; + + /* Pause the domain for ring page setup */ + if ( xc_domain_pause(xch, domain_id) ) + { + PERROR("Unable to pause domain\n"); + return NULL; + } + + buffer =3D xc_vm_event_enable(xch, domain_id, + HVM_PARAM_MONITOR_RING_PFN, + port); + saved_errno =3D errno; + if ( xc_domain_unpause(xch, domain_id) ) + { + if ( buffer ) + saved_errno =3D errno; + PERROR("Unable to unpause domain"); + } + + errno =3D saved_errno; + return buffer; } =20 int xc_monitor_disable(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, - XEN_DOMCTL_VM_EVENT_OP_MONITOR, - NULL); + XEN_DOMCTL_VM_EVENT_OP_MONITOR); } =20 int xc_monitor_resume(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_RESUME, - XEN_DOMCTL_VM_EVENT_OP_MONITOR, - NULL); + XEN_DOMCTL_VM_EVENT_OP_MONITOR); } =20 int xc_monitor_get_capabilities(xc_interface *xch, uint32_t domain_id, diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h index adc3b6a..663e78b 100644 --- a/tools/libxc/xc_private.h +++ b/tools/libxc/xc_private.h @@ -412,7 +412,7 @@ int xc_ffs64(uint64_t x); * vm_event 
operations. Internal use only. */ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned in= t op, - unsigned int mode, uint32_t *port); + unsigned int mode); /* * Enables vm_event and returns the mapped ring page indicated by param. * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c index a97c615..ea10366 100644 --- a/tools/libxc/xc_vm_event.c +++ b/tools/libxc/xc_vm_event.c @@ -23,20 +23,16 @@ #include "xc_private.h" =20 int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned in= t op, - unsigned int mode, uint32_t *port) + unsigned int mode) { DECLARE_DOMCTL; - int rc; =20 domctl.cmd =3D XEN_DOMCTL_vm_event_op; domctl.domain =3D domain_id; domctl.u.vm_event_op.op =3D op; domctl.u.vm_event_op.mode =3D mode; =20 - rc =3D do_domctl(xch, &domctl); - if ( !rc && port ) - *port =3D domctl.u.vm_event_op.u.enable.port; - return rc; + return do_domctl(xch, &domctl); } =20 void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param, @@ -46,7 +42,8 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t doma= in_id, int param, uint64_t pfn; xen_pfn_t ring_pfn, mmap_pfn; unsigned int op, mode; - int rc1, rc2, saved_errno; + int rc; + DECLARE_DOMCTL; =20 if ( !port ) { @@ -54,17 +51,9 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t dom= ain_id, int param, return NULL; } =20 - /* Pause the domain for ring page setup */ - rc1 =3D xc_domain_pause(xch, domain_id); - if ( rc1 !=3D 0 ) - { - PERROR("Unable to pause domain\n"); - return NULL; - } - /* Get the pfn of the ring page */ - rc1 =3D xc_hvm_param_get(xch, domain_id, param, &pfn); - if ( rc1 !=3D 0 ) + rc =3D xc_hvm_param_get(xch, domain_id, param, &pfn); + if ( rc !=3D 0 ) { PERROR("Failed to get pfn of ring page\n"); goto out; @@ -72,13 +61,13 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t do= main_id, int param, =20 ring_pfn =3D pfn; mmap_pfn =3D pfn; - rc1 =3D xc_get_pfn_type_batch(xch, 
domain_id, 1, &mmap_pfn); - if ( rc1 || mmap_pfn & XEN_DOMCTL_PFINFO_XTAB ) + rc =3D xc_get_pfn_type_batch(xch, domain_id, 1, &mmap_pfn); + if ( rc || mmap_pfn & XEN_DOMCTL_PFINFO_XTAB ) { /* Page not in the physmap, try to populate it */ - rc1 =3D xc_domain_populate_physmap_exact(xch, domain_id, 1, 0, 0, + rc =3D xc_domain_populate_physmap_exact(xch, domain_id, 1, 0, 0, &ring_pfn); - if ( rc1 !=3D 0 ) + if ( rc !=3D 0 ) { PERROR("Failed to populate ring pfn\n"); goto out; @@ -87,7 +76,7 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t doma= in_id, int param, =20 mmap_pfn =3D ring_pfn; ring_page =3D xc_map_foreign_pages(xch, domain_id, PROT_READ | PROT_WR= ITE, - &mmap_pfn, 1); + &mmap_pfn, 1); if ( !ring_page ) { PERROR("Could not map the ring page\n"); @@ -117,40 +106,35 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t = domain_id, int param, */ default: errno =3D EINVAL; - rc1 =3D -1; + rc =3D -1; goto out; } =20 - rc1 =3D xc_vm_event_control(xch, domain_id, op, mode, port); - if ( rc1 !=3D 0 ) + domctl.cmd =3D XEN_DOMCTL_vm_event_op; + domctl.domain =3D domain_id; + domctl.u.vm_event_op.op =3D op; + domctl.u.vm_event_op.mode =3D mode; + + rc =3D do_domctl(xch, &domctl); + if ( rc !=3D 0 ) { PERROR("Failed to enable vm_event\n"); goto out; } =20 + *port =3D domctl.u.vm_event_op.u.enable.port; + /* Remove the ring_pfn from the guest's physmap */ - rc1 =3D xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ri= ng_pfn); - if ( rc1 !=3D 0 ) + rc =3D xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &rin= g_pfn); + if ( rc !=3D 0 ) PERROR("Failed to remove ring page from guest physmap"); =20 out: - saved_errno =3D errno; - - rc2 =3D xc_domain_unpause(xch, domain_id); - if ( rc1 !=3D 0 || rc2 !=3D 0 ) + if ( rc !=3D 0 ) { - if ( rc2 !=3D 0 ) - { - if ( rc1 =3D=3D 0 ) - saved_errno =3D errno; - PERROR("Unable to unpause domain"); - } - if ( ring_page ) xenforeignmemory_unmap(xch->fmem, ring_page, 1); ring_page =3D NULL; - - errno =3D 
saved_errno; } =20 return ring_page; diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c index d0571ca..b4a3a5c 100644 --- a/tools/xenpaging/xenpaging.c +++ b/tools/xenpaging/xenpaging.c @@ -337,40 +337,11 @@ static struct xenpaging *xenpaging_init(int argc, cha= r *argv[]) goto err; } =20 - /* Map the ring page */ - xc_get_hvm_param(xch, paging->vm_event.domain_id,=20 - HVM_PARAM_PAGING_RING_PFN, &ring_pfn); - mmap_pfn =3D ring_pfn; - paging->vm_event.ring_page =3D=20 - xc_map_foreign_pages(xch, paging->vm_event.domain_id, - PROT_READ | PROT_WRITE, &mmap_pfn, 1); - if ( !paging->vm_event.ring_page ) - { - /* Map failed, populate ring page */ - rc =3D xc_domain_populate_physmap_exact(paging->xc_handle,=20 - paging->vm_event.domain_id, - 1, 0, 0, &ring_pfn); - if ( rc !=3D 0 ) - { - PERROR("Failed to populate ring gfn\n"); - goto err; - } - - paging->vm_event.ring_page =3D=20 - xc_map_foreign_pages(xch, paging->vm_event.domain_id, - PROT_READ | PROT_WRITE, - &mmap_pfn, 1); - if ( !paging->vm_event.ring_page ) - { - PERROR("Could not map the ring page\n"); - goto err; - } - } - =20 /* Initialise Xen */ - rc =3D xc_mem_paging_enable(xch, paging->vm_event.domain_id, - &paging->vm_event.evtchn_port); - if ( rc !=3D 0 ) + paging->vm_event.ring_page =3D + xc_mem_paging_enable(xch, paging->vm_event.domain_id, + &paging->vm_event.evtchn_port); + if ( paging->vm_event.ring_page =3D=3D NULL ) { switch ( errno ) { case EBUSY: @@ -418,11 +389,6 @@ static struct xenpaging *xenpaging_init(int argc, char= *argv[]) (vm_event_sring_t *)paging->vm_event.ring_page, PAGE_SIZE); =20 - /* Now that the ring is set, remove it from the guest's physmap */ - if ( xc_domain_decrease_reservation_exact(xch,=20 - paging->vm_event.domain_id, 1, 0, &ring_pfn) ) - PERROR("Failed to remove ring from guest physmap"); - /* Get max_pages from guest if not provided via cmdline */ if ( !paging->max_pages ) { --=20 2.7.4 _______________________________________________ Xen-devel mailing 
list Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Wed May 8 23:10:08 2024
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Thu, 30 May 2019 17:18:16 +0300
Message-Id: <9cde4926b56fa05afffee270e5e28a3b9bd830d9.1559224640.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH 2/9] vm_event: Define VM_EVENT type
Cc: Petre Pircalabu, Stefano Stabellini, Razvan Cojocaru, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich

Define the type for each of the supported vm_event rings (paging, monitor and sharing) and replace the ring's param field with this type. Replace XEN_DOMCTL_VM_EVENT_OP_* occurrences with their corresponding XEN_VM_EVENT_TYPE_* counterparts.
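With the ring now selected by vm_event type rather than by HVM param, the translation lives in a single helper. Below is a standalone sketch of that lookup, mirroring xc_vm_event_ring_pfn_param() from this patch; the numeric HVM param values are inlined for illustration and are an assumption about the contemporary public headers:

```c
#include <errno.h>

/* vm_event ring types (xen/include/public/vm_event.h, per this patch) */
#define XEN_VM_EVENT_TYPE_PAGING   1
#define XEN_VM_EVENT_TYPE_MONITOR  2
#define XEN_VM_EVENT_TYPE_SHARING  3

/* Ring-pfn HVM params (assumed values, cf. xen/include/public/hvm/params.h) */
#define HVM_PARAM_PAGING_RING_PFN  27
#define HVM_PARAM_MONITOR_RING_PFN 28
#define HVM_PARAM_SHARING_RING_PFN 29

/*
 * Mirror of xc_vm_event_ring_pfn_param(): translate a vm_event ring
 * type into the HVM param that holds the ring page's pfn.  Returns 0
 * on success, -EINVAL on a bad type or NULL output pointer.
 */
static int ring_pfn_param(int type, int *param)
{
    if ( !param )
        return -EINVAL;

    switch ( type )
    {
    case XEN_VM_EVENT_TYPE_PAGING:
        *param = HVM_PARAM_PAGING_RING_PFN;
        break;
    case XEN_VM_EVENT_TYPE_MONITOR:
        *param = HVM_PARAM_MONITOR_RING_PFN;
        break;
    case XEN_VM_EVENT_TYPE_SHARING:
        *param = HVM_PARAM_SHARING_RING_PFN;
        break;
    default:
        return -EINVAL;
    }
    return 0;
}
```

Centralising the mapping is what lets xc_vm_event_enable take a XEN_VM_EVENT_TYPE_* value while still reading the ring pfn through the right HVM param.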
Signed-off-by: Petre Pircalabu --- tools/libxc/include/xenctrl.h | 1 + tools/libxc/xc_mem_paging.c | 6 ++-- tools/libxc/xc_monitor.c | 6 ++-- tools/libxc/xc_private.h | 8 ++--- tools/libxc/xc_vm_event.c | 70 ++++++++++++++++++------------------- xen/common/vm_event.c | 12 +++---- xen/include/public/domctl.h | 81 ++++++---------------------------------= ---- xen/include/public/vm_event.h | 31 +++++++++++++++++ 8 files changed, 93 insertions(+), 122 deletions(-) diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h index 28fdbc0..943b933 100644 --- a/tools/libxc/include/xenctrl.h +++ b/tools/libxc/include/xenctrl.h @@ -46,6 +46,7 @@ #include #include #include +#include =20 #include "xentoollog.h" =20 diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c index 08468fb..37a8224 100644 --- a/tools/libxc/xc_mem_paging.c +++ b/tools/libxc/xc_mem_paging.c @@ -41,7 +41,7 @@ void *xc_mem_paging_enable(xc_interface *xch, uint32_t do= main_id, uint32_t *port) { return xc_vm_event_enable(xch, domain_id, - XEN_DOMCTL_VM_EVENT_OP_PAGING, + XEN_VM_EVENT_TYPE_PAGING, port); } =20 @@ -49,14 +49,14 @@ int xc_mem_paging_disable(xc_interface *xch, uint32_t d= omain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, - XEN_DOMCTL_VM_EVENT_OP_PAGING); + XEN_VM_EVENT_TYPE_PAGING); } =20 int xc_mem_paging_resume(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_RESUME, - XEN_DOMCTL_VM_EVENT_OP_PAGING); + XEN_VM_EVENT_TYPE_PAGING); } =20 int xc_mem_paging_nominate(xc_interface *xch, uint32_t domain_id, uint64_t= gfn) diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c index d190c29..718fe8b 100644 --- a/tools/libxc/xc_monitor.c +++ b/tools/libxc/xc_monitor.c @@ -35,7 +35,7 @@ void *xc_monitor_enable(xc_interface *xch, uint32_t domai= n_id, uint32_t *port) } =20 buffer =3D xc_vm_event_enable(xch, domain_id, - HVM_PARAM_MONITOR_RING_PFN, + XEN_VM_EVENT_TYPE_MONITOR, port); 
saved_errno =3D errno; if ( xc_domain_unpause(xch, domain_id) ) @@ -53,14 +53,14 @@ int xc_monitor_disable(xc_interface *xch, uint32_t doma= in_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_DISABLE, - XEN_DOMCTL_VM_EVENT_OP_MONITOR); + XEN_VM_EVENT_TYPE_MONITOR); } =20 int xc_monitor_resume(xc_interface *xch, uint32_t domain_id) { return xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_RESUME, - XEN_DOMCTL_VM_EVENT_OP_MONITOR); + XEN_VM_EVENT_TYPE_MONITOR); } =20 int xc_monitor_get_capabilities(xc_interface *xch, uint32_t domain_id, diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h index 663e78b..482451c 100644 --- a/tools/libxc/xc_private.h +++ b/tools/libxc/xc_private.h @@ -412,12 +412,12 @@ int xc_ffs64(uint64_t x); * vm_event operations. Internal use only. */ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned in= t op, - unsigned int mode); + unsigned int type); /* - * Enables vm_event and returns the mapped ring page indicated by param. - * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN + * Enables vm_event and returns the mapped ring page indicated by type. 
+ * type can be XEN_VM_EVENT_TYPE_(PAGING/MONITOR/SHARING) */ -void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param, +void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type, uint32_t *port); =20 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...); diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c index ea10366..3b1018b 100644 --- a/tools/libxc/xc_vm_event.c +++ b/tools/libxc/xc_vm_event.c @@ -23,29 +23,54 @@ #include "xc_private.h" =20 int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned in= t op, - unsigned int mode) + unsigned int type) { DECLARE_DOMCTL; =20 domctl.cmd =3D XEN_DOMCTL_vm_event_op; domctl.domain =3D domain_id; domctl.u.vm_event_op.op =3D op; - domctl.u.vm_event_op.mode =3D mode; + domctl.u.vm_event_op.type =3D type; =20 return do_domctl(xch, &domctl); } =20 -void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param, +static int xc_vm_event_ring_pfn_param(int type, int *param) +{ + if ( !param ) + return -EINVAL; + + switch ( type ) + { + case XEN_VM_EVENT_TYPE_PAGING: + *param =3D HVM_PARAM_PAGING_RING_PFN; + break; + + case XEN_VM_EVENT_TYPE_MONITOR: + *param =3D HVM_PARAM_MONITOR_RING_PFN; + break; + + case XEN_VM_EVENT_TYPE_SHARING: + *param =3D HVM_PARAM_SHARING_RING_PFN; + break; + + default: + return -EINVAL; + } + + return 0; +} + +void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type, uint32_t *port) { void *ring_page =3D NULL; uint64_t pfn; xen_pfn_t ring_pfn, mmap_pfn; - unsigned int op, mode; - int rc; + int param, rc; DECLARE_DOMCTL; =20 - if ( !port ) + if ( !port || xc_vm_event_ring_pfn_param(type, ¶m) !=3D 0 ) { errno =3D EINVAL; return NULL; @@ -83,37 +108,10 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t d= omain_id, int param, goto out; } =20 - switch ( param ) - { - case HVM_PARAM_PAGING_RING_PFN: - op =3D XEN_VM_EVENT_ENABLE; - mode =3D XEN_DOMCTL_VM_EVENT_OP_PAGING; - break; - - case 
HVM_PARAM_MONITOR_RING_PFN: - op =3D XEN_VM_EVENT_ENABLE; - mode =3D XEN_DOMCTL_VM_EVENT_OP_MONITOR; - break; - - case HVM_PARAM_SHARING_RING_PFN: - op =3D XEN_VM_EVENT_ENABLE; - mode =3D XEN_DOMCTL_VM_EVENT_OP_SHARING; - break; - - /* - * This is for the outside chance that the HVM_PARAM is valid but is i= nvalid - * as far as vm_event goes. - */ - default: - errno =3D EINVAL; - rc =3D -1; - goto out; - } - domctl.cmd =3D XEN_DOMCTL_vm_event_op; domctl.domain =3D domain_id; - domctl.u.vm_event_op.op =3D op; - domctl.u.vm_event_op.mode =3D mode; + domctl.u.vm_event_op.op =3D XEN_VM_EVENT_ENABLE; + domctl.u.vm_event_op.type =3D type; =20 rc =3D do_domctl(xch, &domctl); if ( rc !=3D 0 ) @@ -148,7 +146,7 @@ int xc_vm_event_get_version(xc_interface *xch) domctl.cmd =3D XEN_DOMCTL_vm_event_op; domctl.domain =3D DOMID_INVALID; domctl.u.vm_event_op.op =3D XEN_VM_EVENT_GET_VERSION; - domctl.u.vm_event_op.mode =3D XEN_DOMCTL_VM_EVENT_OP_MONITOR; + domctl.u.vm_event_op.type =3D XEN_VM_EVENT_TYPE_MONITOR; =20 rc =3D do_domctl(xch, &domctl); if ( !rc ) diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index 6833c21..d7c5f22 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -371,7 +371,7 @@ static int vm_event_resume(struct domain *d, struct vm_= event_domain *ved) vm_event_response_t rsp; =20 /* - * vm_event_resume() runs in either XEN_DOMCTL_VM_EVENT_OP_*, or + * vm_event_resume() runs in either XEN_VM_EVENT_* domctls, or * EVTCHN_send context from the introspection consumer. Both contexts * are guaranteed not to be the subject of vm_event responses. 
* While we could ASSERT(v !=3D current) for each VCPU in d in the loop @@ -597,7 +597,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl= _vm_event_op *vec, if ( unlikely(d =3D=3D NULL) ) return -ESRCH; =20 - rc =3D xsm_vm_event_control(XSM_PRIV, d, vec->mode, vec->op); + rc =3D xsm_vm_event_control(XSM_PRIV, d, vec->type, vec->op); if ( rc ) return rc; =20 @@ -624,10 +624,10 @@ int vm_event_domctl(struct domain *d, struct xen_domc= tl_vm_event_op *vec, =20 rc =3D -ENOSYS; =20 - switch ( vec->mode ) + switch ( vec->type ) { #ifdef CONFIG_HAS_MEM_PAGING - case XEN_DOMCTL_VM_EVENT_OP_PAGING: + case XEN_VM_EVENT_TYPE_PAGING: { rc =3D -EINVAL; =20 @@ -683,7 +683,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl= _vm_event_op *vec, break; #endif =20 - case XEN_DOMCTL_VM_EVENT_OP_MONITOR: + case XEN_VM_EVENT_TYPE_MONITOR: { rc =3D -EINVAL; =20 @@ -721,7 +721,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl= _vm_event_op *vec, break; =20 #ifdef CONFIG_HAS_MEM_SHARING - case XEN_DOMCTL_VM_EVENT_OP_SHARING: + case XEN_VM_EVENT_TYPE_SHARING: { rc =3D -EINVAL; =20 diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h index 19486d5..19281fa 100644 --- a/xen/include/public/domctl.h +++ b/xen/include/public/domctl.h @@ -38,7 +38,7 @@ #include "hvm/save.h" #include "memory.h" =20 -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000011 +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000012 =20 /* * NB. xen_domctl.domain is an IN/OUT parameter for this operation. @@ -769,80 +769,18 @@ struct xen_domctl_gdbsx_domstatus { * VM event operations */ =20 -/* XEN_DOMCTL_vm_event_op */ - -/* - * There are currently three rings available for VM events: - * sharing, monitor and paging. This hypercall allows one to - * control these rings (enable/disable), as well as to signal - * to the hypervisor to pull responses (resume) from the given - * ring. +/* XEN_DOMCTL_vm_event_op. 
+ * Use for teardown/setup of helper<->hypervisor interface for paging, + * access and sharing. */ #define XEN_VM_EVENT_ENABLE 0 #define XEN_VM_EVENT_DISABLE 1 #define XEN_VM_EVENT_RESUME 2 #define XEN_VM_EVENT_GET_VERSION 3 =20 -/* - * Domain memory paging - * Page memory in and out. - * Domctl interface to set up and tear down the - * pager<->hypervisor interface. Use XENMEM_paging_op* - * to perform per-page operations. - * - * The XEN_VM_EVENT_PAGING_ENABLE domctl returns several - * non-standard error codes to indicate why paging could not be enabled: - * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest - * EMLINK - guest has iommu passthrough enabled - * EXDEV - guest has PoD enabled - * EBUSY - guest has or had paging enabled, ring buffer still active - */ -#define XEN_DOMCTL_VM_EVENT_OP_PAGING 1 - -/* - * Monitor helper. - * - * As with paging, use the domctl for teardown/setup of the - * helper<->hypervisor interface. - * - * The monitor interface can be used to register for various VM events. For - * example, there are HVM hypercalls to set the per-page access permissions - * of every page in a domain. When one of these permissions--independent, - * read, write, and execute--is violated, the VCPU is paused and a memory = event - * is sent with what happened. The memory event handler can then resume the - * VCPU and redo the access with a XEN_VM_EVENT_RESUME option. - * - * See public/vm_event.h for the list of available events that can be - * subscribed to via the monitor interface. - * - * The XEN_VM_EVENT_MONITOR_* domctls returns - * non-standard error codes to indicate why access could not be enabled: - * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest - * EBUSY - guest has or had access enabled, ring buffer still active - * - */ -#define XEN_DOMCTL_VM_EVENT_OP_MONITOR 2 - -/* - * Sharing ENOMEM helper. - * - * As with paging, use the domctl for teardown/setup of the - * helper<->hypervisor interface. 
- * - * If setup, this ring is used to communicate failed allocations - * in the unshare path. XENMEM_sharing_op_resume is used to wake up - * vcpus that could not unshare. - * - * Note that shring can be turned on (as per the domctl below) - * *without* this ring being setup. - */ -#define XEN_DOMCTL_VM_EVENT_OP_SHARING 3 - -/* Use for teardown/setup of helper<->hypervisor interface for paging, - * access and sharing.*/ struct xen_domctl_vm_event_op { - uint32_t op; /* XEN_VM_EVENT_* */ - uint32_t mode; /* XEN_DOMCTL_VM_EVENT_OP_* */ + uint32_t op; /* XEN_VM_EVENT_* */ + uint32_t type; /* XEN_VM_EVENT_TYPE_* */ =20 union { struct { @@ -857,7 +795,10 @@ struct xen_domctl_vm_event_op { * Memory sharing operations */ /* XEN_DOMCTL_mem_sharing_op. - * The CONTROL sub-domctl is used for bringup/teardown. */ + * The CONTROL sub-domctl is used for bringup/teardown. + * Please note that mem sharing can be turned on *without* setting-up the + * correspondin ring + */ #define XEN_DOMCTL_MEM_SHARING_CONTROL 0 =20 struct xen_domctl_mem_sharing_op { @@ -1004,7 +945,7 @@ struct xen_domctl_psr_cmt_op { * Enable/disable monitoring various VM events. * This domctl configures what events will be reported to helper apps * via the ring buffer "MONITOR". The ring has to be first enabled - * with the domctl XEN_DOMCTL_VM_EVENT_OP_MONITOR. + * with XEN_VM_EVENT_ENABLE. * * GET_CAPABILITIES can be used to determine which of these features is * available on a given platform. diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h index 959083d..c48bc21 100644 --- a/xen/include/public/vm_event.h +++ b/xen/include/public/vm_event.h @@ -36,6 +36,37 @@ #include "io/ring.h" =20 /* + * There are currently three types of VM events. + */ + +/* + * Domain memory paging + * + * Page memory in and out. + */ +#define XEN_VM_EVENT_TYPE_PAGING 1 + +/* + * Monitor. + * + * The monitor interface can be used to register for various VM events. 
For + * example, there are HVM hypercalls to set the per-page access permissions + * of every page in a domain. When one of these permissions--independent, + * read, write, and execute--is violated, the VCPU is paused and a memory = event + * is sent with what happened. The memory event handler can then resume the + * VCPU and redo the access with a XEN_VM_EVENT_RESUME option. + */ +#define XEN_VM_EVENT_TYPE_MONITOR 2 + +/* + * Sharing ENOMEM. + * + * Used to communicate failed allocations in the unshare path. + * XENMEM_sharing_op_resume is used to wake up vcpus that could not unshar= e. + */ +#define XEN_VM_EVENT_TYPE_SHARING 3 + +/* * Memory event flags */ =20 --=20 2.7.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Wed May 8 23:10:08 2024
From: Petre Pircalabu To: xen-devel@lists.xenproject.org Date: Thu, 30 May 2019 17:18:17 +0300 Subject: [Xen-devel] [PATCH 3/9] vm_event: Make ‘local’ functions ‘static’
vm_event_get_response, vm_event_resume, and vm_event_mark_and_pause are used only in xen/common/vm_event.c. Signed-off-by: Petre Pircalabu Acked-by: Tamas K Lengyel Reviewed-by: Andrew Cooper --- xen/common/vm_event.c | 6 +++--- xen/include/xen/vm_event.h | 3 --- 2 files changed, 3 insertions(+), 6 deletions(-) diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index d7c5f22..3505589 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -252,7 +252,7 @@ static inline void vm_event_release_slot(struct domain = *d, * vm_event_mark_and_pause() tags vcpu and put it to sleep. * The vcpu will resume execution in vm_event_wake_blocked(). */ -void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain *ved) +static void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain= *ved) { if ( !test_and_set_bit(ved->pause_flag, &v->pause_flags) ) { @@ -324,8 +324,8 @@ void vm_event_put_request(struct domain *d, notify_via_xen_event_channel(d, ved->xen_port); } =20 -int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, - vm_event_response_t *rsp) +static int vm_event_get_response(struct domain *d, struct vm_event_domain = *ved, + vm_event_response_t *rsp) { vm_event_front_ring_t *front_ring; RING_IDX rsp_cons; diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h index 53af2d5..7f6fb6d 100644 --- a/xen/include/xen/vm_event.h +++ b/xen/include/xen/vm_event.h @@ -64,9 +64,6 @@ void vm_event_cancel_slot(struct domain *d, struct vm_eve= nt_domain *ved); void vm_event_put_request(struct domain *d, struct vm_event_domain *ved, vm_event_request_t *req); =20 -int vm_event_get_response(struct domain *d, struct vm_event_domain
*ved, - vm_event_response_t *rsp); - int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec, XEN_GUEST_HANDLE_PARAM(void) u_domctl); =20 --=20 2.7.4 From nobody Wed May 8 23:10:08 2024
From: Petre Pircalabu To: xen-devel@lists.xenproject.org Date: Thu, 30 May 2019 17:18:18 +0300 Subject: [Xen-devel] [PATCH 4/9] vm_event: Remove "ring" suffix from vm_event_check_ring Decouple implementation from interface to allow vm_event_check to be used regardless of the vm_event
underlying implementation. Signed-off-by: Petre Pircalabu Acked-by: Andrew Cooper Acked-by: Tamas K Lengyel --- xen/arch/arm/mem_access.c | 2 +- xen/arch/x86/mm/mem_access.c | 4 ++-- xen/arch/x86/mm/mem_paging.c | 2 +- xen/common/mem_access.c | 2 +- xen/common/vm_event.c | 24 ++++++++++++------------ xen/drivers/passthrough/pci.c | 2 +- xen/include/xen/vm_event.h | 4 ++-- 7 files changed, 20 insertions(+), 20 deletions(-) diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c index 3e36202..d54760b 100644 --- a/xen/arch/arm/mem_access.c +++ b/xen/arch/arm/mem_access.c @@ -290,7 +290,7 @@ bool p2m_mem_access_check(paddr_t gpa, vaddr_t gla, con= st struct npfec npfec) } =20 /* Otherwise, check if there is a vm_event monitor subscriber */ - if ( !vm_event_check_ring(v->domain->vm_event_monitor) ) + if ( !vm_event_check(v->domain->vm_event_monitor) ) { /* No listener */ if ( p2m->access_required ) diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c index 0144f92..640352e 100644 --- a/xen/arch/x86/mm/mem_access.c +++ b/xen/arch/x86/mm/mem_access.c @@ -182,7 +182,7 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gl= a, gfn_unlock(p2m, gfn, 0); =20 /* Otherwise, check if there is a memory event listener, and send the = message along */ - if ( !vm_event_check_ring(d->vm_event_monitor) || !req_ptr ) + if ( !vm_event_check(d->vm_event_monitor) || !req_ptr ) { /* No listener */ if ( p2m->access_required ) @@ -210,7 +210,7 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gl= a, return true; } } - if ( vm_event_check_ring(d->vm_event_monitor) && + if ( vm_event_check(d->vm_event_monitor) && d->arch.monitor.inguest_pagefault_disabled && npfec.kind !=3D npfec_kind_with_gla ) /* don't send a mem_event */ { diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c index 54a94fa..dc2a59a 100644 --- a/xen/arch/x86/mm/mem_paging.c +++ b/xen/arch/x86/mm/mem_paging.c @@ -44,7 +44,7 @@ int 
mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_pagin= g_op_t) arg) goto out; =20 rc =3D -ENODEV; - if ( unlikely(!vm_event_check_ring(d->vm_event_paging)) ) + if ( unlikely(!vm_event_check(d->vm_event_paging)) ) goto out; =20 switch( mpo.op ) diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c index 010e6f8..51e4e2b 100644 --- a/xen/common/mem_access.c +++ b/xen/common/mem_access.c @@ -52,7 +52,7 @@ int mem_access_memop(unsigned long cmd, goto out; =20 rc =3D -ENODEV; - if ( unlikely(!vm_event_check_ring(d->vm_event_monitor)) ) + if ( unlikely(!vm_event_check(d->vm_event_monitor)) ) goto out; =20 switch ( mao.op ) diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index 3505589..1dd3e48 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -196,7 +196,7 @@ void vm_event_wake(struct domain *d, struct vm_event_do= main *ved) =20 static int vm_event_disable(struct domain *d, struct vm_event_domain **ved) { - if ( vm_event_check_ring(*ved) ) + if ( vm_event_check(*ved) ) { struct vcpu *v; =20 @@ -277,7 +277,7 @@ void vm_event_put_request(struct domain *d, RING_IDX req_prod; struct vcpu *curr =3D current; =20 - if( !vm_event_check_ring(ved)) + if( !vm_event_check(ved)) return; =20 if ( curr->domain !=3D d ) @@ -380,7 +380,7 @@ static int vm_event_resume(struct domain *d, struct vm_= event_domain *ved) */ ASSERT(d !=3D current->domain); =20 - if ( unlikely(!vm_event_check_ring(ved)) ) + if ( unlikely(!vm_event_check(ved)) ) return -ENODEV; =20 /* Pull all responses off the ring. 
*/ @@ -452,7 +452,7 @@ static int vm_event_resume(struct domain *d, struct vm_= event_domain *ved) =20 void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved) { - if( !vm_event_check_ring(ved) ) + if( !vm_event_check(ved) ) return; =20 vm_event_ring_lock(ved); @@ -501,7 +501,7 @@ static int vm_event_wait_slot(struct vm_event_domain *v= ed) return rc; } =20 -bool vm_event_check_ring(struct vm_event_domain *ved) +bool vm_event_check(struct vm_event_domain *ved) { return (ved && ved->ring_page); } @@ -521,7 +521,7 @@ bool vm_event_check_ring(struct vm_event_domain *ved) int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved, bool allow_sleep) { - if ( !vm_event_check_ring(ved) ) + if ( !vm_event_check(ved) ) return -EOPNOTSUPP; =20 if ( (current->domain =3D=3D d) && allow_sleep ) @@ -556,7 +556,7 @@ static void mem_sharing_notification(struct vcpu *v, un= signed int port) void vm_event_cleanup(struct domain *d) { #ifdef CONFIG_HAS_MEM_PAGING - if ( vm_event_check_ring(d->vm_event_paging) ) + if ( vm_event_check(d->vm_event_paging) ) { /* Destroying the wait queue head means waking up all * queued vcpus. 
This will drain the list, allowing @@ -569,13 +569,13 @@ void vm_event_cleanup(struct domain *d) (void)vm_event_disable(d, &d->vm_event_paging); } #endif - if ( vm_event_check_ring(d->vm_event_monitor) ) + if ( vm_event_check(d->vm_event_monitor) ) { destroy_waitqueue_head(&d->vm_event_monitor->wq); (void)vm_event_disable(d, &d->vm_event_monitor); } #ifdef CONFIG_HAS_MEM_SHARING - if ( vm_event_check_ring(d->vm_event_share) ) + if ( vm_event_check(d->vm_event_share) ) { destroy_waitqueue_head(&d->vm_event_share->wq); (void)vm_event_disable(d, &d->vm_event_share); @@ -663,7 +663,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl= _vm_event_op *vec, break; =20 case XEN_VM_EVENT_DISABLE: - if ( vm_event_check_ring(d->vm_event_paging) ) + if ( vm_event_check(d->vm_event_paging) ) { domain_pause(d); rc =3D vm_event_disable(d, &d->vm_event_paging); @@ -700,7 +700,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl= _vm_event_op *vec, break; =20 case XEN_VM_EVENT_DISABLE: - if ( vm_event_check_ring(d->vm_event_monitor) ) + if ( vm_event_check(d->vm_event_monitor) ) { domain_pause(d); rc =3D vm_event_disable(d, &d->vm_event_monitor); @@ -745,7 +745,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl= _vm_event_op *vec, break; =20 case XEN_VM_EVENT_DISABLE: - if ( vm_event_check_ring(d->vm_event_share) ) + if ( vm_event_check(d->vm_event_share) ) { domain_pause(d); rc =3D vm_event_disable(d, &d->vm_event_share); diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c index 061b201..a7d4d9e 100644 --- a/xen/drivers/passthrough/pci.c +++ b/xen/drivers/passthrough/pci.c @@ -1453,7 +1453,7 @@ static int assign_device(struct domain *d, u16 seg, u= 8 bus, u8 devfn, u32 flag) /* Prevent device assign if mem paging or mem sharing have been=20 * enabled for this domain */ if ( unlikely(d->arch.hvm.mem_sharing_enabled || - vm_event_check_ring(d->vm_event_paging) || + vm_event_check(d->vm_event_paging) || 
p2m_get_hostp2m(d)->global_logdirty) ) return -EXDEV; =20 diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h index 7f6fb6d..0a05e5b 100644 --- a/xen/include/xen/vm_event.h +++ b/xen/include/xen/vm_event.h @@ -29,8 +29,8 @@ /* Clean up on domain destruction */ void vm_event_cleanup(struct domain *d); =20 -/* Returns whether a ring has been set up */ -bool vm_event_check_ring(struct vm_event_domain *ved); +/* Returns whether the VM event domain has been set up */ +bool vm_event_check(struct vm_event_domain *ved); =20 /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no * available space and the caller is a foreign domain. If the guest itself --=20 2.7.4 From nobody Wed May 8 23:10:08 2024
From: Petre Pircalabu To: xen-devel@lists.xenproject.org Date: Thu, 30 May 2019 17:18:19 +0300 Subject: [Xen-devel] [PATCH 5/9] vm_event: Simplify vm_event interface
The domain reference can be part of the vm_event_domain structure because for every call to a vm_event interface function both the latter and its corresponding domain are passed as parameters. Affected functions: - __vm_event_claim_slot / vm_event_claim_slot / vm_event_claim_slot_nosleep - vm_event_cancel_slot - vm_event_put_request Signed-off-by: Petre Pircalabu --- xen/arch/x86/mm/mem_sharing.c | 5 ++--- xen/arch/x86/mm/p2m.c | 11 +++++------ xen/common/monitor.c | 4 ++-- xen/common/vm_event.c | 37 ++++++++++++++++++------------------- xen/include/xen/sched.h | 2 ++ xen/include/xen/vm_event.h | 17 +++++++---------- 6 files changed, 36 insertions(+), 40 deletions(-) diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c index f16a3f5..9d80389 100644 --- a/xen/arch/x86/mm/mem_sharing.c +++ b/xen/arch/x86/mm/mem_sharing.c @@ -557,8 +557,7 @@ int mem_sharing_notify_enomem(struct domain *d, unsigne= d long gfn, .u.mem_sharing.p2mt =3D p2m_ram_shared }; =20 - if ( (rc =3D __vm_event_claim_slot(d,=20 - d->vm_event_share, allow_sleep)) < 0 ) + if ( (rc =3D __vm_event_claim_slot(d->vm_event_share, allow_sleep)) < = 0 ) return rc; =20 if ( v->domain =3D=3D d ) @@ -567,7 +566,7 @@ int mem_sharing_notify_enomem(struct domain *d, unsigne= d long gfn, vm_event_vcpu_pause(v); } =20 - vm_event_put_request(d, d->vm_event_share, &req); + vm_event_put_request(d->vm_event_share, &req); =20 return 0; } diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index
4c99548..625fc9b 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -1652,7 +1652,7 @@ void p2m_mem_paging_drop_page(struct domain *d, unsig= ned long gfn, * correctness of the guest execution at this point. If this is the o= nly * page that happens to be paged-out, we'll be okay.. but it's likely= the * guest will crash shortly anyways. */ - int rc =3D vm_event_claim_slot(d, d->vm_event_paging); + int rc =3D vm_event_claim_slot(d->vm_event_paging); if ( rc < 0 ) return; =20 @@ -1666,7 +1666,7 @@ void p2m_mem_paging_drop_page(struct domain *d, unsig= ned long gfn, /* Evict will fail now, tag this request for pager */ req.u.mem_paging.flags |=3D MEM_PAGING_EVICT_FAIL; =20 - vm_event_put_request(d, d->vm_event_paging, &req); + vm_event_put_request(d->vm_event_paging, &req); } =20 /** @@ -1704,8 +1704,7 @@ void p2m_mem_paging_populate(struct domain *d, unsign= ed long gfn_l) struct p2m_domain *p2m =3D p2m_get_hostp2m(d); =20 /* We're paging. There should be a ring */ - int rc =3D vm_event_claim_slot(d, d->vm_event_paging); - + int rc =3D vm_event_claim_slot(d->vm_event_paging); if ( rc =3D=3D -EOPNOTSUPP ) { gdprintk(XENLOG_ERR, "Domain %hu paging gfn %lx yet no ring " @@ -1746,7 +1745,7 @@ void p2m_mem_paging_populate(struct domain *d, unsign= ed long gfn_l) { /* gfn is already on its way back and vcpu is not paused */ out_cancel: - vm_event_cancel_slot(d, d->vm_event_paging); + vm_event_cancel_slot(d->vm_event_paging); return; } =20 @@ -1754,7 +1753,7 @@ void p2m_mem_paging_populate(struct domain *d, unsign= ed long gfn_l) req.u.mem_paging.p2mt =3D p2mt; req.vcpu_id =3D v->vcpu_id; =20 - vm_event_put_request(d, d->vm_event_paging, &req); + vm_event_put_request(d->vm_event_paging, &req); } =20 /** diff --git a/xen/common/monitor.c b/xen/common/monitor.c index d5c9ff1..b8d33c4 100644 --- a/xen/common/monitor.c +++ b/xen/common/monitor.c @@ -93,7 +93,7 @@ int monitor_traps(struct vcpu *v, bool sync, vm_event_req= uest_t *req) int rc; struct domain *d =3D 
v->domain; =20 - rc =3D vm_event_claim_slot(d, d->vm_event_monitor); + rc =3D vm_event_claim_slot(d->vm_event_monitor); switch ( rc ) { case 0: @@ -125,7 +125,7 @@ int monitor_traps(struct vcpu *v, bool sync, vm_event_r= equest_t *req) } =20 vm_event_fill_regs(req); - vm_event_put_request(d, d->vm_event_monitor, req); + vm_event_put_request(d->vm_event_monitor, req); =20 return rc; } diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c index 1dd3e48..3e87bbc 100644 --- a/xen/common/vm_event.c +++ b/xen/common/vm_event.c @@ -131,10 +131,11 @@ static unsigned int vm_event_ring_available(struct vm= _event_domain *ved) * but need to be resumed where the ring is capable of processing at least * one event from them. */ -static void vm_event_wake_blocked(struct domain *d, struct vm_event_domain= *ved) +static void vm_event_wake_blocked(struct vm_event_domain *ved) { struct vcpu *v; unsigned int avail_req =3D vm_event_ring_available(ved); + struct domain *d =3D ved->d; =20 if ( avail_req =3D=3D 0 || ved->blocked =3D=3D 0 ) return; @@ -171,7 +172,7 @@ static void vm_event_wake_blocked(struct domain *d, str= uct vm_event_domain *ved) * was unable to do so, it is queued on a wait queue. These are woken as * needed, and take precedence over the blocked vCPUs. */ -static void vm_event_wake_queued(struct domain *d, struct vm_event_domain = *ved) +static void vm_event_wake_queued(struct vm_event_domain *ved) { unsigned int avail_req =3D vm_event_ring_available(ved); =20 @@ -186,12 +187,12 @@ static void vm_event_wake_queued(struct domain *d, st= ruct vm_event_domain *ved) * call vm_event_wake() again, ensuring that any blocked vCPUs will get * unpaused once all the queued vCPUs have made it through. 
*/ -void vm_event_wake(struct domain *d, struct vm_event_domain *ved) +void vm_event_wake(struct vm_event_domain *ved) { if (!list_empty(&ved->wq.list)) - vm_event_wake_queued(d, ved); + vm_event_wake_queued(ved); else - vm_event_wake_blocked(d, ved); + vm_event_wake_blocked(ved); } =20 static int vm_event_disable(struct domain *d, struct vm_event_domain **ved) @@ -235,17 +236,16 @@ static int vm_event_disable(struct domain *d, struct = vm_event_domain **ved) return 0; } =20 -static inline void vm_event_release_slot(struct domain *d, - struct vm_event_domain *ved) +static inline void vm_event_release_slot(struct vm_event_domain *ved) { /* Update the accounting */ - if ( current->domain =3D=3D d ) + if ( current->domain =3D=3D ved->d ) ved->target_producers--; else ved->foreign_producers--; =20 /* Kick any waiters */ - vm_event_wake(d, ved); + vm_event_wake(ved); } =20 /* @@ -267,8 +267,7 @@ static void vm_event_mark_and_pause(struct vcpu *v, str= uct vm_event_domain *ved) * overly full and its continued execution would cause stalling and excess= ive * waiting. The vCPU will be automatically unpaused when the ring clears. */ -void vm_event_put_request(struct domain *d, - struct vm_event_domain *ved, +void vm_event_put_request(struct vm_event_domain *ved, vm_event_request_t *req) { vm_event_front_ring_t *front_ring; @@ -276,6 +275,7 @@ void vm_event_put_request(struct domain *d, unsigned int avail_req; RING_IDX req_prod; struct vcpu *curr =3D current; + struct domain *d =3D ved->d; =20 if( !vm_event_check(ved)) return; @@ -309,7 +309,7 @@ void vm_event_put_request(struct domain *d, RING_PUSH_REQUESTS(front_ring); =20 /* We've actually *used* our reservation, so release the slot. */ - vm_event_release_slot(d, ved); + vm_event_release_slot(ved); =20 /* Give this vCPU a black eye if necessary, on the way out. 
* See the comments above wake_blocked() for more information @@ -351,7 +351,7 @@ static int vm_event_get_response(struct domain *d, stru= ct vm_event_domain *ved, =20 /* Kick any waiters -- since we've just consumed an event, * there may be additional space available in the ring. */ - vm_event_wake(d, ved); + vm_event_wake(ved); =20 vm_event_ring_unlock(ved); =20 @@ -450,13 +450,13 @@ static int vm_event_resume(struct domain *d, struct v= m_event_domain *ved) return 0; } =20 -void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved) +void vm_event_cancel_slot(struct vm_event_domain *ved) { if( !vm_event_check(ved) ) return; =20 vm_event_ring_lock(ved); - vm_event_release_slot(d, ved); + vm_event_release_slot(ved); vm_event_ring_unlock(ved); } =20 @@ -518,16 +518,15 @@ bool vm_event_check(struct vm_event_domain *ved) * 0: a spot has been reserved * */ -int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved, - bool allow_sleep) +int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep) { if ( !vm_event_check(ved) ) return -EOPNOTSUPP; =20 - if ( (current->domain =3D=3D d) && allow_sleep ) + if ( (current->domain =3D=3D ved->d) && allow_sleep ) return vm_event_wait_slot(ved); else - return vm_event_grab_slot(ved, (current->domain !=3D d)); + return vm_event_grab_slot(ved, (current->domain !=3D ved->d)); } =20 #ifdef CONFIG_HAS_MEM_PAGING diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 2201fac..7dee022 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -282,6 +282,8 @@ struct vcpu /* VM event */ struct vm_event_domain { + /* Domain reference */ + struct domain *d; /* ring lock */ spinlock_t ring_lock; /* The ring has 64 entries */ diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h index 0a05e5b..a5c82d6 100644 --- a/xen/include/xen/vm_event.h +++ b/xen/include/xen/vm_event.h @@ -45,23 +45,20 @@ bool vm_event_check(struct vm_event_domain *ved); * cancel_slot(), both 
of which are guaranteed to * succeed. */ -int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved, - bool allow_sleep); -static inline int vm_event_claim_slot(struct domain *d, - struct vm_event_domain *ved) +int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep); +static inline int vm_event_claim_slot(struct vm_event_domain *ved) { - return __vm_event_claim_slot(d, ved, true); + return __vm_event_claim_slot(ved, true); } =20 -static inline int vm_event_claim_slot_nosleep(struct domain *d, - struct vm_event_domain *ved) +static inline int vm_event_claim_slot_nosleep(struct vm_event_domain *ved) { - return __vm_event_claim_slot(d, ved, false); + return __vm_event_claim_slot(ved, false); } =20 -void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved); +void vm_event_cancel_slot(struct vm_event_domain *ved); =20 -void vm_event_put_request(struct domain *d, struct vm_event_domain *ved, +void vm_event_put_request(struct vm_event_domain *ved, vm_event_request_t *req); =20 int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec, XEN_GUEST_HANDLE_PARAM(void) u_domctl); --=20 2.7.4 From nobody Wed May 8 23:10:08 2024
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Thu, 30 May 2019 17:18:20 +0300
Subject: [Xen-devel] [PATCH 6/9] vm_event: Move struct vm_event_domain to vm_event.c
Cc: Petre Pircalabu, Tamas K Lengyel, Wei Liu, Razvan Cojocaru, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Stefano Stabellini, Jan Beulich

The vm_event_domain members are not accessed outside vm_event.c, so it is
better to hide the implementation details.
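The change above is an instance of the classic opaque-struct idiom: the shared header keeps only a forward declaration, so consumers cannot reach into the layout and the definition can change without recompiling them. A minimal standalone sketch of the idiom (names here are illustrative, not Xen's):

```c
/* Opaque-struct idiom: the "header" part exposes only a forward
 * declaration plus accessors; the full definition lives in one .c file.
 * All names are hypothetical examples, not Xen code. */
#include <stdlib.h>

/* --- what would live in the shared header --- */
struct event_ring;                       /* forward declaration only */
struct event_ring *ring_create(int entries);
int ring_entries(const struct event_ring *r);
void ring_destroy(struct event_ring *r);

/* --- what would live in the single implementation file --- */
struct event_ring {
    int entries;                         /* hidden from consumers */
};

struct event_ring *ring_create(int entries)
{
    struct event_ring *r = calloc(1, sizeof(*r));

    if ( r )
        r->entries = entries;
    return r;
}

int ring_entries(const struct event_ring *r) { return r->entries; }

void ring_destroy(struct event_ring *r) { free(r); }
```

Callers can hold and pass `struct event_ring *` freely, but any attempt to dereference its members outside the implementation file fails to compile, which is exactly the property the patch wants for `struct vm_event_domain`.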
Signed-off-by: Petre Pircalabu
Acked-by: Andrew Cooper
Acked-by: Tamas K Lengyel
---
 xen/common/vm_event.c   | 27 +++++++++++++++++++++++++++
 xen/include/xen/sched.h | 27 +--------------------------
 2 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 3e87bbc..02c5853 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -39,6 +39,33 @@
 #define vm_event_ring_lock(_ved)       spin_lock(&(_ved)->ring_lock)
 #define vm_event_ring_unlock(_ved)     spin_unlock(&(_ved)->ring_lock)

+/* VM event */
+struct vm_event_domain
+{
+    /* Domain reference */
+    struct domain *d;
+    /* ring lock */
+    spinlock_t ring_lock;
+    /* The ring has 64 entries */
+    unsigned char foreign_producers;
+    unsigned char target_producers;
+    /* shared ring page */
+    void *ring_page;
+    struct page_info *ring_pg_struct;
+    /* front-end ring */
+    vm_event_front_ring_t front_ring;
+    /* event channel port (vcpu0 only) */
+    int xen_port;
+    /* vm_event bit for vcpu->pause_flags */
+    int pause_flag;
+    /* list of vcpus waiting for room in the ring */
+    struct waitqueue_head wq;
+    /* the number of vCPUs blocked */
+    unsigned int blocked;
+    /* The last vcpu woken up */
+    unsigned int last_vcpu_wake_up;
+};
+
 static int vm_event_enable(
     struct domain *d,
     struct xen_domctl_vm_event_op *vec,
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 7dee022..207fbc4 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -279,32 +279,7 @@ struct vcpu
 #define domain_lock(d) spin_lock_recursive(&(d)->domain_lock)
 #define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock)

-/* VM event */
-struct vm_event_domain
-{
-    /* Domain reference */
-    struct domain *d;
-    /* ring lock */
-    spinlock_t ring_lock;
-    /* The ring has 64 entries */
-    unsigned char foreign_producers;
-    unsigned char target_producers;
-    /* shared ring page */
-    void *ring_page;
-    struct page_info *ring_pg_struct;
-    /* front-end ring */
-    vm_event_front_ring_t front_ring;
-    /* event channel port (vcpu0 only) */
-    int xen_port;
-    /* vm_event bit for vcpu->pause_flags */
-    int pause_flag;
-    /* list of vcpus waiting for room in the ring */
-    struct waitqueue_head wq;
-    /* the number of vCPUs blocked */
-    unsigned int blocked;
-    /* The last vcpu woken up */
-    unsigned int last_vcpu_wake_up;
-};
+struct vm_event_domain;

 struct evtchn_port_ops;

--
2.7.4
From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Thu, 30 May 2019 17:18:21 +0300
Subject: [Xen-devel] [PATCH 7/9] vm_event: Decouple implementation details from interface.
Cc: Petre Pircalabu, Tamas K Lengyel, Razvan Cojocaru

To accommodate a second implementation of the vm_event subsystem, the
current one (ring) should be decoupled from the xen/vm_event.h interface.

Signed-off-by: Petre Pircalabu
---
 xen/common/vm_event.c      | 407 ++++++++++++++++++++++-------------------
 xen/include/xen/vm_event.h |  56 ++++++-
 2 files changed, 252 insertions(+), 211 deletions(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 02c5853..1d85f3e 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -35,17 +35,13 @@
 #define xen_rmb() smp_rmb()
 #define xen_wmb() smp_wmb()

-#define vm_event_ring_lock_init(_ved) spin_lock_init(&(_ved)->ring_lock)
-#define vm_event_ring_lock(_ved) spin_lock(&(_ved)->ring_lock)
-#define vm_event_ring_unlock(_ved) spin_unlock(&(_ved)->ring_lock)
+#define to_ring(_ved) container_of((_ved), struct vm_event_ring_domain, ved)

-/* VM event */
-struct vm_event_domain
+/* VM event ring implementation */
+struct vm_event_ring_domain
 {
-    /* Domain reference */
-    struct domain *d;
-    /* ring lock */
-    spinlock_t ring_lock;
+    /* VM event domain */
+    struct vm_event_domain ved;
     /* The ring has 64 entries */
     unsigned char foreign_producers;
     unsigned char target_producers;
@@ -56,8 +52,6 @@ struct vm_event_domain
     vm_event_front_ring_t front_ring;
     /* event channel port (vcpu0 only) */
     int xen_port;
-    /* vm_event bit for vcpu->pause_flags */
-    int pause_flag;
     /* list of vcpus waiting for room in the ring */
     struct waitqueue_head wq;
     /* the number of vCPUs blocked */
@@ -66,48 +60,54 @@ struct vm_event_domain
     unsigned int last_vcpu_wake_up;
 };
-static int vm_event_enable(
+static const struct vm_event_ops vm_event_ring_ops;
+
+static int vm_event_ring_enable(
     struct domain *d,
     struct xen_domctl_vm_event_op *vec,
-    struct vm_event_domain **ved,
+    struct vm_event_domain **_ved,
     int pause_flag,
     int param,
     xen_event_channel_notification_t notification_fn)
 {
     int rc;
     unsigned long ring_gfn = d->arch.hvm.params[param];
+    struct vm_event_ring_domain *impl;

-    if ( !*ved )
-        *ved = xzalloc(struct vm_event_domain);
-    if ( !*ved )
+    impl = (*_ved) ? to_ring(*_ved) :
+           xzalloc(struct vm_event_ring_domain);
+    if ( !impl )
         return -ENOMEM;

     /* Only one helper at a time. If the helper crashed,
      * the ring is in an undefined state and so is the guest.
      */
-    if ( (*ved)->ring_page )
-        return -EBUSY;;
+    if ( impl->ring_page )
+        return -EBUSY;

     /* The parameter defaults to zero, and it should be
      * set to something */
     if ( ring_gfn == 0 )
         return -EOPNOTSUPP;

-    vm_event_ring_lock_init(*ved);
-    vm_event_ring_lock(*ved);
+    spin_lock_init(&impl->ved.lock);
+    spin_lock(&impl->ved.lock);

     rc = vm_event_init_domain(d);

     if ( rc < 0 )
         goto err;

-    rc = prepare_ring_for_helper(d, ring_gfn, &(*ved)->ring_pg_struct,
-                                 &(*ved)->ring_page);
+    impl->ved.d = d;
+    impl->ved.ops = &vm_event_ring_ops;
+
+    rc = prepare_ring_for_helper(d, ring_gfn, &impl->ring_pg_struct,
+                                 &impl->ring_page);
     if ( rc < 0 )
         goto err;

     /* Set the number of currently blocked vCPUs to 0. */
-    (*ved)->blocked = 0;
+    impl->blocked = 0;

     /* Allocate event channel */
     rc = alloc_unbound_xen_event_channel(d, 0, current->domain->domain_id,
@@ -115,37 +115,37 @@ static int vm_event_enable(
     if ( rc < 0 )
         goto err;

-    (*ved)->xen_port = vec->u.enable.port = rc;
+    impl->xen_port = vec->u.enable.port = rc;

     /* Prepare ring buffer */
-    FRONT_RING_INIT(&(*ved)->front_ring,
-                    (vm_event_sring_t *)(*ved)->ring_page,
+    FRONT_RING_INIT(&impl->front_ring,
+                    (vm_event_sring_t *)impl->ring_page,
                     PAGE_SIZE);

     /* Save the pause flag for this particular ring. */
-    (*ved)->pause_flag = pause_flag;
+    impl->ved.pause_flag = pause_flag;

     /* Initialize the last-chance wait queue. */
-    init_waitqueue_head(&(*ved)->wq);
+    init_waitqueue_head(&impl->wq);

-    vm_event_ring_unlock(*ved);
+    spin_unlock(&impl->ved.lock);
+    *_ved = &impl->ved;
     return 0;

 err:
-    destroy_ring_for_helper(&(*ved)->ring_page,
-                            (*ved)->ring_pg_struct);
-    vm_event_ring_unlock(*ved);
-    xfree(*ved);
-    *ved = NULL;
+    destroy_ring_for_helper(&impl->ring_page,
+                            impl->ring_pg_struct);
+    spin_unlock(&impl->ved.lock);
+    XFREE(impl);

     return rc;
 }

-static unsigned int vm_event_ring_available(struct vm_event_domain *ved)
+static unsigned int vm_event_ring_available(struct vm_event_ring_domain *impl)
 {
-    int avail_req = RING_FREE_REQUESTS(&ved->front_ring);
-    avail_req -= ved->target_producers;
-    avail_req -= ved->foreign_producers;
+    int avail_req = RING_FREE_REQUESTS(&impl->front_ring);
+    avail_req -= impl->target_producers;
+    avail_req -= impl->foreign_producers;

     BUG_ON(avail_req < 0);

@@ -153,42 +153,42 @@ static unsigned int vm_event_ring_available(struct vm_event_domain *ved)
 }

 /*
- * vm_event_wake_blocked() will wakeup vcpus waiting for room in the
+ * vm_event_ring_wake_blocked() will wakeup vcpus waiting for room in the
  * ring. These vCPUs were paused on their way out after placing an event,
  * but need to be resumed where the ring is capable of processing at least
  * one event from them.
  */
-static void vm_event_wake_blocked(struct vm_event_domain *ved)
+static void vm_event_ring_wake_blocked(struct vm_event_ring_domain *impl)
 {
     struct vcpu *v;
-    unsigned int avail_req = vm_event_ring_available(ved);
-    struct domain *d = ved->d;
+    unsigned int avail_req = vm_event_ring_available(impl);

-    if ( avail_req == 0 || ved->blocked == 0 )
+    if ( avail_req == 0 || impl->blocked == 0 )
         return;

     /* We remember which vcpu last woke up to avoid scanning always linearly
      * from zero and starving higher-numbered vcpus under high load */
-    if ( d->vcpu )
+    if ( impl->ved.d->vcpu )
     {
         int i, j, k;

-        for (i = ved->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
+        for ( i = impl->last_vcpu_wake_up + 1, j = 0;
+              j < impl->ved.d->max_vcpus; i++, j++)
         {
-            k = i % d->max_vcpus;
-            v = d->vcpu[k];
+            k = i % impl->ved.d->max_vcpus;
+            v = impl->ved.d->vcpu[k];
             if ( !v )
                 continue;

-            if ( !(ved->blocked) || avail_req == 0 )
+            if ( !(impl->blocked) || avail_req == 0 )
                 break;

-            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
+            if ( test_and_clear_bit(impl->ved.pause_flag, &v->pause_flags) )
             {
                 vcpu_unpause(v);
                 avail_req--;
-                ved->blocked--;
-                ved->last_vcpu_wake_up = k;
+                impl->blocked--;
+                impl->last_vcpu_wake_up = k;
             }
         }
     }
@@ -199,92 +199,90 @@ static void vm_event_wake_blocked(struct vm_event_domain *ved)
  * was unable to do so, it is queued on a wait queue. These are woken as
  * needed, and take precedence over the blocked vCPUs.
  */
-static void vm_event_wake_queued(struct vm_event_domain *ved)
+static void vm_event_ring_wake_queued(struct vm_event_ring_domain *impl)
 {
-    unsigned int avail_req = vm_event_ring_available(ved);
+    unsigned int avail_req = vm_event_ring_available(impl);

     if ( avail_req > 0 )
-        wake_up_nr(&ved->wq, avail_req);
+        wake_up_nr(&impl->wq, avail_req);
 }

 /*
- * vm_event_wake() will wakeup all vcpus waiting for the ring to
+ * vm_event_ring_wake() will wakeup all vcpus waiting for the ring to
  * become available. If we have queued vCPUs, they get top priority. We
  * are guaranteed that they will go through code paths that will eventually
- * call vm_event_wake() again, ensuring that any blocked vCPUs will get
+ * call vm_event_ring_wake() again, ensuring that any blocked vCPUs will get
  * unpaused once all the queued vCPUs have made it through.
  */
-void vm_event_wake(struct vm_event_domain *ved)
+static void vm_event_ring_wake(struct vm_event_ring_domain *impl)
 {
-    if (!list_empty(&ved->wq.list))
-        vm_event_wake_queued(ved);
+    if ( !list_empty(&impl->wq.list) )
+        vm_event_ring_wake_queued(impl);
     else
-        vm_event_wake_blocked(ved);
+        vm_event_ring_wake_blocked(impl);
 }

-static int vm_event_disable(struct domain *d, struct vm_event_domain **ved)
+static int vm_event_ring_disable(struct vm_event_domain **_ved)
 {
-    if ( vm_event_check(*ved) )
-    {
-        struct vcpu *v;
+    struct vcpu *v;
+    struct domain *d = (*_ved)->d;
+    struct vm_event_ring_domain *impl = to_ring(*_ved);

-        vm_event_ring_lock(*ved);
+    spin_lock(&(*_ved)->lock);

-        if ( !list_empty(&(*ved)->wq.list) )
-        {
-            vm_event_ring_unlock(*ved);
-            return -EBUSY;
-        }
+    if ( !list_empty(&impl->wq.list) )
+    {
+        spin_unlock(&(*_ved)->lock);
+        return -EBUSY;
+    }

-        /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d, (*ved)->xen_port);
+    /* Free domU's event channel and leave the other one unbound */
+    free_xen_event_channel(d, impl->xen_port);

-        /* Unblock all vCPUs */
-        for_each_vcpu ( d, v )
+    /* Unblock all vCPUs */
+    for_each_vcpu ( d, v )
+    {
+        if ( test_and_clear_bit((*_ved)->pause_flag, &v->pause_flags) )
         {
-            if ( test_and_clear_bit((*ved)->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                (*ved)->blocked--;
-            }
+            vcpu_unpause(v);
+            impl->blocked--;
         }
+    }

-        destroy_ring_for_helper(&(*ved)->ring_page,
-                                (*ved)->ring_pg_struct);
-
-        vm_event_cleanup_domain(d);
+    destroy_ring_for_helper(&impl->ring_page,
+                            impl->ring_pg_struct);

-        vm_event_ring_unlock(*ved);
-    }
+    vm_event_cleanup_domain(d);

-    xfree(*ved);
-    *ved = NULL;
+    spin_unlock(&(*_ved)->lock);

+    XFREE(*_ved);
     return 0;
 }

-static inline void vm_event_release_slot(struct vm_event_domain *ved)
+static inline void vm_event_ring_release_slot(struct vm_event_ring_domain *impl)
 {
     /* Update the accounting */
-    if ( current->domain == ved->d )
-        ved->target_producers--;
+    if ( current->domain == impl->ved.d )
+        impl->target_producers--;
     else
-        ved->foreign_producers--;
+        impl->foreign_producers--;

     /* Kick any waiters */
-    vm_event_wake(ved);
+    vm_event_ring_wake(impl);
 }

 /*
- * vm_event_mark_and_pause() tags vcpu and put it to sleep.
- * The vcpu will resume execution in vm_event_wake_blocked().
+ * vm_event_ring_mark_and_pause() tags vcpu and put it to sleep.
+ * The vcpu will resume execution in vm_event_ring_wake_blocked().
  */
-static void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain *ved)
+static void vm_event_ring_mark_and_pause(struct vcpu *v,
+                                         struct vm_event_ring_domain *impl)
 {
-    if ( !test_and_set_bit(ved->pause_flag, &v->pause_flags) )
+    if ( !test_and_set_bit(impl->ved.pause_flag, &v->pause_flags) )
     {
         vcpu_pause_nosync(v);
-        ved->blocked++;
+        impl->blocked++;
     }
 }

@@ -294,35 +292,32 @@ static void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain *ved)
  * overly full and its continued execution would cause stalling and excessive
  * waiting. The vCPU will be automatically unpaused when the ring clears.
  */
-void vm_event_put_request(struct vm_event_domain *ved,
-                          vm_event_request_t *req)
+static void vm_event_ring_put_request(struct vm_event_domain *ved,
+                                      vm_event_request_t *req)
 {
     vm_event_front_ring_t *front_ring;
     int free_req;
     unsigned int avail_req;
     RING_IDX req_prod;
     struct vcpu *curr = current;
-    struct domain *d = ved->d;
-
-    if( !vm_event_check(ved))
-        return;
+    struct vm_event_ring_domain *impl = to_ring(ved);

-    if ( curr->domain != d )
+    if ( curr->domain != ved->d )
     {
         req->flags |= VM_EVENT_FLAG_FOREIGN;
 #ifndef NDEBUG
         if ( !(req->flags & VM_EVENT_FLAG_VCPU_PAUSED) )
             gdprintk(XENLOG_G_WARNING, "d%dv%d was not paused.\n",
-                     d->domain_id, req->vcpu_id);
+                     ved->d->domain_id, req->vcpu_id);
 #endif
     }

     req->version = VM_EVENT_INTERFACE_VERSION;

-    vm_event_ring_lock(ved);
+    spin_lock(&impl->ved.lock);

     /* Due to the reservations, this step must succeed. */
-    front_ring = &ved->front_ring;
+    front_ring = &impl->front_ring;
     free_req = RING_FREE_REQUESTS(front_ring);
     ASSERT(free_req > 0);

@@ -336,35 +331,35 @@ void vm_event_put_request(struct vm_event_domain *ved,
     RING_PUSH_REQUESTS(front_ring);

     /* We've actually *used* our reservation, so release the slot. */
-    vm_event_release_slot(ved);
+    vm_event_ring_release_slot(impl);

     /* Give this vCPU a black eye if necessary, on the way out.
      * See the comments above wake_blocked() for more information
      * on how this mechanism works to avoid waiting. */
-    avail_req = vm_event_ring_available(ved);
-    if( curr->domain == d && avail_req < d->max_vcpus &&
+    avail_req = vm_event_ring_available(impl);
+    if( curr->domain == ved->d && avail_req < ved->d->max_vcpus &&
        !atomic_read(&curr->vm_event_pause_count) )
-        vm_event_mark_and_pause(curr, ved);
+        vm_event_ring_mark_and_pause(curr, impl);

-    vm_event_ring_unlock(ved);
+    spin_unlock(&impl->ved.lock);

-    notify_via_xen_event_channel(d, ved->xen_port);
+    notify_via_xen_event_channel(ved->d, impl->xen_port);
 }

-static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,
-                                 vm_event_response_t *rsp)
+static int vm_event_ring_get_response(struct vm_event_ring_domain *impl,
+                                      vm_event_response_t *rsp)
 {
     vm_event_front_ring_t *front_ring;
     RING_IDX rsp_cons;

-    vm_event_ring_lock(ved);
+    spin_lock(&impl->ved.lock);

-    front_ring = &ved->front_ring;
+    front_ring = &impl->front_ring;
     rsp_cons = front_ring->rsp_cons;

     if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
     {
-        vm_event_ring_unlock(ved);
+        spin_unlock(&impl->ved.lock);
         return 0;
     }

@@ -378,9 +373,9 @@ static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,

     /* Kick any waiters -- since we've just consumed an event,
      * there may be additional space available in the ring. */
-    vm_event_wake(ved);
+    vm_event_ring_wake(impl);

-    vm_event_ring_unlock(ved);
+    spin_unlock(&impl->ved.lock);

     return 1;
 }

@@ -393,10 +388,13 @@ static int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,
  * Note: responses are handled the same way regardless of which ring they
  * arrive on.
  */
-static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
+static int vm_event_ring_resume(struct vm_event_ring_domain *impl)
 {
     vm_event_response_t rsp;

+    if ( unlikely(!impl || !vm_event_check(&impl->ved)) )
+        return -ENODEV;
+
     /*
      * vm_event_resume() runs in either XEN_VM_EVENT_* domctls, or
      * EVTCHN_send context from the introspection consumer. Both contexts
@@ -405,13 +403,10 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
      * below, this covers the case where we would need to iterate over all
      * of them more succintly.
      */
-    ASSERT(d != current->domain);
-
-    if ( unlikely(!vm_event_check(ved)) )
-        return -ENODEV;
+    ASSERT(impl->ved.d != current->domain);

     /* Pull all responses off the ring. */
-    while ( vm_event_get_response(d, ved, &rsp) )
+    while ( vm_event_ring_get_response(impl, &rsp) )
     {
         struct vcpu *v;

@@ -422,10 +417,11 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
         }

         /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+        if ( (rsp.vcpu_id >= impl->ved.d->max_vcpus) ||
+             !impl->ved.d->vcpu[rsp.vcpu_id] )
             continue;

-        v = d->vcpu[rsp.vcpu_id];
+        v = impl->ved.d->vcpu[rsp.vcpu_id];

         /*
          * In some cases the response type needs extra handling, so here
@@ -437,7 +433,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
         {
 #ifdef CONFIG_HAS_MEM_PAGING
             if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
-                p2m_mem_paging_resume(d, &rsp);
+                p2m_mem_paging_resume(impl->ved.d, &rsp);
 #endif

             /*
@@ -457,7 +453,7 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
              * Check in arch-specific handler to avoid bitmask overhead when
              * not supported.
              */
-            vm_event_toggle_singlestep(d, v, &rsp);
+            vm_event_toggle_singlestep(impl->ved.d, v, &rsp);

             /* Check for altp2m switch */
             if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
@@ -477,66 +473,63 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
     return 0;
 }

-void vm_event_cancel_slot(struct vm_event_domain *ved)
+static void vm_event_ring_cancel_slot(struct vm_event_domain *ved)
 {
-    if( !vm_event_check(ved) )
-        return;
-
-    vm_event_ring_lock(ved);
-    vm_event_release_slot(ved);
-    vm_event_ring_unlock(ved);
+    spin_lock(&ved->lock);
+    vm_event_ring_release_slot(to_ring(ved));
+    spin_unlock(&ved->lock);
 }

-static int vm_event_grab_slot(struct vm_event_domain *ved, int foreign)
+static int vm_event_ring_grab_slot(struct vm_event_ring_domain *impl, int foreign)
 {
     unsigned int avail_req;

-    if ( !ved->ring_page )
+    if ( !impl->ring_page )
         return -EOPNOTSUPP;

-    vm_event_ring_lock(ved);
+    spin_lock(&impl->ved.lock);

-    avail_req = vm_event_ring_available(ved);
+    avail_req = vm_event_ring_available(impl);
     if ( avail_req == 0 )
     {
-        vm_event_ring_unlock(ved);
+        spin_unlock(&impl->ved.lock);
        return -EBUSY;
     }

     if ( !foreign )
-        ved->target_producers++;
+        impl->target_producers++;
     else
-        ved->foreign_producers++;
+        impl->foreign_producers++;

-    vm_event_ring_unlock(ved);
+    spin_unlock(&impl->ved.lock);

     return 0;
 }

 /* Simple try_grab wrapper for use in the wait_event() macro. */
-static int vm_event_wait_try_grab(struct vm_event_domain *ved, int *rc)
+static int vm_event_ring_wait_try_grab(struct vm_event_ring_domain *impl, int *rc)
 {
-    *rc = vm_event_grab_slot(ved, 0);
+    *rc = vm_event_ring_grab_slot(impl, 0);
     return *rc;
 }

-/* Call vm_event_grab_slot() until the ring doesn't exist, or is available. */
-static int vm_event_wait_slot(struct vm_event_domain *ved)
+/* Call vm_event_ring_grab_slot() until the ring doesn't exist, or is available. */
+static int vm_event_ring_wait_slot(struct vm_event_ring_domain *impl)
 {
     int rc = -EBUSY;
-    wait_event(ved->wq, vm_event_wait_try_grab(ved, &rc) != -EBUSY);
+    wait_event(impl->wq, vm_event_ring_wait_try_grab(impl, &rc) != -EBUSY);
     return rc;
 }

-bool vm_event_check(struct vm_event_domain *ved)
+static bool vm_event_ring_check(struct vm_event_domain *ved)
 {
-    return (ved && ved->ring_page);
+    return ( to_ring(ved)->ring_page != NULL );
 }

 /*
  * Determines whether or not the current vCPU belongs to the target domain,
  * and calls the appropriate wait function. If it is a guest vCPU, then we
- * use vm_event_wait_slot() to reserve a slot. As long as there is a ring,
+ * use vm_event_ring_wait_slot() to reserve a slot. As long as there is a ring,
  * this function will always return 0 for a guest. For a non-guest, we check
  * for space and return -EBUSY if the ring is not available.
  *
@@ -545,36 +538,33 @@ bool vm_event_check(struct vm_event_domain *ved)
  * 0: a spot has been reserved
  *
  */
-int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep)
+static int vm_event_ring_claim_slot(struct vm_event_domain *ved, bool allow_sleep)
 {
-    if ( !vm_event_check(ved) )
-        return -EOPNOTSUPP;
-
     if ( (current->domain == ved->d) && allow_sleep )
-        return vm_event_wait_slot(ved);
+        return vm_event_ring_wait_slot(to_ring(ved));
     else
-        return vm_event_grab_slot(ved, (current->domain != ved->d));
+        return vm_event_ring_grab_slot(to_ring(ved), (current->domain != ved->d));
 }

 #ifdef CONFIG_HAS_MEM_PAGING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
-    vm_event_resume(v->domain, v->domain->vm_event_paging);
+    vm_event_ring_resume(to_ring(v->domain->vm_event_paging));
 }
 #endif

 /* Registered with Xen-bound event channel for incoming notifications. */
 static void monitor_notification(struct vcpu *v, unsigned int port)
 {
-    vm_event_resume(v->domain, v->domain->vm_event_monitor);
+    vm_event_ring_resume(to_ring(v->domain->vm_event_monitor));
 }

 #ifdef CONFIG_HAS_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
-    vm_event_resume(v->domain, v->domain->vm_event_share);
+    vm_event_ring_resume(to_ring(v->domain->vm_event_share));
 }
 #endif

@@ -583,32 +573,32 @@ void vm_event_cleanup(struct domain *d)
 {
 #ifdef CONFIG_HAS_MEM_PAGING
     if ( vm_event_check(d->vm_event_paging) )
-    {
-        /* Destroying the wait queue head means waking up all
-         * queued vcpus. This will drain the list, allowing
-         * the disable routine to complete. It will also drop
-         * all domain refs the wait-queued vcpus are holding.
-         * Finally, because this code path involves previously
-         * pausing the domain (domain_kill), unpausing the
-         * vcpus causes no harm. */
-        destroy_waitqueue_head(&d->vm_event_paging->wq);
-        (void)vm_event_disable(d, &d->vm_event_paging);
-    }
+        d->vm_event_paging->ops->cleanup(&d->vm_event_paging);
 #endif
+
     if ( vm_event_check(d->vm_event_monitor) )
-    {
-        destroy_waitqueue_head(&d->vm_event_monitor->wq);
-        (void)vm_event_disable(d, &d->vm_event_monitor);
-    }
+        d->vm_event_monitor->ops->cleanup(&d->vm_event_monitor);
+
 #ifdef CONFIG_HAS_MEM_SHARING
     if ( vm_event_check(d->vm_event_share) )
-    {
-        destroy_waitqueue_head(&d->vm_event_share->wq);
-        (void)vm_event_disable(d, &d->vm_event_share);
-    }
+        d->vm_event_share->ops->cleanup(&d->vm_event_share);
 #endif
 }

+static void vm_event_ring_cleanup(struct vm_event_domain **_ved)
+{
+    struct vm_event_ring_domain *impl = to_ring(*_ved);
+    /* Destroying the wait queue head means waking up all
+     * queued vcpus. This will drain the list, allowing
+     * the disable routine to complete. It will also drop
+     * all domain refs the wait-queued vcpus are holding.
+     * Finally, because this code path involves previously
+     * pausing the domain (domain_kill), unpausing the
+     * vcpus causes no harm. */
+    destroy_waitqueue_head(&impl->wq);
+    (void)vm_event_ring_disable(_ved);
+}
+
 int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
@@ -682,23 +672,22 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
                 break;

             /* domain_pause() not required here, see XSA-99 */
-            rc = vm_event_enable(d, vec, &d->vm_event_paging, _VPF_mem_paging,
+            rc = vm_event_ring_enable(d, vec, &d->vm_event_paging, _VPF_mem_paging,
                                  HVM_PARAM_PAGING_RING_PFN,
                                  mem_paging_notification);
         }
         break;

         case XEN_VM_EVENT_DISABLE:
-            if ( vm_event_check(d->vm_event_paging) )
-            {
-                domain_pause(d);
-                rc = vm_event_disable(d, &d->vm_event_paging);
-                domain_unpause(d);
-            }
+            if ( !vm_event_check(d->vm_event_paging) )
+                break;
+            domain_pause(d);
+            rc = vm_event_ring_disable(&d->vm_event_paging);
+            domain_unpause(d);
             break;

         case XEN_VM_EVENT_RESUME:
-            rc = vm_event_resume(d, d->vm_event_paging);
+            rc = vm_event_ring_resume(to_ring(d->vm_event_paging));
             break;

         default:
@@ -720,23 +709,22 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
             rc = arch_monitor_init_domain(d);
             if ( rc )
                 break;
-            rc = vm_event_enable(d, vec, &d->vm_event_monitor, _VPF_mem_access,
+            rc = vm_event_ring_enable(d, vec, &d->vm_event_monitor, _VPF_mem_access,
                                  HVM_PARAM_MONITOR_RING_PFN,
                                  monitor_notification);
             break;

         case XEN_VM_EVENT_DISABLE:
-            if ( vm_event_check(d->vm_event_monitor) )
-            {
-                domain_pause(d);
-                rc = vm_event_disable(d, &d->vm_event_monitor);
-                arch_monitor_cleanup_domain(d);
-                domain_unpause(d);
-            }
+            if ( !vm_event_check(d->vm_event_monitor) )
+                break;
+            domain_pause(d);
+            rc = vm_event_ring_disable(&d->vm_event_monitor);
+            arch_monitor_cleanup_domain(d);
+            domain_unpause(d);
             break;

         case XEN_VM_EVENT_RESUME:
-            rc = vm_event_resume(d, d->vm_event_monitor);
+            rc = vm_event_ring_resume(to_ring(d->vm_event_monitor));
             break;

         default:
@@ -765,22 +753,21 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
             break;

             /* domain_pause() not required here, see XSA-99 */
-            rc = vm_event_enable(d, vec, &d->vm_event_share, _VPF_mem_sharing,
+            rc = vm_event_ring_enable(d, vec, &d->vm_event_share, _VPF_mem_sharing,
                                  HVM_PARAM_SHARING_RING_PFN,
                                  mem_sharing_notification);
             break;

         case XEN_VM_EVENT_DISABLE:
-            if ( vm_event_check(d->vm_event_share) )
-            {
-                domain_pause(d);
-                rc = vm_event_disable(d, &d->vm_event_share);
-                domain_unpause(d);
-            }
+            if ( !vm_event_check(d->vm_event_share) )
+                break;
+            domain_pause(d);
+            rc = vm_event_ring_disable(&d->vm_event_share);
+            domain_unpause(d);
             break;

         case XEN_VM_EVENT_RESUME:
-            rc = vm_event_resume(d, d->vm_event_share);
+            rc = vm_event_ring_resume(to_ring(d->vm_event_share));
             break;

         default:
@@ -832,6 +819,14 @@ void vm_event_vcpu_unpause(struct vcpu *v)
     vcpu_unpause(v);
 }

+static const struct vm_event_ops vm_event_ring_ops = {
+    .check = vm_event_ring_check,
+    .cleanup = vm_event_ring_cleanup,
+    .claim_slot = vm_event_ring_claim_slot,
+    .cancel_slot = vm_event_ring_cancel_slot,
+    .put_request = vm_event_ring_put_request
+};
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index a5c82d6..15c15e6 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -26,11 +26,38 @@
 #include
 #include

+struct vm_event_ops
+{
+    bool (*check)(struct vm_event_domain *ved);
+    void (*cleanup)(struct vm_event_domain **_ved);
+    int (*claim_slot)(struct vm_event_domain *ved, bool allow_sleep);
+    void (*cancel_slot)(struct vm_event_domain *ved);
+    void (*put_request)(struct vm_event_domain *ved, vm_event_request_t *req);
+};
+
+struct vm_event_domain
+{
+    /* Domain reference */
+    struct domain *d;
+
+    /* vm_event_ops */
+    const struct vm_event_ops *ops;
+
+    /* vm_event domain lock */
+    spinlock_t lock;
+
+    /* vm_event bit for vcpu->pause_flags */
+    int pause_flag;
+};
+
 /* Clean up on domain destruction */
 void vm_event_cleanup(struct domain *d);

 /* Returns whether the VM event domain has been set up */
-bool vm_event_check(struct vm_event_domain *ved);
+static inline bool vm_event_check(struct vm_event_domain *ved)
+{
+    return (ved) && ved->ops->check(ved);
+}

 /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
  * available space and the caller is a foreign domain. If the guest itself
@@ -45,7 +72,14 @@ bool vm_event_check(struct vm_event_domain *ved);
  * cancel_slot(), both of which are guaranteed to
  * succeed. */
-int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep);
+static inline int __vm_event_claim_slot(struct vm_event_domain *ved, bool allow_sleep)
+{
+    if ( !vm_event_check(ved) )
+        return -EOPNOTSUPP;
+
+    return ved->ops->claim_slot(ved, allow_sleep);
+}
+
 static inline int vm_event_claim_slot(struct vm_event_domain *ved)
 {
     return __vm_event_claim_slot(ved, true);
@@ -56,10 +90,22 @@ static inline int vm_event_claim_slot_nosleep(struct vm_event_domain *ved)
     return __vm_event_claim_slot(ved, false);
 }

-void vm_event_cancel_slot(struct vm_event_domain *ved);
+static inline void vm_event_cancel_slot(struct vm_event_domain *ved)
+{
+    if ( !vm_event_check(ved) )
+        return;

-void vm_event_put_request(struct vm_event_domain *ved,
-                          vm_event_request_t *req);
+    ved->ops->cancel_slot(ved);
+}
+
+static inline void vm_event_put_request(struct vm_event_domain *ved,
+                                        vm_event_request_t *req)
+{
+    if ( !vm_event_check(ved) )
+        return;
+
+    ved->ops->put_request(ved, req);
+}

 int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
--
2.7.4
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Wed May 8 23:10:08 2024

From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Thu, 30 May 2019 17:18:22 +0300
Message-Id: <3ec19ed5425a62ecbc524e44c4bba86d5fe41762.1559224640.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH 8/9] vm_event: Add vm_event_ng interface
Cc: Petre Pircalabu, Stefano Stabellini, Razvan Cojocaru, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich,
    Roger Pau Monné

In high throughput introspection scenarios where lots of monitor vm_events
are generated, the ring buffer can fill up before the monitor application
gets a chance to handle all the requests, thus blocking other vcpus which
will have to wait for a slot to become available.
This patch adds support for a different mechanism to handle synchronous
vm_event requests / responses. As each synchronous request pauses the vcpu
until the corresponding response is handled, it can be stored in a slotted
memory buffer (one per vcpu) shared between the hypervisor and the
controlling domain.

Signed-off-by: Petre Pircalabu
---
 tools/libxc/include/xenctrl.h |   6 +
 tools/libxc/xc_monitor.c      |  15 ++
 tools/libxc/xc_private.h      |   8 +
 tools/libxc/xc_vm_event.c     |  53 +++++
 xen/arch/x86/mm.c             |   5 +
 xen/common/Makefile           |   1 +
 xen/common/domctl.c           |   7 +
 xen/common/vm_event.c         |  94 ++++-----
 xen/common/vm_event_ng.c      | 449 ++++++++++++++++++++++++++++++++++++++++
 xen/include/public/domctl.h   |  20 ++
 xen/include/public/memory.h   |   2 +
 xen/include/public/vm_event.h |  16 ++
 xen/include/xen/vm_event.h    |  10 +
 13 files changed, 642 insertions(+), 44 deletions(-)
 create mode 100644 xen/common/vm_event_ng.c

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 943b933..c36b623 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1993,6 +1993,7 @@ int xc_get_mem_access(xc_interface *xch, uint32_t domain_id,
 * Returns the VM_EVENT_INTERFACE version.
 */
 int xc_vm_event_get_version(xc_interface *xch);
+int xc_vm_event_ng_get_version(xc_interface *xch);

 /***
 * Monitor control operations.
@@ -2007,6 +2008,11 @@ int xc_vm_event_get_version(xc_interface *xch);
 void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port);
 int xc_monitor_disable(xc_interface *xch, uint32_t domain_id);
 int xc_monitor_resume(xc_interface *xch, uint32_t domain_id);
+
+/* Monitor NG interface */
+int xc_monitor_ng_create(xc_interface *xch, uint32_t domain_id);
+int xc_monitor_ng_destroy(xc_interface *xch, uint32_t domain_id);
+int xc_monitor_ng_set_state(xc_interface *xch, uint32_t domain_id, bool enabled);
 /*
 * Get a bitmap of supported monitor events in the form
 * (1 << XEN_DOMCTL_MONITOR_EVENT_*).
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
index 718fe8b..4c7ef2b 100644
--- a/tools/libxc/xc_monitor.c
+++ b/tools/libxc/xc_monitor.c
@@ -265,6 +265,21 @@ int xc_monitor_emul_unimplemented(xc_interface *xch, uint32_t domain_id,
     return do_domctl(xch, &domctl);
 }

+int xc_monitor_ng_create(xc_interface *xch, uint32_t domain_id)
+{
+    return xc_vm_event_ng_create(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR);
+}
+
+int xc_monitor_ng_destroy(xc_interface *xch, uint32_t domain_id)
+{
+    return xc_vm_event_ng_destroy(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR);
+}
+
+int xc_monitor_ng_set_state(xc_interface *xch, uint32_t domain_id, bool enabled)
+{
+    return xc_vm_event_ng_set_state(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR, enabled);
+}
+
 /*
 * Local variables:
 * mode: C
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 482451c..1904a1e 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -420,6 +420,14 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
 void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
                          uint32_t *port);

+/**
+ * VM_EVENT NG operations. Internal use only.
+ */
+int xc_vm_event_ng_create(xc_interface *xch, uint32_t domain_id, int type);
+int xc_vm_event_ng_destroy(xc_interface *xch, uint32_t domain_id, int type);
+int xc_vm_event_ng_set_state(xc_interface *xch, uint32_t domain_id, int type, bool enabled);
+
+
 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);

 #endif /* __XC_PRIVATE_H__ */
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index 3b1018b..07243a6 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -154,6 +154,59 @@ int xc_vm_event_get_version(xc_interface *xch)
     return rc;
 }

+int xc_vm_event_ng_get_version(xc_interface *xch)
+{
+    DECLARE_DOMCTL;
+    int rc;
+
+    domctl.cmd = XEN_DOMCTL_vm_event_ng_op;
+    domctl.domain = DOMID_INVALID;
+    domctl.u.vm_event_op.op = XEN_VM_EVENT_NG_GET_VERSION;
+    domctl.u.vm_event_op.type = XEN_VM_EVENT_TYPE_MONITOR;
+
+    rc = do_domctl(xch, &domctl);
+    if ( !rc )
+        rc = domctl.u.vm_event_ng_op.u.version;
+    return rc;
+}
+
+int xc_vm_event_ng_create(xc_interface *xch, uint32_t domain_id, int type)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_vm_event_ng_op;
+    domctl.domain = domain_id;
+    domctl.u.vm_event_ng_op.op = XEN_VM_EVENT_NG_CREATE;
+    domctl.u.vm_event_ng_op.type = type;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_vm_event_ng_destroy(xc_interface *xch, uint32_t domain_id, int type)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_vm_event_ng_op;
+    domctl.domain = domain_id;
+    domctl.u.vm_event_ng_op.op = XEN_VM_EVENT_NG_DESTROY;
+    domctl.u.vm_event_ng_op.type = type;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_vm_event_ng_set_state(xc_interface *xch, uint32_t domain_id, int type, bool enabled)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_vm_event_ng_op;
+    domctl.domain = domain_id;
+    domctl.u.vm_event_ng_op.op = XEN_VM_EVENT_NG_SET_STATE;
+    domctl.u.vm_event_ng_op.type = type;
+    domctl.u.vm_event_ng_op.u.enabled = enabled;
+
+    return do_domctl(xch, &domctl);
+}
+
 /*
 * Local variables:
 * mode: C
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2f620d9..030b5bd 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -119,6 +119,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -4584,6 +4585,10 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
     }
 #endif

+    case XENMEM_resource_vm_event:
+        rc = vm_event_ng_get_frames(d, id, frame, nr_frames, mfn_list);
+        break;
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 33d03b8..8cb33e2 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -59,6 +59,7 @@ obj-y += trace.o
 obj-y += version.o
 obj-y += virtual_region.o
 obj-y += vm_event.o
+obj-y += vm_event_ng.o
 obj-y += vmap.o
 obj-y += vsprintf.o
 obj-y += wait.o
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index bade9a6..23f6e56 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -393,6 +393,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     {
     case XEN_DOMCTL_test_assign_device:
     case XEN_DOMCTL_vm_event_op:
+    case XEN_DOMCTL_vm_event_ng_op:
         if ( op->domain == DOMID_INVALID )
         {
     case XEN_DOMCTL_createdomain:
@@ -1023,6 +1024,12 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         copyback = 1;
         break;

+    case XEN_DOMCTL_vm_event_ng_op:
+        ret = vm_event_ng_domctl(d, &op->u.vm_event_ng_op,
+                                 guest_handle_cast(u_domctl, void));
+        copyback = 1;
+        break;
+
 #ifdef CONFIG_MEM_ACCESS
     case XEN_DOMCTL_set_access_required:
         if ( unlikely(current->domain == d) ) /* no domain_pause() */
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 1d85f3e..e94fe3c 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -380,6 +380,51 @@ static int vm_event_ring_get_response(struct vm_event_ring_domain *impl,
     return 1;
 }

+void vm_event_handle_response(struct domain *d, struct vcpu *v,
+                              vm_event_response_t *rsp)
+{
+    /* Check flags which apply only when the vCPU is paused */
+    if ( atomic_read(&v->vm_event_pause_count) )
+    {
+#ifdef CONFIG_HAS_MEM_PAGING
+        if ( rsp->reason == VM_EVENT_REASON_MEM_PAGING )
+            p2m_mem_paging_resume(d, rsp);
+#endif
+
+        /*
+         * Check emulation flags in the arch-specific handler only, as it
+         * has to set arch-specific flags when supported, and to avoid
+         * bitmask overhead when it isn't supported.
+         */
+        vm_event_emulate_check(v, rsp);
+
+        /*
+         * Check in arch-specific handler to avoid bitmask overhead when
+         * not supported.
+         */
+        vm_event_register_write_resume(v, rsp);
+
+        /*
+         * Check in arch-specific handler to avoid bitmask overhead when
+         * not supported.
+         */
+        vm_event_toggle_singlestep(d, v, rsp);
+
+        /* Check for altp2m switch */
+        if ( rsp->flags & VM_EVENT_FLAG_ALTERNATE_P2M )
+            p2m_altp2m_check(v, rsp->altp2m_idx);
+
+        if ( rsp->flags & VM_EVENT_FLAG_SET_REGISTERS )
+            vm_event_set_registers(v, rsp);
+
+        if ( rsp->flags & VM_EVENT_FLAG_GET_NEXT_INTERRUPT )
+            vm_event_monitor_next_interrupt(v);
+
+        if ( rsp->flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
+    }
+}
+
 /*
 * Pull all responses from the given ring and unpause the corresponding vCPU
 * if required. Based on the response type, here we can also call custom
@@ -427,47 +472,7 @@ static int vm_event_ring_resume(struct vm_event_ring_domain *impl)
          * In some cases the response type needs extra handling, so here
          * we call the appropriate handlers.
          */
-
-        /* Check flags which apply only when the vCPU is paused */
-        if ( atomic_read(&v->vm_event_pause_count) )
-        {
-#ifdef CONFIG_HAS_MEM_PAGING
-            if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
-                p2m_mem_paging_resume(impl->ved.d, &rsp);
-#endif
-
-            /*
-             * Check emulation flags in the arch-specific handler only, as it
-             * has to set arch-specific flags when supported, and to avoid
-             * bitmask overhead when it isn't supported.
-             */
-            vm_event_emulate_check(v, &rsp);
-
-            /*
-             * Check in arch-specific handler to avoid bitmask overhead when
-             * not supported.
-             */
-            vm_event_register_write_resume(v, &rsp);
-
-            /*
-             * Check in arch-specific handler to avoid bitmask overhead when
-             * not supported.
-             */
-            vm_event_toggle_singlestep(impl->ved.d, v, &rsp);
-
-            /* Check for altp2m switch */
-            if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
-                p2m_altp2m_check(v, rsp.altp2m_idx);
-
-            if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
-                vm_event_set_registers(v, &rsp);
-
-            if ( rsp.flags & VM_EVENT_FLAG_GET_NEXT_INTERRUPT )
-                vm_event_monitor_next_interrupt(v);
-
-            if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-                vm_event_vcpu_unpause(v);
-        }
+        vm_event_handle_response(impl->ved.d, v, &rsp);
     }

     return 0;
@@ -709,9 +714,10 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
         rc = arch_monitor_init_domain(d);
         if ( rc )
             break;
-        rc = vm_event_ring_enable(d, vec, &d->vm_event_monitor, _VPF_mem_access,
-                                  HVM_PARAM_MONITOR_RING_PFN,
-                                  monitor_notification);
+        rc = vm_event_ring_enable(d, vec, &d->vm_event_monitor,
+                                  _VPF_mem_access,
+                                  HVM_PARAM_MONITOR_RING_PFN,
+                                  monitor_notification);
         break;

     case XEN_VM_EVENT_DISABLE:
diff --git a/xen/common/vm_event_ng.c b/xen/common/vm_event_ng.c
new file mode 100644
index 0000000..17ae33c
--- /dev/null
+++ b/xen/common/vm_event_ng.c
@@ -0,0 +1,449 @@
+/******************************************************************************
+ * vm_event_ng.c
+ *
+ * VM event support (new generation).
+ *
+ * Copyright (c) 2019, Bitdefender S.R.L.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define to_channels(_ved) container_of((_ved), \
+                                       struct vm_event_channels_domain, ved)
+
+#define VM_EVENT_CHANNELS_ENABLED 1
+
+struct vm_event_channels_domain
+{
+    /* VM event domain */
+    struct vm_event_domain ved;
+    /* shared channels buffer */
+    struct vm_event_slot *slots;
+    /* the buffer size (number of frames) */
+    unsigned int nr_frames;
+    /* state */
+    bool enabled;
+    /* buffer's mfn list */
+    mfn_t mfn[0];
+};
+
+static const struct vm_event_ops vm_event_channels_ops;
+
+static int vm_event_channels_alloc_buffer(struct vm_event_channels_domain *impl)
+{
+    int i, rc = -ENOMEM;
+
+    for ( i = 0; i < impl->nr_frames; i++ )
+    {
+        struct page_info *page = alloc_domheap_page(impl->ved.d, 0);
+        if ( !page )
+            goto err;
+
+        if ( !get_page_and_type(page, impl->ved.d, PGT_writable_page) )
+        {
+            rc = -ENODATA;
+            goto err;
+        }
+
+        impl->mfn[i] = page_to_mfn(page);
+    }
+
+    impl->slots = (struct vm_event_slot *)vmap(impl->mfn, impl->nr_frames);
+    if ( !impl->slots )
+        goto err;
+
+    for ( i = 0; i < impl->nr_frames; i++ )
+        clear_page((void*)impl->slots + i * PAGE_SIZE);
+
+    return 0;
+
+err:
+    while ( --i >= 0 )
+    {
+        struct page_info *page = mfn_to_page(impl->mfn[i]);
+
+        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
+            put_page(page);
+        put_page_and_type(page);
+    }
+
+    return rc;
+}
+
+static void vm_event_channels_free_buffer(struct vm_event_channels_domain *impl)
+{
+    int i;
+
+    ASSERT(impl);
+
+    if ( !impl->slots )
+        return;
+
+    vunmap(impl->slots);
+
+    for ( i = 0; i < impl->nr_frames; i++ )
+    {
+        struct page_info *page = mfn_to_page(impl->mfn[i]);
+
+        ASSERT(page);
+        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
+            put_page(page);
+        put_page_and_type(page);
+    }
+}
+
+static int vm_event_channels_create(
+    struct domain *d,
+    struct xen_domctl_vm_event_ng_op *vec,
+    struct vm_event_domain **_ved,
+    int pause_flag,
+    xen_event_channel_notification_t notification_fn)
+{
+    int rc, i;
+    unsigned int nr_frames = PFN_UP(d->max_vcpus * sizeof(struct vm_event_slot));
+    struct vm_event_channels_domain *impl;
+
+    if ( *_ved )
+        return -EBUSY;
+
+    impl = _xzalloc(sizeof(struct vm_event_channels_domain) +
+                    nr_frames * sizeof(mfn_t),
+                    __alignof__(struct vm_event_channels_domain));
+    if ( unlikely(!impl) )
+        return -ENOMEM;
+
+    spin_lock_init(&impl->ved.lock);
+    spin_lock(&impl->ved.lock);
+
+    impl->nr_frames = nr_frames;
+    impl->ved.d = d;
+    impl->ved.ops = &vm_event_channels_ops;
+
+    rc = vm_event_init_domain(d);
+    if ( rc < 0 )
+        goto err;
+
+    rc = vm_event_channels_alloc_buffer(impl);
+    if ( rc )
+        goto err;
+
+    for ( i = 0; i < d->max_vcpus; i++ )
+    {
+        rc = alloc_unbound_xen_event_channel(d, i, current->domain->domain_id,
+                                             notification_fn);
+        if ( rc < 0 )
+            goto err;
+
+        impl->slots[i].port = rc;
+        impl->slots[i].state = STATE_VM_EVENT_SLOT_IDLE;
+    }
+
+    impl->enabled = false;
+
+    spin_unlock(&impl->ved.lock);
+    *_ved = &impl->ved;
+    return 0;
+
+err:
+    spin_unlock(&impl->ved.lock);
+    XFREE(impl);
+    return rc;
+}
+
+static int vm_event_channels_destroy(struct vm_event_domain **_ved)
+{
+    struct vcpu *v;
+    struct vm_event_channels_domain *impl = to_channels(*_ved);
+    int i;
+
+    spin_lock(&(*_ved)->lock);
+
+    for_each_vcpu( (*_ved)->d, v )
+    {
+        if ( atomic_read(&v->vm_event_pause_count) )
+            vm_event_vcpu_unpause(v);
+    }
+
+    for ( i = 0; i < (*_ved)->d->max_vcpus; i++ )
+        evtchn_close((*_ved)->d, impl->slots[i].port, 0);
+
+    vm_event_channels_free_buffer(impl);
+
+    spin_unlock(&(*_ved)->lock);
+    XFREE(*_ved);
+
+    return 0;
+}
+
+static bool vm_event_channels_check(struct vm_event_domain *ved)
+{
+    return to_channels(ved)->slots != NULL;
+}
+
+static void vm_event_channels_cleanup(struct vm_event_domain **_ved)
+{
+    vm_event_channels_destroy(_ved);
+}
+
+static int vm_event_channels_claim_slot(struct vm_event_domain *ved,
+                                        bool allow_sleep)
+{
+    return 0;
+}
+
+static void vm_event_channels_cancel_slot(struct vm_event_domain *ved)
+{
+}
+
+static void vm_event_channels_put_request(struct vm_event_domain *ved,
+                                          vm_event_request_t *req)
+{
+    struct vm_event_channels_domain *impl = to_channels(ved);
+    struct vm_event_slot *slot;
+
+    /* exit if the vm_event_domain was not specifically enabled */
+    if ( !impl->enabled )
+        return;
+
+    ASSERT( req->vcpu_id >= 0 && req->vcpu_id < ved->d->max_vcpus );
+
+    slot = &impl->slots[req->vcpu_id];
+
+    if ( current->domain != ved->d )
+    {
+        req->flags |= VM_EVENT_FLAG_FOREIGN;
+#ifndef NDEBUG
+        if ( !(req->flags & VM_EVENT_FLAG_VCPU_PAUSED) )
+            gdprintk(XENLOG_G_WARNING, "d%dv%d was not paused.\n",
+                     ved->d->domain_id, req->vcpu_id);
+#endif
+    }
+
+    req->version = VM_EVENT_INTERFACE_VERSION;
+
+    spin_lock(&impl->ved.lock);
+    if ( slot->state != STATE_VM_EVENT_SLOT_IDLE )
+    {
+        gdprintk(XENLOG_G_WARNING, "The VM event slot for d%dv%d is not IDLE.\n",
+                 impl->ved.d->domain_id, req->vcpu_id);
+        spin_unlock(&impl->ved.lock);
+        return;
+    }
+
+    slot->u.req = *req;
+    slot->state = STATE_VM_EVENT_SLOT_SUBMIT;
+    spin_unlock(&impl->ved.lock);
+    notify_via_xen_event_channel(impl->ved.d, slot->port);
+}
+
+static int vm_event_channels_get_response(struct vm_event_channels_domain *impl,
+                                          struct vcpu *v, vm_event_response_t *rsp)
+{
+    struct vm_event_slot *slot = &impl->slots[v->vcpu_id];
+
+    ASSERT( slot != NULL );
+    spin_lock(&impl->ved.lock);
+
+    if ( slot->state != STATE_VM_EVENT_SLOT_FINISH )
+    {
+        gdprintk(XENLOG_G_WARNING, "The VM event slot state for d%dv%d is invalid.\n",
+                 impl->ved.d->domain_id, v->vcpu_id);
+        spin_unlock(&impl->ved.lock);
+        return -1;
+    }
+
+    *rsp = slot->u.rsp;
+    slot->state = STATE_VM_EVENT_SLOT_IDLE;
+
+    spin_unlock(&impl->ved.lock);
+    return 0;
+}
+
+static int vm_event_channels_resume(struct vm_event_channels_domain *impl,
+                                    struct vcpu *v)
+{
+    vm_event_response_t rsp;
+
+    if ( unlikely(!impl || !vm_event_check(&impl->ved)) )
+        return -ENODEV;
+
+    ASSERT(impl->ved.d != current->domain);
+
+    if ( vm_event_channels_get_response(impl, v, &rsp) ||
+         rsp.version != VM_EVENT_INTERFACE_VERSION ||
+         rsp.vcpu_id != v->vcpu_id )
+        return -1;
+
+    vm_event_handle_response(impl->ved.d, v, &rsp);
+
+    return 0;
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void monitor_notification(struct vcpu *v, unsigned int port)
+{
+    vm_event_channels_resume(to_channels(v->domain->vm_event_monitor), v);
+}
+
+int vm_event_ng_domctl(struct domain *d, struct xen_domctl_vm_event_ng_op *vec,
+                       XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+    int rc;
+
+    if ( vec->op == XEN_VM_EVENT_NG_GET_VERSION )
+    {
+        vec->u.version = VM_EVENT_INTERFACE_VERSION;
+        return 0;
+    }
+
+    if ( unlikely(d == NULL) )
+        return -ESRCH;
+
+    rc = xsm_vm_event_control(XSM_PRIV, d, vec->type, vec->op);
+    if ( rc )
+        return rc;
+
+    if ( unlikely(d == current->domain) ) /* no domain_pause() */
+    {
+        gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
+                 d->domain_id);
+        return 0;
+    }
+
+    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
+    {
+        gdprintk(XENLOG_INFO,
+                 "Memory event op on a domain (%u) with no vcpus\n",
+                 d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( vec->type )
+    {
+    case XEN_VM_EVENT_TYPE_MONITOR:
+    {
+        rc = -EINVAL;
+
+        switch ( vec->op )
+        {
+        case XEN_VM_EVENT_NG_CREATE:
+            /* domain_pause() not required here, see XSA-99 */
+            rc = arch_monitor_init_domain(d);
+            if ( rc )
+                break;
+            rc = vm_event_channels_create(d, vec, &d->vm_event_monitor,
+                                          _VPF_mem_access, monitor_notification);
+            break;
+
+        case XEN_VM_EVENT_NG_DESTROY:
+            if ( !vm_event_check(d->vm_event_monitor) )
+                break;
+            domain_pause(d);
+            rc = vm_event_channels_destroy(&d->vm_event_monitor);
+            arch_monitor_cleanup_domain(d);
+            domain_unpause(d);
+            break;
+
+        case XEN_VM_EVENT_NG_SET_STATE:
+            if ( !vm_event_check(d->vm_event_monitor) )
+                break;
+            domain_pause(d);
+            to_channels(d->vm_event_monitor)->enabled = !!vec->u.enabled;
+            domain_unpause(d);
+            rc = 0;
+            break;

+        default:
+            rc = -ENOSYS;
+        }
+        break;
+    }
+
+#ifdef CONFIG_HAS_MEM_PAGING
+    case XEN_VM_EVENT_TYPE_PAGING:
+#endif
+
+#ifdef CONFIG_HAS_MEM_SHARING
+    case XEN_VM_EVENT_TYPE_SHARING:
+#endif
+
+    default:
+        rc = -ENOSYS;
+    }
+
+    return rc;
+}
+
+int vm_event_ng_get_frames(struct domain *d, unsigned int id,
+                           unsigned long frame, unsigned int nr_frames,
+                           xen_pfn_t mfn_list[])
+{
+    struct vm_event_domain *ved;
+    int i;
+
+    switch ( id )
+    {
+    case XEN_VM_EVENT_TYPE_MONITOR:
+        ved = d->vm_event_monitor;
+        break;
+
+    default:
+        return -ENOSYS;
+    }
+
+    if ( !vm_event_check(ved) )
+        return -EINVAL;
+
+    if ( frame != 0 || nr_frames != to_channels(ved)->nr_frames )
+        return -EINVAL;
+
+    spin_lock(&ved->lock);
+
+    for ( i = 0; i < to_channels(ved)->nr_frames; i++ )
+        mfn_list[i] = mfn_x(to_channels(ved)->mfn[i]);
+
+    spin_unlock(&ved->lock);
+    return 0;
+}
+
+static const struct vm_event_ops vm_event_channels_ops = {
+    .check = vm_event_channels_check,
+    .cleanup = vm_event_channels_cleanup,
+    .claim_slot = vm_event_channels_claim_slot,
+    .cancel_slot = vm_event_channels_cancel_slot,
+    .put_request = vm_event_channels_put_request
+};
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 19281fa..ff8b680 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -792,6 +792,24 @@ struct xen_domctl_vm_event_op {
 };

 /*
+ * XEN_DOMCTL_vm_event_ng_op.
+ * Next Generation vm_event operations.
+ */
+#define XEN_VM_EVENT_NG_CREATE      0
+#define XEN_VM_EVENT_NG_DESTROY     1
+#define XEN_VM_EVENT_NG_SET_STATE   2
+#define XEN_VM_EVENT_NG_GET_VERSION 3
+
+struct xen_domctl_vm_event_ng_op {
+    uint32_t op;          /* XEN_VM_EVENT_NG_* */
+    uint32_t type;        /* XEN_VM_EVENT_TYPE_* */
+    union {
+        uint32_t version; /* OUT: version number */
+        uint8_t enabled;  /* IN: state */
+    } u;
+};
+
+/*
 * Memory sharing operations
 */
/* XEN_DOMCTL_mem_sharing_op.
@@ -1142,6 +1160,7 @@ struct xen_domctl {
/* #define XEN_DOMCTL_set_gnttab_limits 80 - Moved into XEN_DOMCTL_createdomain */
 #define XEN_DOMCTL_vuart_op 81
 #define XEN_DOMCTL_get_cpu_policy 82
+#define XEN_DOMCTL_vm_event_ng_op 83
 #define XEN_DOMCTL_gdbsx_guestmemio 1000
 #define XEN_DOMCTL_gdbsx_pausevcpu 1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu 1002
@@ -1183,6 +1202,7 @@ struct xen_domctl {
         struct xen_domctl_subscribe subscribe;
         struct xen_domctl_debug_op debug_op;
         struct xen_domctl_vm_event_op vm_event_op;
+        struct xen_domctl_vm_event_ng_op vm_event_ng_op;
         struct xen_domctl_mem_sharing_op mem_sharing_op;
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_domctl_cpuid cpuid;
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 68ddadb..2e8912e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -612,6 +612,7 @@ struct xen_mem_acquire_resource {

 #define XENMEM_resource_ioreq_server 0
 #define XENMEM_resource_grant_table 1
+#define XENMEM_resource_vm_event 2

 /*
 * IN - a type-specific resource identifier, which must be zero
@@ -619,6 +620,7 @@ struct xen_mem_acquire_resource {
 *
 * type == XENMEM_resource_ioreq_server -> id == ioreq server id
 * type == XENMEM_resource_grant_table -> id defined below
+ * type == XENMEM_resource_vm_event -> id == vm_event type
 */
 uint32_t id;

diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index c48bc21..2f2160b 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -421,6 +421,22 @@ typedef struct vm_event_st {

 DEFINE_RING_TYPES(vm_event, vm_event_request_t, vm_event_response_t);

+/* VM Event slot state */
+#define STATE_VM_EVENT_SLOT_IDLE   0 /* the slot data is invalid */
+#define STATE_VM_EVENT_SLOT_SUBMIT 1 /* a request was submitted */
+#define STATE_VM_EVENT_SLOT_FINISH 2 /* a response was issued */
+
+struct vm_event_slot
+{
+    uint32_t port;      /* evtchn for notifications to/from helper */
+    uint32_t state:4;
+    uint32_t pad:28;
+    union {
+        vm_event_request_t req;
+        vm_event_response_t rsp;
+    } u;
+};
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 #endif /* _XEN_PUBLIC_VM_EVENT_H */

diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 15c15e6..df0aafc 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -110,6 +110,13 @@ static inline void vm_event_put_request(struct vm_event_domain *ved,
 int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);

+int vm_event_ng_domctl(struct domain *d, struct xen_domctl_vm_event_ng_op *vec,
+                       XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+
+int vm_event_ng_get_frames(struct domain *d, unsigned int id,
+                           unsigned long frame, unsigned int nr_frames,
+                           xen_pfn_t mfn_list[]);
+
 void vm_event_vcpu_pause(struct vcpu *v);
 void vm_event_vcpu_unpause(struct vcpu *v);

@@ -118,6 +125,9 @@ void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp);

 void vm_event_monitor_next_interrupt(struct vcpu *v);

+void vm_event_handle_response(struct domain *d, struct vcpu *v,
+                              vm_event_response_t *rsp);
+
 #endif /* __VM_EVENT_H__ */

 /*
--
2.7.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From: Petre Pircalabu
To: xen-devel@lists.xenproject.org
Date: Thu, 30 May 2019 17:18:23 +0300
Message-Id: <2da0f80cc9af391f623466f8152a1728274a967b.1559224640.git.ppircalabu@bitdefender.com>
Subject: [Xen-devel] [PATCH 9/9] xen-access: Add support for vm_event_ng interface
Cc: Petre Pircalabu, Tamas K Lengyel, Ian Jackson, Wei Liu, Razvan Cojocaru

Split xen-access in order to accommodate both vm_event interfaces (legacy
and NG). By default, the legacy vm_event interface is selected, but this
can be changed by passing the '-n' flag on the command line.
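The '-m'/'-n' option handling this patch adds to xen-access can be sketched in isolation. The option names match the patch's `long_options` table; the helper function and its call shape are an assumption for illustration, since in the patch the parsing loop lives directly in main().

```c
#include <getopt.h>
#include <stddef.h>

/* Parse -m/--mem-access-listener and -n/--new-interface, as the patch does. */
static void parse_opts(int argc, char **argv, int *required, int *new_interface)
{
    static struct option long_options[] = {
        { "mem-access-listener", no_argument, 0, 'm' },
        { "new-interface",       no_argument, 0, 'n' },
        { 0, 0, 0, 0 }
    };
    int c;

    optind = 1;  /* reset scanner so the helper can be called repeatedly */
    *required = *new_interface = 0;
    while ( (c = getopt_long(argc, argv, "mn", long_options, NULL)) != -1 )
    {
        switch ( c )
        {
        case 'm': *required = 1;      break;
        case 'n': *new_interface = 1; break;
        }
    }
    /* Remaining positional args (domain id, command) start at argv[optind]. */
}
```

In the patch, `new_interface` then selects between the two `vm_event_ops_t` implementations: `channel_ops` (NG, per-vcpu slots) when set, `ring_ops` (legacy shared ring) otherwise.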
Signed-off-by: Petre Pircalabu
---
 tools/tests/xen-access/Makefile      |   7 +-
 tools/tests/xen-access/vm-event-ng.c | 210 ++++++++++++++++++
 tools/tests/xen-access/vm-event.c    | 193 +++++++++++++++++
 tools/tests/xen-access/xen-access.c  | 408 +++++++++++++----------------------
 tools/tests/xen-access/xen-access.h  |  91 ++++++++
 5 files changed, 644 insertions(+), 265 deletions(-)
 create mode 100644 tools/tests/xen-access/vm-event-ng.c
 create mode 100644 tools/tests/xen-access/vm-event.c
 create mode 100644 tools/tests/xen-access/xen-access.h

diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
index 131c9f3..17760d8 100644
--- a/tools/tests/xen-access/Makefile
+++ b/tools/tests/xen-access/Makefile
@@ -7,6 +7,7 @@ CFLAGS += -DXC_WANT_COMPAT_DEVICEMODEL_API
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenguest)
 CFLAGS += $(CFLAGS_libxenevtchn)
+CFLAGS += $(CFLAGS_libxenforeignmemory)
 CFLAGS += $(CFLAGS_xeninclude)
 
 TARGETS-y := xen-access
@@ -25,8 +26,10 @@ clean:
 .PHONY: distclean
 distclean: clean
 
-xen-access: xen-access.o Makefile
-	$(CC) -o $@ $< $(LDFLAGS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenevtchn)
+OBJS = xen-access.o vm-event.o vm-event-ng.o
+
+xen-access: $(OBJS) Makefile
+	$(CC) -o $@ $(OBJS) $(LDFLAGS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenforeignmemory)
 
 install uninstall:
 
diff --git a/tools/tests/xen-access/vm-event-ng.c b/tools/tests/xen-access/vm-event-ng.c
new file mode 100644
index 0000000..9177cfc
--- /dev/null
+++ b/tools/tests/xen-access/vm-event-ng.c
@@ -0,0 +1,210 @@
+/*
+ * vm-event-ng.c
+ *
+ * Copyright (c) 2019 Bitdefender S.R.L.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include "xen-access.h"
+
+#ifndef PFN_UP
+#define PFN_UP(x) (((x) + XC_PAGE_SIZE-1) >> XC_PAGE_SHIFT)
+#endif /* PFN_UP */
+
+typedef struct vm_event_channels
+{
+    vm_event_t vme;
+    int num_vcpus;
+    xenforeignmemory_handle *fmem;
+    xenforeignmemory_resource_handle *fres;
+    struct vm_event_slot *slots;
+    int ports[0];
+} vm_event_channels_t;
+
+#define to_channels(_vme) container_of((_vme), vm_event_channels_t, vme)
+
+static int vm_event_channels_init(xc_interface *xch, xenevtchn_handle *xce,
+                                  domid_t domain_id, vm_event_ops_t *ops,
+                                  vm_event_t **vm_event)
+{
+    vm_event_channels_t *impl = NULL;
+    int rc, i, num_vcpus;
+    xc_dominfo_t info;
+    unsigned long nr_frames;
+
+    /* Get the numbers of vcpus */
+    rc = xc_domain_getinfo(xch, domain_id, 1, &info);
+    if ( rc != 1 )
+    {
+        ERROR("xc_domain_getinfo failed.
rc =3D %d\n", rc); + return rc; + } + + num_vcpus =3D info.max_vcpu_id + 1; + + impl =3D (vm_event_channels_t *)calloc(1, sizeof(vm_event_channels_t) + + num_vcpus * sizeof(int)); + if ( !impl ) + return -ENOMEM; + + impl->num_vcpus =3D num_vcpus; + + impl->fmem =3D xenforeignmemory_open(0,0); + if ( !impl->fmem ) + { + rc =3D -errno; + goto err; + } + + rc =3D xc_monitor_ng_create(xch, domain_id); + if ( rc ) + { + ERROR("Failed to enable monitor"); + goto err; + } + + nr_frames =3D PFN_UP(num_vcpus * sizeof(struct vm_event_slot)); + + impl->fres =3D xenforeignmemory_map_resource(impl->fmem, domain_id, + XENMEM_resource_vm_event, + XEN_VM_EVENT_TYPE_MONITOR, = 0, + nr_frames, (void*)&impl->sl= ots, + PROT_READ | PROT_WRITE, 0); + if ( !impl->fres ) + { + ERROR("Failed to map vm_event resource"); + rc =3D -errno; + goto err; + } + + for ( i =3D 0; i < impl->num_vcpus; i++) + { + rc =3D xenevtchn_bind_interdomain(xce, domain_id, impl->slots[i].p= ort); + if ( rc < 0 ) + { + ERROR("Failed to bind vm_event_slot port for vcpu %d", i); + rc =3D -errno; + goto err; + } + + impl->ports[i] =3D rc; + } + + rc =3D xc_monitor_ng_set_state(xch, domain_id, true); + if ( rc < 0 ) + { + ERROR("Failed to start monitor rc =3D %d", rc); + goto err; + } + + + *vm_event =3D (vm_event_t*) impl; + return 0; + +err: + xc_monitor_ng_destroy(xch, domain_id); + xenforeignmemory_unmap_resource(impl->fmem, impl->fres); + xenforeignmemory_close(impl->fmem); + free(impl); + return rc; +} + +static int vcpu_id_by_port(vm_event_channels_t *impl, int port, int *vcpu_= id) +{ + int i; + + for ( i =3D 0; i < impl->num_vcpus; i++ ) + { + if ( port =3D=3D impl->ports[i] ) + { + *vcpu_id =3D i; + return 0; + } + } + + return -EINVAL; +} + +static int vm_event_channels_teardown(vm_event_t *vm_event) +{ + vm_event_channels_t *impl =3D to_channels(vm_event); + + xc_monitor_ng_destroy(impl->vme.xch, impl->vme.domain_id); + xenforeignmemory_unmap_resource(impl->fmem, impl->fres); + 
xenforeignmemory_close(impl->fmem); + + return 0; +} + +static bool vm_event_channels_get_request(vm_event_t *vm_event, vm_event_r= equest_t *req, int *port) +{ + int vcpu_id; + vm_event_channels_t *impl =3D to_channels(vm_event); + + if ( vcpu_id_by_port(impl, *port, &vcpu_id) !=3D 0 ) + return false; + + if ( impl->slots[vcpu_id].state !=3D STATE_VM_EVENT_SLOT_SUBMIT ) + return false; + + memcpy(req, &impl->slots[vcpu_id].u.req, sizeof(*req)); + + return true; +} + +static void vm_event_channels_put_response(vm_event_t *vm_event, vm_event_= response_t *rsp, int port) +{ + int vcpu_id; + vm_event_channels_t *impl =3D to_channels(vm_event); + + if ( vcpu_id_by_port(impl, port, &vcpu_id) !=3D 0 ) + return; + + memcpy(&impl->slots[vcpu_id].u.rsp, rsp, sizeof(*rsp)); + impl->slots[vcpu_id].state =3D STATE_VM_EVENT_SLOT_FINISH; +} + +static int vm_event_channels_notify_port(vm_event_t *vm_event, int port) +{ + return xenevtchn_notify(vm_event->xce, port); +} + +vm_event_ops_t channel_ops =3D { + .get_request =3D vm_event_channels_get_request, + .put_response =3D vm_event_channels_put_response, + .notify_port =3D vm_event_channels_notify_port, + .init =3D vm_event_channels_init, + .teardown =3D vm_event_channels_teardown +}; + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: +- */ diff --git a/tools/tests/xen-access/vm-event.c b/tools/tests/xen-access/vm-= event.c new file mode 100644 index 0000000..ffd5476 --- /dev/null +++ b/tools/tests/xen-access/vm-event.c @@ -0,0 +1,193 @@ +/* + * vm-event.c + * + * Copyright (c) 2019 Bitdefender S.R.L. 
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */ + +#include +#include +#include +#include +#include +#include "xen-access.h" + +typedef struct vm_event_ring { + vm_event_t vme; + int port; + vm_event_back_ring_t back_ring; + uint32_t evtchn_port; + void *ring_page; +} vm_event_ring_t; + +#define to_ring(_vme) container_of((_vme), vm_event_ring_t, vme) + +static int vm_event_ring_init(xc_interface *xch, xenevtchn_handle *xce, + domid_t domain_id, vm_event_ops_t *ops, + vm_event_t **vm_event) +{ + vm_event_ring_t *impl; + int rc; + + impl =3D (vm_event_ring_t*) calloc (1, sizeof(vm_event_ring_t)); + if ( !impl ) + return -ENOMEM; + + /* Enable mem_access */ + impl->ring_page =3D xc_monitor_enable(xch, domain_id, &impl->evtchn_po= rt); + if ( impl->ring_page =3D=3D NULL ) + { + switch ( errno ) { + case EBUSY: + ERROR("xenaccess is (or was) active on this domain"); + break; + case ENODEV: + ERROR("EPT not supported for this guest"); + break; + default: + perror("Error enabling mem_access"); + break; + } + rc =3D -errno; + goto err; + } + + /* Bind event notification */ + rc =3D xenevtchn_bind_interdomain(xce, domain_id, impl->evtchn_port); + if ( rc < 0 ) + { + ERROR("Failed to bind event channel"); + munmap(impl->ring_page, XC_PAGE_SIZE); + xc_monitor_disable(xch, domain_id); + goto err; + } + + impl->port =3D rc; + + /* Initialise ring */ + SHARED_RING_INIT((vm_event_sring_t *)impl->ring_page); + BACK_RING_INIT(&impl->back_ring, (vm_event_sring_t *)impl->ring_page, + XC_PAGE_SIZE); + + *vm_event =3D (vm_event_t*) impl; + return 0; + +err: + free(impl); + return rc; +} + +static int vm_event_ring_teardown(vm_event_t *vm_event) +{ + vm_event_ring_t *impl =3D to_ring(vm_event); + int rc; + + if ( impl->ring_page ) + munmap(impl->ring_page, XC_PAGE_SIZE); + + /* Tear down domain xenaccess in Xen */ + rc =3D xc_monitor_disable(vm_event->xch, vm_event->domain_id); + if ( rc !=3D 0 ) + { + ERROR("Error tearing down domain xenaccess in xen"); + return rc; + } + + /* Unbind VIRQ */ + rc =3D 
xenevtchn_unbind(vm_event->xce, impl->port); + if ( rc !=3D 0 ) + { + ERROR("Error unbinding event port"); + return rc; + } + + return 0; +} + +/* + * Note that this function is not thread safe. + */ +static bool vm_event_ring_get_request(vm_event_t *vm_event, vm_event_reque= st_t *req, int *port) +{ + vm_event_back_ring_t *back_ring; + RING_IDX req_cons; + vm_event_ring_t *impl =3D to_ring(vm_event); + + if ( !RING_HAS_UNCONSUMED_REQUESTS(&impl->back_ring) ) + return false; + + back_ring =3D &impl->back_ring; + req_cons =3D back_ring->req_cons; + + /* Copy request */ + memcpy(req, RING_GET_REQUEST(back_ring, req_cons), sizeof(*req)); + req_cons++; + + /* Update ring */ + back_ring->req_cons =3D req_cons; + back_ring->sring->req_event =3D req_cons + 1; + + *port =3D impl->port; + + return true; +} + +/* + * Note that this function is not thread safe. + */ +static void vm_event_ring_put_response(vm_event_t *vm_event, vm_event_resp= onse_t *rsp, int port) +{ + vm_event_back_ring_t *back_ring; + RING_IDX rsp_prod; + vm_event_ring_t *impl =3D to_ring(vm_event); + + back_ring =3D &impl->back_ring; + rsp_prod =3D back_ring->rsp_prod_pvt; + + /* Copy response */ + memcpy(RING_GET_RESPONSE(back_ring, rsp_prod), rsp, sizeof(*rsp)); + rsp_prod++; + + /* Update ring */ + back_ring->rsp_prod_pvt =3D rsp_prod; + RING_PUSH_RESPONSES(back_ring); +} + +static int vm_event_ring_notify_port(vm_event_t *vm_event, int port) +{ + return xenevtchn_notify(vm_event->xce, port); +} + +vm_event_ops_t ring_ops =3D { + .get_request =3D vm_event_ring_get_request, + .put_response =3D vm_event_ring_put_response, + .notify_port =3D vm_event_ring_notify_port, + .init =3D vm_event_ring_init, + .teardown =3D vm_event_ring_teardown +}; + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/x= en-access.c index 6aaee16..267d163 100644 --- 
a/tools/tests/xen-access/xen-access.c +++ b/tools/tests/xen-access/xen-access.c @@ -35,12 +35,8 @@ #include #include #include -#include #include - -#include -#include -#include +#include =20 #include =20 @@ -51,9 +47,7 @@ #define START_PFN 0ULL #endif =20 -#define DPRINTF(a, b...) fprintf(stderr, a, ## b) -#define ERROR(a, b...) fprintf(stderr, a "\n", ## b) -#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno)) +#include "xen-access.h" =20 /* From xen/include/asm-x86/processor.h */ #define X86_TRAP_DEBUG 1 @@ -62,32 +56,14 @@ /* From xen/include/asm-x86/x86-defns.h */ #define X86_CR4_PGE 0x00000080 /* enable global pages */ =20 -typedef struct vm_event { - domid_t domain_id; - xenevtchn_handle *xce_handle; - int port; - vm_event_back_ring_t back_ring; - uint32_t evtchn_port; - void *ring_page; -} vm_event_t; - -typedef struct xenaccess { - xc_interface *xc_handle; - - xen_pfn_t max_gpfn; - - vm_event_t vm_event; -} xenaccess_t; - static int interrupted; -bool evtchn_bind =3D 0, evtchn_open =3D 0, mem_access_enable =3D 0; =20 static void close_handler(int sig) { interrupted =3D sig; } =20 -int xc_wait_for_event_or_timeout(xc_interface *xch, xenevtchn_handle *xce,= unsigned long ms) +static int xc_wait_for_event_or_timeout(xc_interface *xch, xenevtchn_handl= e *xce, unsigned long ms) { struct pollfd fd =3D { .fd =3D xenevtchn_fd(xce), .events =3D POLLIN |= POLLERR }; int port; @@ -128,160 +104,86 @@ int xc_wait_for_event_or_timeout(xc_interface *xch, = xenevtchn_handle *xce, unsig return -errno; } =20 -int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess) +static int vm_event_teardown(vm_event_t *vm_event) { int rc; =20 - if ( xenaccess =3D=3D NULL ) + if ( vm_event =3D=3D NULL ) return 0; =20 - /* Tear down domain xenaccess in Xen */ - if ( xenaccess->vm_event.ring_page ) - munmap(xenaccess->vm_event.ring_page, XC_PAGE_SIZE); - - if ( mem_access_enable ) - { - rc =3D xc_monitor_disable(xenaccess->xc_handle, - 
xenaccess->vm_event.domain_id); - if ( rc !=3D 0 ) - { - ERROR("Error tearing down domain xenaccess in xen"); - return rc; - } - } - - /* Unbind VIRQ */ - if ( evtchn_bind ) - { - rc =3D xenevtchn_unbind(xenaccess->vm_event.xce_handle, - xenaccess->vm_event.port); - if ( rc !=3D 0 ) - { - ERROR("Error unbinding event port"); - return rc; - } - } + rc =3D vm_event->ops->teardown(vm_event); + if ( rc !=3D 0 ) + return rc; =20 /* Close event channel */ - if ( evtchn_open ) + rc =3D xenevtchn_close(vm_event->xce); + if ( rc !=3D 0 ) { - rc =3D xenevtchn_close(xenaccess->vm_event.xce_handle); - if ( rc !=3D 0 ) - { - ERROR("Error closing event channel"); - return rc; - } + ERROR("Error closing event channel"); + return rc; } =20 /* Close connection to Xen */ - rc =3D xc_interface_close(xenaccess->xc_handle); + rc =3D xc_interface_close(vm_event->xch); if ( rc !=3D 0 ) { ERROR("Error closing connection to xen"); return rc; } - xenaccess->xc_handle =3D NULL; - - free(xenaccess); =20 return 0; } =20 -xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id) +static vm_event_t *vm_event_init(domid_t domain_id, vm_event_ops_t *ops) { - xenaccess_t *xenaccess =3D 0; + vm_event_t *vm_event; xc_interface *xch; + xenevtchn_handle *xce; + xen_pfn_t max_gpfn; int rc; =20 + if ( !ops ) + return NULL; + xch =3D xc_interface_open(NULL, NULL, 0); if ( !xch ) - goto err_iface; + goto err; =20 DPRINTF("xenaccess init\n"); - *xch_r =3D xch; - - /* Allocate memory */ - xenaccess =3D malloc(sizeof(xenaccess_t)); - memset(xenaccess, 0, sizeof(xenaccess_t)); - - /* Open connection to xen */ - xenaccess->xc_handle =3D xch; - - /* Set domain id */ - xenaccess->vm_event.domain_id =3D domain_id; - - /* Enable mem_access */ - xenaccess->vm_event.ring_page =3D - xc_monitor_enable(xenaccess->xc_handle, - xenaccess->vm_event.domain_id, - &xenaccess->vm_event.evtchn_port); - if ( xenaccess->vm_event.ring_page =3D=3D NULL ) - { - switch ( errno ) { - case EBUSY: - ERROR("xenaccess is (or 
was) active on this domain"); - break; - case ENODEV: - ERROR("EPT not supported for this guest"); - break; - default: - perror("Error enabling mem_access"); - break; - } - goto err; - } - mem_access_enable =3D 1; =20 /* Open event channel */ - xenaccess->vm_event.xce_handle =3D xenevtchn_open(NULL, 0); - if ( xenaccess->vm_event.xce_handle =3D=3D NULL ) + xce =3D xenevtchn_open(NULL, 0); + if ( !xce ) { ERROR("Failed to open event channel"); goto err; } - evtchn_open =3D 1; - - /* Bind event notification */ - rc =3D xenevtchn_bind_interdomain(xenaccess->vm_event.xce_handle, - xenaccess->vm_event.domain_id, - xenaccess->vm_event.evtchn_port); - if ( rc < 0 ) - { - ERROR("Failed to bind event channel"); - goto err; - } - evtchn_bind =3D 1; - xenaccess->vm_event.port =3D rc; - - /* Initialise ring */ - SHARED_RING_INIT((vm_event_sring_t *)xenaccess->vm_event.ring_page); - BACK_RING_INIT(&xenaccess->vm_event.back_ring, - (vm_event_sring_t *)xenaccess->vm_event.ring_page, - XC_PAGE_SIZE); =20 /* Get max_gpfn */ - rc =3D xc_domain_maximum_gpfn(xenaccess->xc_handle, - xenaccess->vm_event.domain_id, - &xenaccess->max_gpfn); - + rc =3D xc_domain_maximum_gpfn(xch, domain_id, &max_gpfn); if ( rc ) { ERROR("Failed to get max gpfn"); goto err; } + DPRINTF("max_gpfn =3D %"PRI_xen_pfn"\n", max_gpfn); + + rc =3D ops->init(xch, xce, domain_id, ops, &vm_event); + if ( rc < 0 ) + goto err; =20 - DPRINTF("max_gpfn =3D %"PRI_xen_pfn"\n", xenaccess->max_gpfn); + vm_event->xch =3D xch; + vm_event->xce =3D xce; + vm_event->domain_id =3D domain_id; + vm_event->ops =3D ops; + vm_event->max_gpfn =3D max_gpfn; =20 - return xenaccess; + return vm_event; =20 err: - rc =3D xenaccess_teardown(xch, xenaccess); - if ( rc ) - { - ERROR("Failed to teardown xenaccess structure!\n"); - } + xenevtchn_close(xce); + xc_interface_close(xch); =20 - err_iface: return NULL; } =20 @@ -299,26 +201,6 @@ int control_singlestep( } =20 /* - * Note that this function is not thread safe. 
- */ -static void get_request(vm_event_t *vm_event, vm_event_request_t *req) -{ - vm_event_back_ring_t *back_ring; - RING_IDX req_cons; - - back_ring =3D &vm_event->back_ring; - req_cons =3D back_ring->req_cons; - - /* Copy request */ - memcpy(req, RING_GET_REQUEST(back_ring, req_cons), sizeof(*req)); - req_cons++; - - /* Update ring */ - back_ring->req_cons =3D req_cons; - back_ring->sring->req_event =3D req_cons + 1; -} - -/* * X86 control register names */ static const char* get_x86_ctrl_reg_name(uint32_t index) @@ -336,29 +218,9 @@ static const char* get_x86_ctrl_reg_name(uint32_t inde= x) return names[index]; } =20 -/* - * Note that this function is not thread safe. - */ -static void put_response(vm_event_t *vm_event, vm_event_response_t *rsp) -{ - vm_event_back_ring_t *back_ring; - RING_IDX rsp_prod; - - back_ring =3D &vm_event->back_ring; - rsp_prod =3D back_ring->rsp_prod_pvt; - - /* Copy response */ - memcpy(RING_GET_RESPONSE(back_ring, rsp_prod), rsp, sizeof(*rsp)); - rsp_prod++; - - /* Update ring */ - back_ring->rsp_prod_pvt =3D rsp_prod; - RING_PUSH_RESPONSES(back_ring); -} - void usage(char* progname) { - fprintf(stderr, "Usage: %s [-m] write|exec", progname); + fprintf(stderr, "Usage: %s [-m] [-n] write|exec", progname= ); #if defined(__i386__) || defined(__x86_64__) fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec|debug|cp= uid|desc_access|write_ctrlreg_cr4|altp2m_write_no_gpt"); #elif defined(__arm__) || defined(__aarch64__) @@ -368,19 +230,22 @@ void usage(char* progname) "\n" "Logs first page writes, execs, or breakpoint traps that occur= on the domain.\n" "\n" - "-m requires this program to run, or else the domain may pause= \n"); + "-m requires this program to run, or else the domain may pause= \n" + "-n uses the per-vcpu channels vm_event interface\n"); } =20 +extern vm_event_ops_t ring_ops; +extern vm_event_ops_t channel_ops; + int main(int argc, char *argv[]) { struct sigaction act; domid_t domain_id; - xenaccess_t *xenaccess; + 
vm_event_t *vm_event; vm_event_request_t req; vm_event_response_t rsp; int rc =3D -1; int rc1; - xc_interface *xch; xenmem_access_t default_access =3D XENMEM_access_rwx; xenmem_access_t after_first_access =3D XENMEM_access_rwx; int memaccess =3D 0; @@ -395,106 +260,122 @@ int main(int argc, char *argv[]) int write_ctrlreg_cr4 =3D 0; int altp2m_write_no_gpt =3D 0; uint16_t altp2m_view_id =3D 0; + int new_interface =3D 0; =20 char* progname =3D argv[0]; - argv++; - argc--; + char* command; + int c; + int option_index; + struct option long_options[] =3D + { + { "mem-access-listener", no_argument, 0, 'm' }, + { "new-interface", no_argument, 0, 'n' } + }; =20 - if ( argc =3D=3D 3 && argv[0][0] =3D=3D '-' ) + while(1) { - if ( !strcmp(argv[0], "-m") ) - required =3D 1; - else + c =3D getopt_long(argc, argv, "mn", long_options, &option_index); + if ( c =3D=3D -1 ) + break; + + switch (c) { - usage(progname); - return -1; + case 'm': + required =3D 1; + break; + + case 'n': + new_interface =3D 1; + break; + + default: + usage(progname); + return -1; } - argv++; - argc--; } =20 - if ( argc !=3D 2 ) + if ( argc - optind !=3D 2 ) { usage(progname); return -1; } =20 - domain_id =3D atoi(argv[0]); - argv++; - argc--; + domain_id =3D atoi(argv[optind++]); + command =3D argv[optind]; =20 - if ( !strcmp(argv[0], "write") ) + if ( !strcmp(command, "write") ) { default_access =3D XENMEM_access_rx; after_first_access =3D XENMEM_access_rwx; memaccess =3D 1; } - else if ( !strcmp(argv[0], "exec") ) + else if ( !strcmp(command, "exec") ) { default_access =3D XENMEM_access_rw; after_first_access =3D XENMEM_access_rwx; memaccess =3D 1; } #if defined(__i386__) || defined(__x86_64__) - else if ( !strcmp(argv[0], "breakpoint") ) + else if ( !strcmp(command, "breakpoint") ) { breakpoint =3D 1; } - else if ( !strcmp(argv[0], "altp2m_write") ) + else if ( !strcmp(command, "altp2m_write") ) { default_access =3D XENMEM_access_rx; altp2m =3D 1; memaccess =3D 1; } - else if ( !strcmp(argv[0], 
"altp2m_exec") ) + else if ( !strcmp(command, "altp2m_exec") ) { default_access =3D XENMEM_access_rw; altp2m =3D 1; memaccess =3D 1; } - else if ( !strcmp(argv[0], "altp2m_write_no_gpt") ) + else if ( !strcmp(command, "altp2m_write_no_gpt") ) { default_access =3D XENMEM_access_rw; altp2m_write_no_gpt =3D 1; memaccess =3D 1; altp2m =3D 1; } - else if ( !strcmp(argv[0], "debug") ) + else if ( !strcmp(command, "debug") ) { debug =3D 1; } - else if ( !strcmp(argv[0], "cpuid") ) + else if ( !strcmp(command, "cpuid") ) { cpuid =3D 1; } - else if ( !strcmp(argv[0], "desc_access") ) + else if ( !strcmp(command, "desc_access") ) { desc_access =3D 1; } - else if ( !strcmp(argv[0], "write_ctrlreg_cr4") ) + else if ( !strcmp(command, "write_ctrlreg_cr4") ) { write_ctrlreg_cr4 =3D 1; } #elif defined(__arm__) || defined(__aarch64__) - else if ( !strcmp(argv[0], "privcall") ) + else if ( !strcmp(command, "privcall") ) { privcall =3D 1; } #endif else { - usage(argv[0]); + usage(command); return -1; } =20 - xenaccess =3D xenaccess_init(&xch, domain_id); - if ( xenaccess =3D=3D NULL ) + vm_event =3D vm_event_init(domain_id, + (new_interface) ? 
&channel_ops : &ring_ops); + if ( vm_event =3D=3D NULL ) { - ERROR("Error initialising xenaccess"); + ERROR("Error initialising vm_event"); return 1; } =20 - DPRINTF("starting %s %u\n", argv[0], domain_id); + DPRINTF("starting %s %u\n", command, domain_id); =20 /* ensure that if we get a signal, we'll do cleanup, then exit */ act.sa_handler =3D close_handler; @@ -506,7 +387,7 @@ int main(int argc, char *argv[]) sigaction(SIGALRM, &act, NULL); =20 /* Set whether the access listener is required */ - rc =3D xc_domain_set_access_required(xch, domain_id, required); + rc =3D xc_domain_set_access_required(vm_event->xch, domain_id, require= d); if ( rc < 0 ) { ERROR("Error %d setting mem_access listener required\n", rc); @@ -521,13 +402,13 @@ int main(int argc, char *argv[]) =20 if( altp2m_write_no_gpt ) { - rc =3D xc_monitor_inguest_pagefault(xch, domain_id, 1); + rc =3D xc_monitor_inguest_pagefault(vm_event->xch, domain_id, = 1); if ( rc < 0 ) { ERROR("Error %d setting inguest pagefault\n", rc); goto exit; } - rc =3D xc_monitor_emul_unimplemented(xch, domain_id, 1); + rc =3D xc_monitor_emul_unimplemented(vm_event->xch, domain_id,= 1); if ( rc < 0 ) { ERROR("Error %d failed to enable emul unimplemented\n", rc= ); @@ -535,14 +416,15 @@ int main(int argc, char *argv[]) } } =20 - rc =3D xc_altp2m_set_domain_state( xch, domain_id, 1 ); + rc =3D xc_altp2m_set_domain_state( vm_event->xch, domain_id, 1 ); if ( rc < 0 ) { ERROR("Error %d enabling altp2m on domain!\n", rc); goto exit; } =20 - rc =3D xc_altp2m_create_view( xch, domain_id, default_access, &alt= p2m_view_id ); + rc =3D xc_altp2m_create_view( vm_event->xch, domain_id, default_ac= cess, + &altp2m_view_id ); if ( rc < 0 ) { ERROR("Error %d creating altp2m view!\n", rc); @@ -552,24 +434,24 @@ int main(int argc, char *argv[]) DPRINTF("altp2m view created with id %u\n", altp2m_view_id); DPRINTF("Setting altp2m mem_access permissions.. 
"); =20 - for(; gfn < xenaccess->max_gpfn; ++gfn) + for(; gfn < vm_event->max_gpfn; ++gfn) { - rc =3D xc_altp2m_set_mem_access( xch, domain_id, altp2m_view_i= d, gfn, - default_access); + rc =3D xc_altp2m_set_mem_access( vm_event->xch, domain_id, + altp2m_view_id, gfn, default_ac= cess); if ( !rc ) perm_set++; } =20 DPRINTF("done! Permissions set on %lu pages.\n", perm_set); =20 - rc =3D xc_altp2m_switch_to_view( xch, domain_id, altp2m_view_id ); + rc =3D xc_altp2m_switch_to_view( vm_event->xch, domain_id, altp2m_= view_id ); if ( rc < 0 ) { ERROR("Error %d switching to altp2m view!\n", rc); goto exit; } =20 - rc =3D xc_monitor_singlestep( xch, domain_id, 1 ); + rc =3D xc_monitor_singlestep( vm_event->xch, domain_id, 1 ); if ( rc < 0 ) { ERROR("Error %d failed to enable singlestep monitoring!\n", rc= ); @@ -580,15 +462,15 @@ int main(int argc, char *argv[]) if ( memaccess && !altp2m ) { /* Set the default access type and convert all pages to it */ - rc =3D xc_set_mem_access(xch, domain_id, default_access, ~0ull, 0); + rc =3D xc_set_mem_access(vm_event->xch, domain_id, default_access,= ~0ull, 0); if ( rc < 0 ) { ERROR("Error %d setting default mem access type\n", rc); goto exit; } =20 - rc =3D xc_set_mem_access(xch, domain_id, default_access, START_PFN, - (xenaccess->max_gpfn - START_PFN) ); + rc =3D xc_set_mem_access(vm_event->xch, domain_id, default_access,= START_PFN, + (vm_event->max_gpfn - START_PFN) ); =20 if ( rc < 0 ) { @@ -600,7 +482,7 @@ int main(int argc, char *argv[]) =20 if ( breakpoint ) { - rc =3D xc_monitor_software_breakpoint(xch, domain_id, 1); + rc =3D xc_monitor_software_breakpoint(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting breakpoint trapping with vm_event\n", = rc); @@ -610,7 +492,7 @@ int main(int argc, char *argv[]) =20 if ( debug ) { - rc =3D xc_monitor_debug_exceptions(xch, domain_id, 1, 1); + rc =3D xc_monitor_debug_exceptions(vm_event->xch, domain_id, 1, 1); if ( rc < 0 ) { ERROR("Error %d setting debug 
exception listener with vm_event= \n", rc); @@ -620,7 +502,7 @@ int main(int argc, char *argv[]) =20 if ( cpuid ) { - rc =3D xc_monitor_cpuid(xch, domain_id, 1); + rc =3D xc_monitor_cpuid(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting cpuid listener with vm_event\n", rc); @@ -630,7 +512,7 @@ int main(int argc, char *argv[]) =20 if ( desc_access ) { - rc =3D xc_monitor_descriptor_access(xch, domain_id, 1); + rc =3D xc_monitor_descriptor_access(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting descriptor access listener with vm_eve= nt\n", rc); @@ -640,7 +522,7 @@ int main(int argc, char *argv[]) =20 if ( privcall ) { - rc =3D xc_monitor_privileged_call(xch, domain_id, 1); + rc =3D xc_monitor_privileged_call(vm_event->xch, domain_id, 1); if ( rc < 0 ) { ERROR("Error %d setting privileged call trapping with vm_event= \n", rc); @@ -651,7 +533,7 @@ int main(int argc, char *argv[]) if ( write_ctrlreg_cr4 ) { /* Mask the CR4.PGE bit so no events will be generated for global = TLB flushes. 
 */
-    rc = xc_monitor_write_ctrlreg(xch, domain_id, VM_EVENT_X86_CR4, 1, 1,
+    rc = xc_monitor_write_ctrlreg(vm_event->xch, domain_id, VM_EVENT_X86_CR4, 1, 1,
                                   X86_CR4_PGE, 1);
     if ( rc < 0 )
     {
@@ -663,41 +545,43 @@ int main(int argc, char *argv[])
     /* Wait for access */
     for (;;)
     {
+        int port = 0;
+
         if ( interrupted )
         {
             /* Unregister for every event */
             DPRINTF("xenaccess shutting down on signal %d\n", interrupted);

             if ( breakpoint )
-                rc = xc_monitor_software_breakpoint(xch, domain_id, 0);
+                rc = xc_monitor_software_breakpoint(vm_event->xch, domain_id, 0);
             if ( debug )
-                rc = xc_monitor_debug_exceptions(xch, domain_id, 0, 0);
+                rc = xc_monitor_debug_exceptions(vm_event->xch, domain_id, 0, 0);
             if ( cpuid )
-                rc = xc_monitor_cpuid(xch, domain_id, 0);
+                rc = xc_monitor_cpuid(vm_event->xch, domain_id, 0);
             if ( desc_access )
-                rc = xc_monitor_descriptor_access(xch, domain_id, 0);
+                rc = xc_monitor_descriptor_access(vm_event->xch, domain_id, 0);
             if ( write_ctrlreg_cr4 )
-                rc = xc_monitor_write_ctrlreg(xch, domain_id, VM_EVENT_X86_CR4, 0, 0, 0, 0);
+                rc = xc_monitor_write_ctrlreg(vm_event->xch, domain_id, VM_EVENT_X86_CR4, 0, 0, 0, 0);

             if ( privcall )
-                rc = xc_monitor_privileged_call(xch, domain_id, 0);
+                rc = xc_monitor_privileged_call(vm_event->xch, domain_id, 0);

             if ( altp2m )
             {
-                rc = xc_altp2m_switch_to_view( xch, domain_id, 0 );
-                rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id);
-                rc = xc_altp2m_set_domain_state(xch, domain_id, 0);
-                rc = xc_monitor_singlestep(xch, domain_id, 0);
+                rc = xc_altp2m_switch_to_view( vm_event->xch, domain_id, 0 );
+                rc = xc_altp2m_destroy_view(vm_event->xch, domain_id, altp2m_view_id);
+                rc = xc_altp2m_set_domain_state(vm_event->xch, domain_id, 0);
+                rc = xc_monitor_singlestep(vm_event->xch, domain_id, 0);
             }
             else
             {
-                rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
-                rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
-                                       (xenaccess->max_gpfn - START_PFN) );
+                rc = xc_set_mem_access(vm_event->xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
+                rc = xc_set_mem_access(vm_event->xch, domain_id, XENMEM_access_rwx, START_PFN,
+                                       (vm_event->max_gpfn - START_PFN) );
             }

             shutting_down = 1;
         }

-        rc = xc_wait_for_event_or_timeout(xch, xenaccess->vm_event.xce_handle, 100);
+        rc = xc_wait_for_event_or_timeout(vm_event->xch, vm_event->xce, 100);
         if ( rc < -1 )
         {
             ERROR("Error getting event");
@@ -709,10 +593,10 @@ int main(int argc, char *argv[])
             DPRINTF("Got event from Xen\n");
         }

-        while ( RING_HAS_UNCONSUMED_REQUESTS(&xenaccess->vm_event.back_ring) )
-        {
-            get_request(&xenaccess->vm_event, &req);
+        port = rc;

+        while ( vm_event->ops->get_request(vm_event, &req, &port) )
+        {
             if ( req.version != VM_EVENT_INTERFACE_VERSION )
             {
                 ERROR("Error: vm_event interface version mismatch!\n");
@@ -735,7 +619,7 @@ int main(int argc, char *argv[])
                      * At shutdown we have already reset all the permissions so really no use getting it again.
                      */
                     xenmem_access_t access;
-                    rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
+                    rc = xc_get_mem_access(vm_event->xch, domain_id, req.u.mem_access.gfn, &access);
                     if (rc < 0)
                     {
                         ERROR("Error %d getting mem_access event\n", rc);
@@ -768,7 +652,7 @@ int main(int argc, char *argv[])
                 }
                 else if ( default_access != after_first_access )
                 {
-                    rc = xc_set_mem_access(xch, domain_id, after_first_access,
+                    rc = xc_set_mem_access(vm_event->xch, domain_id, after_first_access,
                                            req.u.mem_access.gfn, 1);
                     if (rc < 0)
                     {
@@ -788,7 +672,7 @@ int main(int argc, char *argv[])
                         req.vcpu_id);

                /* Reinject */
-               rc = xc_hvm_inject_trap(xch, domain_id, req.vcpu_id,
+               rc = xc_hvm_inject_trap(vm_event->xch, domain_id, req.vcpu_id,
                                        X86_TRAP_INT3,
                                        req.u.software_breakpoint.type, -1,
                                        req.u.software_breakpoint.insn_length, 0);
@@ -833,7 +717,7 @@ int main(int argc, char *argv[])
                        req.u.debug_exception.insn_length);

                /* Reinject */
-               rc = xc_hvm_inject_trap(xch, domain_id, req.vcpu_id,
+               rc = xc_hvm_inject_trap(vm_event->xch, domain_id, req.vcpu_id,
                                        X86_TRAP_DEBUG,
                                        req.u.debug_exception.type, -1,
                                        req.u.debug_exception.insn_length,
@@ -896,17 +780,15 @@ int main(int argc, char *argv[])
            }

            /* Put the response on the ring */
-           put_response(&xenaccess->vm_event, &rsp);
-       }
-
-       /* Tell Xen page is ready */
-       rc = xenevtchn_notify(xenaccess->vm_event.xce_handle,
-                             xenaccess->vm_event.port);
+           put_response(vm_event, &rsp, port);

-       if ( rc != 0 )
-       {
-           ERROR("Error resuming page");
-           interrupted = -1;
+           /* Tell Xen page is ready */
+           rc = notify_port(vm_event, port);
+           if ( rc != 0 )
+           {
+               ERROR("Error resuming page");
+               interrupted = -1;
+           }
        }

        if ( shutting_down )
@@ -919,13 +801,13 @@ exit:
    {
        uint32_t vcpu_id;
        for ( vcpu_id = 0; vcpu_idxch, domain_id, vcpu_id, 0);
    }

-   /* Tear down domain xenaccess */
-   rc1 = xenaccess_teardown(xch, xenaccess);
+   /* Tear down domain */
+   rc1 = vm_event_teardown(vm_event);
    if ( rc1 != 0 )
-       ERROR("Error tearing down xenaccess");
+       ERROR("Error tearing down vm_event");

    if ( rc == 0 )
        rc = rc1;
diff --git a/tools/tests/xen-access/xen-access.h b/tools/tests/xen-access/xen-access.h
new file mode 100644
index 0000000..9fc640c
--- /dev/null
+++ b/tools/tests/xen-access/xen-access.h
@@ -0,0 +1,91 @@
+/*
+ * xen-access.h
+ *
+ * Copyright (c) 2019 Bitdefender S.R.L.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef XEN_ACCESS_H
+#define XEN_ACCESS_H
+
+#include
+#include
+#include
+
+#ifndef container_of
+#define container_of(ptr, type, member) ({                      \
+        const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
+        (type *)( (char *)__mptr - offsetof(type,member) );})
+#endif /* container_of */
+
+#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
+#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
+#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
+
+struct vm_event_ops;
+
+typedef struct vm_event {
+    xc_interface *xch;
+    domid_t domain_id;
+    xenevtchn_handle *xce;
+    xen_pfn_t max_gpfn;
+    struct vm_event_ops *ops;
+} vm_event_t;
+
+typedef struct vm_event_ops {
+    int (*init)(xc_interface *, xenevtchn_handle *, domid_t,
+                struct vm_event_ops *, vm_event_t **);
+    int (*teardown)(vm_event_t *);
+    bool (*get_request)(vm_event_t *, vm_event_request_t *, int *);
+    void (*put_response)(vm_event_t *, vm_event_response_t *, int);
+    int (*notify_port)(vm_event_t *, int port);
+} vm_event_ops_t;
+
+static inline bool get_request(vm_event_t *vm_event, vm_event_request_t *req,
+                               int *port)
+{
+    return ( vm_event ) ? vm_event->ops->get_request(vm_event, req, port) :
+                          false;
+}
+
+static inline void put_response(vm_event_t *vm_event, vm_event_response_t *rsp, int port)
+{
+    if ( vm_event )
+        vm_event->ops->put_response(vm_event, rsp, port);
+}
+
+static inline int notify_port(vm_event_t *vm_event, int port)
+{
+    if ( !vm_event )
+        return -EINVAL;
+
+    return vm_event->ops->notify_port(vm_event, port);
+}
+
+#endif /* XEN_ACCESS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel