From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
 Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall,
 Stefano Stabellini, Jun Nakajima, Kevin Tian, Julien Grall
Subject: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
Date: Thu, 15 Oct 2020 19:44:23 +0300
Message-Id: <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch removes "hvm" prefixes and infixes from IOREQ related
function names in the common code.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch
---
 xen/arch/x86/hvm/emulate.c      |   6 +-
 xen/arch/x86/hvm/hvm.c          |  10 +-
 xen/arch/x86/hvm/io.c           |   6 +-
 xen/arch/x86/hvm/stdvga.c       |   4 +-
 xen/arch/x86/hvm/vmx/vvmx.c     |   2 +-
 xen/common/dm.c                 |  28 ++---
 xen/common/ioreq.c              | 240 ++++++++++++++++++++--------------------
 xen/common/memory.c             |   2 +-
 xen/include/asm-x86/hvm/ioreq.h |  16 +--
 xen/include/xen/ioreq.h         |  58 +++++-----
 10 files changed, 186 insertions(+), 186 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index f6a4eef..54cd493 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -261,7 +261,7 @@ static int hvmemul_do_io(
      * an ioreq server that can handle it.
      *
      * Rules:
-     * A> PIO or MMIO accesses run through hvm_select_ioreq_server() to
+     * A> PIO or MMIO accesses run through select_ioreq_server() to
      *    choose the ioreq server by range. If no server is found, the access
      *    is ignored.
      *
@@ -323,7 +323,7 @@ static int hvmemul_do_io(
     }
 
     if ( !s )
-        s = hvm_select_ioreq_server(currd, &p);
+        s = select_ioreq_server(currd, &p);
 
     /* If there is no suitable backing DM, just ignore accesses */
     if ( !s )
@@ -333,7 +333,7 @@ static int hvmemul_do_io(
     }
     else
     {
-        rc = hvm_send_ioreq(s, &p, 0);
+        rc = send_ioreq(s, &p, 0);
         if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
             vio->io_req.state = STATE_IOREQ_NONE;
         else if ( !ioreq_needs_completion(&vio->io_req) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 341093b..1e788b5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -546,7 +546,7 @@ void hvm_do_resume(struct vcpu *v)
 
     pt_restore_timer(v);
 
-    if ( !handle_hvm_io_completion(v) )
+    if ( !handle_io_completion(v) )
         return;
 
     if ( unlikely(v->arch.vm_event) )
@@ -677,7 +677,7 @@ int hvm_domain_initialise(struct domain *d)
     register_g2m_portio_handler(d);
     register_vpci_portio_handler(d);
 
-    hvm_ioreq_init(d);
+    ioreq_init(d);
 
     hvm_init_guest_time(d);
 
@@ -739,7 +739,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
 
     viridian_domain_deinit(d);
 
-    hvm_destroy_all_ioreq_servers(d);
+    destroy_all_ioreq_servers(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1582,7 +1582,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc )
         goto fail5;
 
-    rc = hvm_all_ioreq_servers_add_vcpu(d, v);
+    rc = all_ioreq_servers_add_vcpu(d, v);
     if ( rc != 0 )
         goto fail6;
 
@@ -1618,7 +1618,7 @@ void hvm_vcpu_destroy(struct vcpu *v)
 {
     viridian_vcpu_deinit(v);
 
-    hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
+    all_ioreq_servers_remove_vcpu(v->domain, v);
 
     if ( hvm_altp2m_supported() )
         altp2m_vcpu_destroy(v);
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 36584de..2d03ffe 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -60,7 +60,7 @@ void send_timeoffset_req(unsigned long timeoff)
     if ( timeoff == 0 )
         return;
 
-    if ( hvm_broadcast_ioreq(&p, true) != 0 )
+    if ( broadcast_ioreq(&p, true) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
@@ -74,7 +74,7 @@ void send_invalidate_req(void)
         .data = ~0UL, /* flush all */
     };
 
-    if ( hvm_broadcast_ioreq(&p, false) != 0 )
+    if ( broadcast_ioreq(&p, false) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
 }
 
@@ -155,7 +155,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
          * We should not advance RIP/EIP if the domain is shutting down or
          * if X86EMUL_RETRY has been returned by an internal handler.
          */
-        if ( curr->domain->is_shutting_down || !hvm_io_pending(curr) )
+        if ( curr->domain->is_shutting_down || !io_pending(curr) )
            return false;
        break;
 
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index bafb3f6..cb1cc7f 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
     }
 
 done:
-    srv = hvm_select_ioreq_server(current->domain, &p);
+    srv = select_ioreq_server(current->domain, &p);
     if ( !srv )
         return X86EMUL_UNHANDLEABLE;
 
-    return hvm_send_ioreq(srv, &p, 1);
+    return send_ioreq(srv, &p, 1);
 }
 
 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 3a37e9e..d5a17f12 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1516,7 +1516,7 @@ void nvmx_switch_guest(void)
      * don't want to continue as this setup is not implemented nor supported
      * as of right now.
      */
-    if ( hvm_io_pending(v) )
+    if ( io_pending(v) )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
diff --git a/xen/common/dm.c b/xen/common/dm.c
index 36e01a2..f3a8353 100644
--- a/xen/common/dm.c
+++ b/xen/common/dm.c
@@ -100,8 +100,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad[0] || data->pad[1] || data->pad[2] )
             break;
 
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+        rc = create_ioreq_server(d, data->handle_bufioreq,
+                                 &data->id);
         break;
     }
 
@@ -117,12 +117,12 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->flags & ~valid_flags )
             break;
 
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : (unsigned long *)&data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : (unsigned long *)&data->bufioreq_gfn,
-                                       &data->bufioreq_port);
+        rc = get_ioreq_server_info(d, data->id,
+                                   (data->flags & XEN_DMOP_no_gfns) ?
+                                   NULL : (unsigned long *)&data->ioreq_gfn,
+                                   (data->flags & XEN_DMOP_no_gfns) ?
+                                   NULL : (unsigned long *)&data->bufioreq_gfn,
+                                   &data->bufioreq_port);
         break;
     }
 
@@ -135,8 +135,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
+        rc = map_io_range_to_ioreq_server(d, data->id, data->type,
+                                          data->start, data->end);
         break;
     }
 
@@ -149,8 +149,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
+        rc = unmap_io_range_from_ioreq_server(d, data->id, data->type,
+                                              data->start, data->end);
         break;
     }
 
@@ -163,7 +163,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        rc = set_ioreq_server_state(d, data->id, !!data->enabled);
         break;
     }
 
@@ -176,7 +176,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_destroy_ioreq_server(d, data->id);
+        rc = destroy_ioreq_server(d, data->id);
         break;
     }
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 57ddaaa..98fffae 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -58,7 +58,7 @@ struct ioreq_server *get_ioreq_server(const struct domain *d,
  * Iterate over all possible ioreq servers.
  *
  * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
+ *       ioreq servers are favoured in select_ioreq_server().
  *       This is a semantic that previously existed when ioreq servers
  *       were held in a linked list.
  */
@@ -105,12 +105,12 @@ static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
     return NULL;
 }
 
-bool hvm_io_pending(struct vcpu *v)
+bool io_pending(struct vcpu *v)
 {
     return get_pending_vcpu(v, NULL);
 }
 
-static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
+static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -167,7 +167,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
-bool handle_hvm_io_completion(struct vcpu *v)
+bool handle_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct vcpu_io *vio = &v->io;
@@ -182,7 +182,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     }
 
     sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
+    if ( sv && !wait_for_io(sv, get_ioreq(s, v)) )
         return false;
 
     vio->io_req.state = ioreq_needs_completion(&vio->io_req) ?
@@ -207,13 +207,13 @@ bool handle_hvm_io_completion(struct vcpu *v)
                                vio->io_req.dir);
 
     default:
-        return arch_hvm_io_completion(io_completion);
+        return arch_io_completion(io_completion);
     }
 
     return true;
 }
 
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
+static gfn_t alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -229,7 +229,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
     return INVALID_GFN;
 }
 
-static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
+static gfn_t alloc_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -244,11 +244,11 @@ static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
      * If we are out of 'normal' GFNs then we may still have a 'legacy'
      * GFN available.
      */
-    return hvm_alloc_legacy_ioreq_gfn(s);
+    return alloc_legacy_ioreq_gfn(s);
 }
 
-static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
-                                      gfn_t gfn)
+static bool free_legacy_ioreq_gfn(struct ioreq_server *s,
+                                  gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -265,21 +265,21 @@ static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
     return true;
 }
 
-static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
+static void free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i = gfn_x(gfn) - d->ioreq_gfn.base;
 
     ASSERT(!gfn_eq(gfn, INVALID_GFN));
 
-    if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
+    if ( !free_legacy_ioreq_gfn(s, gfn) )
     {
         ASSERT(i < sizeof(d->ioreq_gfn.mask) * 8);
         set_bit(i, &d->ioreq_gfn.mask);
     }
 }
 
-static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
+static void unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
@@ -289,11 +289,11 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
     destroy_ring_for_helper(&iorp->va, iorp->page);
     iorp->page = NULL;
 
-    hvm_free_ioreq_gfn(s, iorp->gfn);
+    free_ioreq_gfn(s, iorp->gfn);
     iorp->gfn = INVALID_GFN;
 }
 
-static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
+static int map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -303,7 +303,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     {
         /*
          * If a page has already been allocated (which will happen on
-         * demand if hvm_get_ioreq_server_frame() is called), then
+         * demand if get_ioreq_server_frame() is called), then
          * mapping a guest frame is not permitted.
          */
         if ( gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -315,7 +315,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( d->is_dying )
         return -EINVAL;
 
-    iorp->gfn = hvm_alloc_ioreq_gfn(s);
+    iorp->gfn = alloc_ioreq_gfn(s);
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return -ENOMEM;
@@ -324,12 +324,12 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
                                  &iorp->va);
 
     if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
+        unmap_ioreq_gfn(s, buf);
 
     return rc;
 }
 
-static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
+static int alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
@@ -338,7 +338,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
     {
         /*
          * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
+         * on demand if get_ioreq_server_info() is called), then
          * allocating a page is not permitted.
          */
         if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -377,7 +377,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
     return -ENOMEM;
 }
 
-static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
+static void free_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page = iorp->page;
@@ -416,7 +416,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
+static void remove_ioreq_gfn(struct ioreq_server *s, bool buf)
 
 {
     struct domain *d = s->target;
@@ -431,7 +431,7 @@ static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
     clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
+static int add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -450,8 +450,8 @@ static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
     return rc;
 }
 
-static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
-                                    struct ioreq_vcpu *sv)
+static void update_ioreq_evtchn(struct ioreq_server *s,
+                                struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));
 
@@ -466,8 +466,8 @@ static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
 
-static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
-                                     struct vcpu *v)
+static int ioreq_server_add_vcpu(struct ioreq_server *s,
+                                 struct vcpu *v)
 {
     struct ioreq_vcpu *sv;
     int rc;
@@ -502,7 +502,7 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
     list_add(&sv->list_entry, &s->ioreq_vcpu_list);
 
     if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
+        update_ioreq_evtchn(s, sv);
 
     spin_unlock(&s->lock);
     return 0;
@@ -518,8 +518,8 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
-                                         struct vcpu *v)
+static void ioreq_server_remove_vcpu(struct ioreq_server *s,
+                                     struct vcpu *v)
 {
     struct ioreq_vcpu *sv;
 
@@ -546,7 +546,7 @@ static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
+static void ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
     struct ioreq_vcpu *sv, *next;
 
@@ -572,49 +572,49 @@ static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_map_pages(struct ioreq_server *s)
+static int ioreq_server_map_pages(struct ioreq_server *s)
 {
     int rc;
 
-    rc = hvm_map_ioreq_gfn(s, false);
+    rc = map_ioreq_gfn(s, false);
 
     if ( !rc && HANDLE_BUFIOREQ(s) )
-        rc = hvm_map_ioreq_gfn(s, true);
+        rc = map_ioreq_gfn(s, true);
 
     if ( rc )
-        hvm_unmap_ioreq_gfn(s, false);
+        unmap_ioreq_gfn(s, false);
 
     return rc;
 }
 
-static void hvm_ioreq_server_unmap_pages(struct ioreq_server *s)
+static void ioreq_server_unmap_pages(struct ioreq_server *s)
 {
-    hvm_unmap_ioreq_gfn(s, true);
-    hvm_unmap_ioreq_gfn(s, false);
+    unmap_ioreq_gfn(s, true);
+    unmap_ioreq_gfn(s, false);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
+static int ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;
 
-    rc = hvm_alloc_ioreq_mfn(s, false);
+    rc = alloc_ioreq_mfn(s, false);
 
     if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
+        rc = alloc_ioreq_mfn(s, true);
 
     if ( rc )
-        hvm_free_ioreq_mfn(s, false);
+        free_ioreq_mfn(s, false);
 
     return rc;
 }
 
-static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
+static void ioreq_server_free_pages(struct ioreq_server *s)
 {
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
+    free_ioreq_mfn(s, true);
+    free_ioreq_mfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
+static void ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;
 
@@ -622,8 +622,8 @@ static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
-                                            ioservid_t id)
+static int ioreq_server_alloc_rangesets(struct ioreq_server *s,
+                                        ioservid_t id)
 {
     unsigned int i;
     int rc;
@@ -655,12 +655,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
     return 0;
 
  fail:
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct ioreq_server *s)
+static void ioreq_server_enable(struct ioreq_server *s)
 {
     struct ioreq_vcpu *sv;
 
@@ -669,29 +669,29 @@ static void hvm_ioreq_server_enable(struct ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    remove_ioreq_gfn(s, false);
+    remove_ioreq_gfn(s, true);
 
     s->enabled = true;
 
     list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+        update_ioreq_evtchn(s, sv);
 
  done:
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct ioreq_server *s)
+static void ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);
 
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    add_ioreq_gfn(s, true);
+    add_ioreq_gfn(s, false);
 
     s->enabled = false;
 
@@ -699,9 +699,9 @@ static void hvm_ioreq_server_disable(struct ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_init(struct ioreq_server *s,
-                                 struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+static int ioreq_server_init(struct ioreq_server *s,
+                             struct domain *d, int bufioreq_handling,
+                             ioservid_t id)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
@@ -719,7 +719,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
     s->ioreq.gfn = INVALID_GFN;
     s->bufioreq.gfn = INVALID_GFN;
 
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
+    rc = ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
 
@@ -727,7 +727,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
 
     for_each_vcpu ( d, v )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
+        rc = ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail_add;
     }
@@ -735,39 +735,39 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
     return 0;
 
  fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
+    ioreq_server_remove_all_vcpus(s);
+    ioreq_server_unmap_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     put_domain(s->emulator);
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct ioreq_server *s)
+static void ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
+    ioreq_server_remove_all_vcpus(s);
 
     /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
+     * NOTE: It is safe to call both ioreq_server_unmap_pages() and
+     *       ioreq_server_free_pages() in that order.
      *       This is because the former will do nothing if the pages
     *       are not mapped, leaving the page to be freed by the latter.
      *       However if the pages are mapped then the former will set
      *       the page_info pointer to NULL, meaning the latter will do
      *       nothing.
      */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
+    ioreq_server_unmap_pages(s);
+    ioreq_server_free_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     put_domain(s->emulator);
 }
 
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+int create_ioreq_server(struct domain *d, int bufioreq_handling,
+                        ioservid_t *id)
 {
     struct ioreq_server *s;
     unsigned int i;
@@ -795,11 +795,11 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
 
     /*
      * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
+     * ioreq_server_init() since the target domain is paused.
      */
     set_ioreq_server(d, i, s);
 
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    rc = ioreq_server_init(s, d, bufioreq_handling, i);
     if ( rc )
     {
         set_ioreq_server(d, i, NULL);
@@ -822,7 +822,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     return rc;
 }
 
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
+int destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
     struct ioreq_server *s;
     int rc;
@@ -841,15 +841,15 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     domain_pause(d);
 
-    arch_hvm_destroy_ioreq_server(s);
+    arch_destroy_ioreq_server(s);
 
-    hvm_ioreq_server_disable(s);
+    ioreq_server_disable(s);
 
     /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
+     * It is safe to call ioreq_server_deinit() prior to
      * set_ioreq_server() since the target domain is paused.
      */
-    hvm_ioreq_server_deinit(s);
+    ioreq_server_deinit(s);
     set_ioreq_server(d, id, NULL);
 
     domain_unpause(d);
@@ -864,10 +864,10 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     return rc;
 }
 
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port)
+int get_ioreq_server_info(struct domain *d, ioservid_t id,
+                          unsigned long *ioreq_gfn,
+                          unsigned long *bufioreq_gfn,
+                          evtchn_port_t *bufioreq_port)
 {
     struct ioreq_server *s;
     int rc;
@@ -886,7 +886,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 
     if ( ioreq_gfn || bufioreq_gfn )
     {
-        rc = hvm_ioreq_server_map_pages(s);
+        rc = ioreq_server_map_pages(s);
         if ( rc )
             goto out;
     }
@@ -911,8 +911,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn)
+int get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                           unsigned long idx, mfn_t *mfn)
 {
     struct ioreq_server *s;
     int rc;
@@ -931,7 +931,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;
 
-    rc = hvm_ioreq_server_alloc_pages(s);
+    rc = ioreq_server_alloc_pages(s);
     if ( rc )
         goto out;
 
@@ -962,9 +962,9 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end)
+int map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                 uint32_t type, uint64_t start,
+                                 uint64_t end)
 {
     struct ioreq_server *s;
     struct rangeset *r;
@@ -1014,9 +1014,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end)
+int unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint64_t start,
+                                     uint64_t end)
 {
     struct ioreq_server *s;
     struct rangeset *r;
@@ -1066,8 +1066,8 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
+int set_ioreq_server_state(struct domain *d, ioservid_t id,
+                           bool enabled)
 {
     struct ioreq_server *s;
     int rc;
@@ -1087,9 +1087,9 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     domain_pause(d);
 
     if ( enabled )
-        hvm_ioreq_server_enable(s);
+        ioreq_server_enable(s);
     else
-        hvm_ioreq_server_disable(s);
+        ioreq_server_disable(s);
 
     domain_unpause(d);
 
@@ -1100,7 +1100,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
+int all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -1110,7 +1110,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
+        rc = ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
     }
@@ -1127,7 +1127,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         if ( !s )
             continue;
 
-        hvm_ioreq_server_remove_vcpu(s, v);
+        ioreq_server_remove_vcpu(s, v);
     }
 
     spin_unlock_recursive(&d->ioreq_server.lock);
@@ -1135,7 +1135,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     return rc;
 }
 
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
+void all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -1143,17 +1143,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
-        hvm_ioreq_server_remove_vcpu(s, v);
+        ioreq_server_remove_vcpu(s, v);
 
     spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
-void hvm_destroy_all_ioreq_servers(struct domain *d)
+void destroy_all_ioreq_servers(struct domain *d)
 {
     struct ioreq_server *s;
     unsigned int id;
 
-    if ( !arch_hvm_ioreq_destroy(d) )
+    if ( !arch_ioreq_destroy(d) )
         return;
 
     spin_lock_recursive(&d->ioreq_server.lock);
@@ -1162,13 +1162,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        hvm_ioreq_server_disable(s);
+        ioreq_server_disable(s);
 
         /*
-         * It is safe to call hvm_ioreq_server_deinit() prior to
+         * It is safe to call ioreq_server_deinit() prior to
          * set_ioreq_server() since the target domain is being destroyed.
*/ - hvm_ioreq_server_deinit(s); + ioreq_server_deinit(s); set_ioreq_server(d, id, NULL); =20 xfree(s); @@ -1177,15 +1177,15 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) spin_unlock_recursive(&d->ioreq_server.lock); } =20 -struct ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) +struct ioreq_server *select_ioreq_server(struct domain *d, + ioreq_t *p) { struct ioreq_server *s; uint8_t type; uint64_t addr; unsigned int id; =20 - if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) ) + if ( ioreq_server_get_type_addr(d, p, &type, &addr) ) return NULL; =20 FOR_EACH_IOREQ_SERVER(d, id, s) @@ -1233,7 +1233,7 @@ struct ioreq_server *hvm_select_ioreq_server(struct d= omain *d, return NULL; } =20 -static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) +static int send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) { struct domain *d =3D current->domain; struct ioreq_page *iorp; @@ -1326,8 +1326,8 @@ static int hvm_send_buffered_ioreq(struct ioreq_serve= r *s, ioreq_t *p) return IOREQ_STATUS_HANDLED; } =20 -int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, - bool buffered) +int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, + bool buffered) { struct vcpu *curr =3D current; struct domain *d =3D curr->domain; @@ -1336,7 +1336,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *p= roto_p, ASSERT(s); =20 if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); + return send_buffered_ioreq(s, proto_p); =20 if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) return IOREQ_STATUS_RETRY; @@ -1386,7 +1386,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *p= roto_p, return IOREQ_STATUS_UNHANDLED; } =20 -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +unsigned int broadcast_ioreq(ioreq_t *p, bool buffered) { struct domain *d =3D current->domain; struct ioreq_server *s; @@ -1397,18 +1397,18 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool b= uffered) if ( !s->enabled ) continue; =20 
- if ( hvm_send_ioreq(s, p, buffered) =3D=3D IOREQ_STATUS_UNHANDLED ) + if ( send_ioreq(s, p, buffered) =3D=3D IOREQ_STATUS_UNHANDLED ) failed++; } =20 return failed; } =20 -void hvm_ioreq_init(struct domain *d) +void ioreq_init(struct domain *d) { spin_lock_init(&d->ioreq_server.lock); =20 - arch_hvm_ioreq_init(d); + arch_ioreq_init(d); } =20 /* diff --git a/xen/common/memory.c b/xen/common/memory.c index 83d800f..cf53ca3 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -1070,7 +1070,7 @@ static int acquire_ioreq_server(struct domain *d, { mfn_t mfn; =20 - rc =3D hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); + rc =3D get_ioreq_server_frame(d, id, frame + i, &mfn); if ( rc ) return rc; =20 diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/iore= q.h index 5ed977e..1340441 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -26,7 +26,7 @@ =20 #include =20 -static inline bool arch_hvm_io_completion(enum io_completion io_completion) +static inline bool arch_io_completion(enum io_completion io_completion) { switch ( io_completion ) { @@ -50,7 +50,7 @@ static inline bool arch_hvm_io_completion(enum io_complet= ion io_completion) } =20 /* Called when target domain is paused */ -static inline void arch_hvm_destroy_ioreq_server(struct ioreq_server *s) +static inline void arch_destroy_ioreq_server(struct ioreq_server *s) { p2m_set_ioreq_server(s->target, 0, s); } @@ -105,10 +105,10 @@ static inline int hvm_map_mem_type_to_ioreq_server(st= ruct domain *d, return rc; } =20 -static inline int hvm_ioreq_server_get_type_addr(const struct domain *d, - const ioreq_t *p, - uint8_t *type, - uint64_t *addr) +static inline int ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr) { uint32_t cf8 =3D d->arch.hvm.pci_cf8; =20 @@ -164,12 +164,12 @@ static inline int hvm_access_cf8( return X86EMUL_UNHANDLEABLE; } =20 -static inline void arch_hvm_ioreq_init(struct 
domain *d) +static inline void arch_ioreq_init(struct domain *d) { register_portio_handler(d, 0xcf8, 4, hvm_access_cf8); } =20 -static inline bool arch_hvm_ioreq_destroy(struct domain *d) +static inline bool arch_ioreq_destroy(struct domain *d) { if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) ) return false; diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 8451866..7b03ab5 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -81,39 +81,39 @@ static inline bool ioreq_needs_completion(const ioreq_t= *ioreq) (ioreq->type !=3D IOREQ_TYPE_PIO || ioreq->dir !=3D IOREQ_WRITE= ); } =20 -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); +bool io_pending(struct vcpu *v); +bool handle_io_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page); =20 -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, +int create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id); +int destroy_ioreq_server(struct domain *d, ioservid_t id); +int get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port); +int get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn); +int map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, uint32_t type, uint64_t start, uint64_t end); -int 
hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); -void hvm_destroy_all_ioreq_servers(struct domain *d); - -struct ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, - bool buffered); -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); - -void hvm_ioreq_init(struct domain *d); +int set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled); + +int all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); +void all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); +void destroy_all_ioreq_servers(struct domain *d); + +struct ioreq_server *select_ioreq_server(struct domain *d, + ioreq_t *p); +int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, + bool buffered); +unsigned int broadcast_ioreq(ioreq_t *p, bool buffered); + +void ioreq_init(struct domain *d); =20 #endif /* __XEN_IOREQ_H__ */ =20 --=20 2.7.4