From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
 Roger Pau Monné, Wei Liu, George Dunlap, Julien Grall,
 Stefano Stabellini, Julien Grall
Subject: [PATCH V4 07/24] xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
Date: Tue, 12 Jan 2021 23:52:15 +0200
Message-Id: <1610488352-18494-8-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Oleksandr Tyshchenko

IOREQ is now a common feature and these structs will be used on Arm
as is. Move them to xen/ioreq.h and remove the "hvm" prefixes.

Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "hvm" prefix

Changes V2 -> V3:
   - update the patch, since the "legacy interface" is x86-specific

Changes V3 -> V4:
   - add Jan's A-b
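[ For readers skimming the series: below is a minimal sketch, not part
  of this patch, of how common code can use the renamed, arch-neutral
  types out of xen/ioreq.h. The forward_to_emulator() helper is
  hypothetical and for illustration only; hvm_select_ioreq_server() and
  hvm_send_ioreq() are real declarations touched by this patch (their
  "hvm" prefixes are only dropped later in the series), and the
  IOREQ_STATUS_* values are assumed from earlier patches in the series. ]

  #include <xen/ioreq.h>

  /*
   * Hypothetical helper: pick the ioreq server whose registered range
   * covers ioreq 'p' and forward the request to its emulator. Only the
   * struct tag changes with this patch: the local variable is now a
   * plain "struct ioreq_server".
   */
  static int forward_to_emulator(struct domain *d, ioreq_t *p)
  {
      struct ioreq_server *s = hvm_select_ioreq_server(d, p);

      if ( !s )
          return IOREQ_STATUS_UNHANDLED; /* no emulator claims this range */

      /* false: use the synchronous per-vCPU slot, not the buffered ring */
      return hvm_send_ioreq(s, p, false);
  }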
---
 xen/arch/x86/hvm/emulate.c       |   2 +-
 xen/arch/x86/hvm/ioreq.c         |  38 +++++++-------
 xen/arch/x86/hvm/stdvga.c        |   2 +-
 xen/arch/x86/mm/p2m.c            |   8 +--
 xen/common/ioreq.c               | 108 +++++++++++++++++++--------------
 xen/include/asm-x86/hvm/domain.h |  36 +------------
 xen/include/asm-x86/p2m.h        |   8 +--
 xen/include/xen/ioreq.h          |  54 ++++++++++++++++----
 8 files changed, 128 insertions(+), 128 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index c3487b5..4d62199 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -287,7 +287,7 @@ static int hvmemul_do_io(
      * However, there's no cheap approach to avoid above situations in xen,
      * so the device model side needs to check the incoming ioreq event.
      */
-    struct hvm_ioreq_server *s = NULL;
+    struct ioreq_server *s = NULL;
     p2m_type_t p2mt = p2m_invalid;
 
     if ( is_mmio )
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 177b964..8393922 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -63,7 +63,7 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
     return true;
 }
 
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -79,7 +79,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
     return INVALID_GFN;
 }
 
-static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -97,7 +97,7 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
     return hvm_alloc_legacy_ioreq_gfn(s);
 }
 
-static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
+static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
                                       gfn_t gfn)
 {
     struct domain *d = s->target;
@@ -115,7 +115,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
     return true;
 }
 
-static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
+static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
@@ -129,9 +129,9 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
     }
 }
 
-static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -143,10 +143,10 @@ static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     iorp->gfn = INVALID_GFN;
 }
 
-static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
     if ( iorp->page )
@@ -179,11 +179,11 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
 
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -194,10 +194,10 @@ static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -213,7 +213,7 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+int arch_ioreq_server_map_pages(struct ioreq_server *s)
 {
     int rc;
 
@@ -228,40 +228,40 @@ int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
 {
     hvm_unmap_ioreq_gfn(s, true);
     hvm_unmap_ioreq_gfn(s, false);
 }
 
-void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
+void arch_ioreq_server_enable(struct ioreq_server *s)
 {
     hvm_remove_ioreq_gfn(s, false);
     hvm_remove_ioreq_gfn(s, true);
 }
 
-void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
+void arch_ioreq_server_disable(struct ioreq_server *s)
 {
     hvm_add_ioreq_gfn(s, true);
     hvm_add_ioreq_gfn(s, false);
 }
 
 /* Called when target domain is paused */
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
+void arch_ioreq_server_destroy(struct ioreq_server *s)
 {
     p2m_set_ioreq_server(s->target, 0, s);
 }
 
 /* Called with ioreq_server lock held */
 int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
+                                   struct ioreq_server *s,
                                    uint32_t flags)
 {
     return p2m_set_ioreq_server(d, flags, s);
 }
 
 void arch_ioreq_server_map_mem_type_completed(struct domain *d,
-                                              struct hvm_ioreq_server *s,
+                                              struct ioreq_server *s,
                                               uint32_t flags)
 {
     if ( flags == 0 )
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index 17dee74..ee13449 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
         .dir = IOREQ_WRITE,
         .data = data,
     };
-    struct hvm_ioreq_server *srv;
+    struct ioreq_server *srv;
 
     if ( !stdvga_cache_is_enabled(s) || !s->stdvga )
         goto done;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ad4bb94..71fda06 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -372,7 +372,7 @@ void p2m_memory_type_changed(struct domain *d)
 
 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
-                         struct hvm_ioreq_server *s)
+                         struct ioreq_server *s)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -420,11 +420,11 @@ int p2m_set_ioreq_server(struct domain *d,
     return rc;
 }
 
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags)
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
 
     spin_lock(&p2m->ioreq.lock);
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index a196e14..3f631ec 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,7 +35,7 @@
 #include
 
 static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
+                             struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
     ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
@@ -46,8 +46,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]
 
-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
+static struct ioreq_server *get_ioreq_server(const struct domain *d,
+                                             unsigned int id)
 {
     if ( id >= MAX_NR_IOREQ_SERVERS )
         return NULL;
@@ -69,7 +69,7 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
             continue; \
         else
 
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
+static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
 {
     shared_iopage_t *p = s->ioreq.va;
 
@@ -79,16 +79,16 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
     return &p->vcpu_ioreq[v->vcpu_id];
 }
 
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
+static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
+                                           struct ioreq_server **srvp)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        struct hvm_ioreq_vcpu *sv;
+        struct ioreq_vcpu *sv;
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -111,7 +111,7 @@ bool hvm_io_pending(struct vcpu *v)
     return get_pending_vcpu(v, NULL);
 }
 
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -172,8 +172,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_server *s;
+    struct ioreq_vcpu *sv;
     enum hvm_io_completion io_completion;
 
     if ( has_vpci(d) && vpci_process_pending(v) )
@@ -214,9 +214,9 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
 
     if ( iorp->page )
@@ -262,9 +262,9 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
     return -ENOMEM;
 }
 
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page = iorp->page;
 
     if ( !page )
@@ -281,7 +281,7 @@ static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
-    const struct hvm_ioreq_server *s;
+    const struct ioreq_server *s;
     unsigned int id;
     bool found = false;
 
@@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
+static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
+                                    struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));
 
@@ -314,13 +314,13 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }
 
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
                                      struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
     int rc;
 
-    sv = xzalloc(struct hvm_ioreq_vcpu);
+    sv = xzalloc(struct ioreq_vcpu);
 
     rc = -ENOMEM;
     if ( !sv )
@@ -366,10 +366,10 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
+static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
                                          struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     spin_lock(&s->lock);
 
@@ -394,9 +394,9 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv, *next;
+    struct ioreq_vcpu *sv, *next;
 
     spin_lock(&s->lock);
 
@@ -420,7 +420,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;
 
@@ -435,13 +435,13 @@ static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
 {
     hvm_free_ioreq_mfn(s, true);
     hvm_free_ioreq_mfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;
 
@@ -449,7 +449,7 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
                                             ioservid_t id)
 {
     unsigned int i;
@@ -487,9 +487,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_enable(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     spin_lock(&s->lock);
 
@@ -509,7 +509,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);
 
@@ -524,7 +524,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_init(struct ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
                                  ioservid_t id)
 {
@@ -569,7 +569,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
@@ -594,14 +594,14 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
                             ioservid_t *id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int i;
     int rc;
 
     if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         return -EINVAL;
 
-    s = xzalloc(struct hvm_ioreq_server);
+    s = xzalloc(struct ioreq_server);
     if ( !s )
         return -ENOMEM;
 
@@ -649,7 +649,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
 
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -694,7 +694,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *bufioreq_gfn,
                               evtchn_port_t *bufioreq_port)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -739,7 +739,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     ASSERT(is_hvm_domain(d));
@@ -791,7 +791,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;
 
@@ -843,7 +843,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
                                          uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;
 
@@ -902,7 +902,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     if ( type != HVMMEM_ioreq_server )
@@ -937,7 +937,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool enabled)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -970,7 +970,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
 
 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
     int rc;
 
@@ -1005,7 +1005,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1018,7 +1018,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     if ( !arch_ioreq_server_destroy_all(d) )
@@ -1045,10 +1045,10 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     uint8_t type;
     uint64_t addr;
     unsigned int id;
@@ -1101,10 +1101,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     return NULL;
 }
 
-static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
+static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_page *iorp;
+    struct ioreq_page *iorp;
     buffered_iopage_t *pg;
     buf_ioreq_t bp = { .data = p->data,
                        .addr = p->addr,
@@ -1194,12 +1194,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     return IOREQ_STATUS_HANDLED;
 }
 
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     ASSERT(s);
 
@@ -1257,7 +1257,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id, failed = 0;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 9d247ba..1c4ca47 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -30,40 +30,6 @@
 
 #include
 
-struct hvm_ioreq_page {
-    gfn_t gfn;
-    struct page_info *page;
-    void *va;
-};
-
-struct hvm_ioreq_vcpu {
-    struct list_head list_entry;
-    struct vcpu *vcpu;
-    evtchn_port_t ioreq_evtchn;
-    bool pending;
-};
-
-#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES  256
-
-struct hvm_ioreq_server {
-    struct domain *target, *emulator;
-
-    /* Lock to serialize toolstack modifications */
-    spinlock_t lock;
-
-    struct hvm_ioreq_page ioreq;
-    struct list_head ioreq_vcpu_list;
-    struct hvm_ioreq_page bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t bufioreq_lock;
-    evtchn_port_t bufioreq_evtchn;
-    struct rangeset *range[NR_IO_RANGE_TYPES];
-    bool enabled;
-    uint8_t bufioreq_handling;
-};
-
 #ifdef CONFIG_MEM_SHARING
 struct mem_sharing_domain
 {
@@ -110,7 +76,7 @@ struct hvm_domain {
     /* Lock protects all other values in the sub-struct and the default */
     struct {
         spinlock_t lock;
-        struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 
     /* Cached CF8 for guest PCI config cycles */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 6447696..7df2878 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -363,7 +363,7 @@ struct p2m_domain {
          * ioreq server who's responsible for the emulation of
          * gfns with specific p2m type(for now, p2m_ioreq_server).
          */
-        struct hvm_ioreq_server *server;
+        struct ioreq_server *server;
         /*
          * flags specifies whether read, write or both operations
          * are to be emulated by an ioreq server.
@@ -937,9 +937,9 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 }
 
 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
-                         struct hvm_ioreq_server *s);
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags);
+                         struct ioreq_server *s);
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags);
 
 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
                                    p2m_type_t ot, mfn_t nfn, mfn_t ofn,
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index aeea67e..bc79c37 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,40 @@
 
 #include
 
+struct ioreq_page {
+    gfn_t gfn;
+    struct page_info *page;
+    void *va;
+};
+
+struct ioreq_vcpu {
+    struct list_head list_entry;
+    struct vcpu *vcpu;
+    evtchn_port_t ioreq_evtchn;
+    bool pending;
+};
+
+#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
+#define MAX_NR_IO_RANGES  256
+
+struct ioreq_server {
+    struct domain *target, *emulator;
+
+    /* Lock to serialize toolstack modifications */
+    spinlock_t lock;
+
+    struct ioreq_page ioreq;
+    struct list_head ioreq_vcpu_list;
+    struct ioreq_page bufioreq;
+
+    /* Lock to serialize access to buffered ioreq ring */
+    spinlock_t bufioreq_lock;
+    evtchn_port_t bufioreq_evtchn;
+    struct rangeset *range[NR_IO_RANGE_TYPES];
+    bool enabled;
+    uint8_t bufioreq_handling;
+};
+
 static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
 {
     return unlikely(p->df) ?
@@ -75,9 +109,9 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p);
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered);
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
@@ -85,16 +119,16 @@ void hvm_ioreq_init(struct domain *d);
 
 bool arch_ioreq_complete_mmio(void);
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
-int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
-void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
-void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
-void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_pages(struct ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
+void arch_ioreq_server_enable(struct ioreq_server *s);
+void arch_ioreq_server_disable(struct ioreq_server *s);
+void arch_ioreq_server_destroy(struct ioreq_server *s);
 int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
+                                   struct ioreq_server *s,
                                    uint32_t flags);
 void arch_ioreq_server_map_mem_type_completed(struct domain *d,
-                                              struct hvm_ioreq_server *s,
+                                              struct ioreq_server *s,
                                               uint32_t flags);
 bool arch_ioreq_server_destroy_all(struct domain *d);
 bool arch_ioreq_server_get_type_addr(const struct domain *d,
-- 
2.7.4