From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:22 +0200
Message-ID: <20190903161428.7159-6-roger.pau@citrix.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 05/11] ioreq: add internal ioreq initialization support
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Add support for internal ioreq servers to the initialization and
deinitialization routines, prevent some functions from being executed
against internal ioreq servers, and add guards so that only internal
callers can modify internal ioreq servers. External callers (i.e. from
hypercalls) are only allowed to deal with external ioreq servers.
Signed-off-by: Roger Pau Monné
---
Changes since v1:
 - Do not pass an 'internal' parameter to most functions, and instead
   use the id to key whether an ioreq server is internal or external.
 - Prevent enabling an internal server without a handler.
---
 xen/arch/x86/hvm/dm.c            |  17 ++-
 xen/arch/x86/hvm/ioreq.c         | 173 +++++++++++++++++++------------
 xen/include/asm-x86/hvm/domain.h |   5 +-
 xen/include/asm-x86/hvm/ioreq.h  |   8 +-
 4 files changed, 135 insertions(+), 68 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index c2fca9f729..6a3682e58c 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -417,7 +417,7 @@ static int dm_op(const struct dmop_args *op_args)
            break;
 
        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+                                     &data->id, false);
        break;
    }
 
@@ -450,6 +450,9 @@ static int dm_op(const struct dmop_args *op_args)
        rc = -EINVAL;
        if ( data->pad )
            break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
                                              data->start, data->end);
@@ -464,6 +467,9 @@ static int dm_op(const struct dmop_args *op_args)
        rc = -EINVAL;
        if ( data->pad )
            break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
                                                  data->start, data->end);
@@ -481,6 +487,9 @@ static int dm_op(const struct dmop_args *op_args)
        rc = -EOPNOTSUPP;
        if ( !hap_enabled(d) )
            break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
        if ( first_gfn == 0 )
            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
@@ -528,6 +537,9 @@ static int dm_op(const struct dmop_args *op_args)
        rc = -EINVAL;
        if ( data->pad )
            break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
        break;
@@ -541,6 +553,9 @@ static int dm_op(const struct dmop_args *op_args)
        rc = -EINVAL;
        if ( data->pad )
            break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
        rc = hvm_destroy_ioreq_server(d, data->id);
        break;
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 95492bc111..dbc5e6b4c5 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -59,10 +59,11 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
 /*
  * Iterate over all possible ioreq servers.
  *
- * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
- *       This is a semantic that previously existed when ioreq servers
- *       were held in a linked list.
+ * NOTE: The iteration is backwards such that internal and more recently
+ *       created external ioreq servers are favoured in
+ *       hvm_select_ioreq_server().
+ *       This is a semantic that previously existed for external servers when
+ *       ioreq servers were held in a linked list.
  */
 #define FOR_EACH_IOREQ_SERVER(d, id, s) \
    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
@@ -70,6 +71,12 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
            continue; \
        else
 
+#define FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s) \
+    for ( (id) = MAX_NR_EXTERNAL_IOREQ_SERVERS; (id) != 0; ) \
+        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
+            continue; \
+        else
+
 static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
 {
    shared_iopage_t *p = s->ioreq.va;
@@ -86,7 +93,7 @@ bool hvm_io_pending(struct vcpu *v)
    struct hvm_ioreq_server *s;
    unsigned int id;
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
    {
        struct hvm_ioreq_vcpu *sv;
 
@@ -190,7 +197,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
        return false;
    }
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
    {
        struct hvm_ioreq_vcpu *sv;
 
@@ -430,7 +437,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 
    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
    {
        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
        {
@@ -688,7 +695,7 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
    return rc;
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s, bool internal)
 {
    struct hvm_ioreq_vcpu *sv;
 
@@ -697,29 +704,40 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
    if ( s->enabled )
        goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    if ( !internal )
+    {
+        hvm_remove_ioreq_gfn(s, false);
+        hvm_remove_ioreq_gfn(s, true);
 
-    s->enabled = true;
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+            hvm_update_ioreq_evtchn(s, sv);
+    }
+    else if ( !s->handler )
+    {
+        ASSERT_UNREACHABLE();
+        goto done;
+    }
 
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+    s->enabled = true;
 
  done:
    spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s, bool internal)
 {
    spin_lock(&s->lock);
 
    if ( !s->enabled )
        goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    if ( !internal )
+    {
+        hvm_add_ioreq_gfn(s, true);
+        hvm_add_ioreq_gfn(s, false);
+    }
 
    s->enabled = false;
 
@@ -736,33 +754,39 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
    int rc;
 
    s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
    spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
 
    rc = hvm_ioreq_server_alloc_rangesets(s, id);
    if ( rc )
        return rc;
 
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
+    if ( !hvm_ioreq_is_internal(id) )
    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
+        get_knownalive_domain(currd);
+
+        s->emulator = currd;
+        INIT_LIST_HEAD(&s->ioreq_vcpu_list);
+        spin_lock_init(&s->bufioreq_lock);
+
+        s->ioreq.gfn = INVALID_GFN;
+        s->bufioreq.gfn = INVALID_GFN;
+
+        s->bufioreq_handling = bufioreq_handling;
+
+        for_each_vcpu ( d, v )
+        {
+            rc = hvm_ioreq_server_add_vcpu(s, v);
+            if ( rc )
+                goto fail_add;
+        }
    }
+    else
+        s->handler = NULL;
 
    return 0;
 
  fail_add:
+    ASSERT(!hvm_ioreq_is_internal(id));
    hvm_ioreq_server_remove_all_vcpus(s);
    hvm_ioreq_server_unmap_pages(s);
 
@@ -772,30 +796,34 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
    return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s, bool internal)
 {
    ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
 
    hvm_ioreq_server_free_rangesets(s);
 
-    put_domain(s->emulator);
+    if ( !internal )
+    {
+        hvm_ioreq_server_remove_all_vcpus(s);
+
+        /*
+         * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+         *       hvm_ioreq_server_free_pages() in that order.
+         *       This is because the former will do nothing if the pages
+         *       are not mapped, leaving the page to be freed by the latter.
+         *       However if the pages are mapped then the former will set
+         *       the page_info pointer to NULL, meaning the latter will do
+         *       nothing.
+         */
+        hvm_ioreq_server_unmap_pages(s);
+        hvm_ioreq_server_free_pages(s);
+
+        put_domain(s->emulator);
+    }
 }
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+                            ioservid_t *id, bool internal)
 {
    struct hvm_ioreq_server *s;
    unsigned int i;
@@ -811,7 +839,9 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
    domain_pause(d);
    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
+    for ( i = (internal ? MAX_NR_EXTERNAL_IOREQ_SERVERS : 0);
+          i < (internal ? MAX_NR_IOREQ_SERVERS : MAX_NR_EXTERNAL_IOREQ_SERVERS);
+          i++ )
    {
        if ( !GET_IOREQ_SERVER(d, i) )
            break;
@@ -821,6 +851,9 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
    if ( i >= MAX_NR_IOREQ_SERVERS )
        goto fail;
 
+    ASSERT((internal &&
+            i >= MAX_NR_EXTERNAL_IOREQ_SERVERS && i < MAX_NR_IOREQ_SERVERS) ||
+           (!internal && i < MAX_NR_EXTERNAL_IOREQ_SERVERS));
    /*
     * It is safe to call set_ioreq_server() prior to
     * hvm_ioreq_server_init() since the target domain is paused.
@@ -864,20 +897,21 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
        goto out;
 
    rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /* NB: internal servers cannot be destroyed. */
+    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
        goto out;
 
    domain_pause(d);
 
    p2m_set_ioreq_server(d, 0, id);
 
-    hvm_ioreq_server_disable(s);
+    hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
    /*
     * It is safe to call hvm_ioreq_server_deinit() prior to
     * set_ioreq_server() since the target domain is paused.
     */
-    hvm_ioreq_server_deinit(s);
+    hvm_ioreq_server_deinit(s, hvm_ioreq_is_internal(id));
    set_ioreq_server(d, id, NULL);
 
    domain_unpause(d);
@@ -909,7 +943,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
        goto out;
 
    rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /* NB: don't allow fetching information from internal ioreq servers. */
+    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
        goto out;
 
    if ( ioreq_gfn || bufioreq_gfn )
@@ -956,7 +991,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
        goto out;
 
    rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
        goto out;
 
    rc = hvm_ioreq_server_alloc_pages(s);
@@ -1010,7 +1045,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
        goto out;
 
    rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
        goto out;
 
    switch ( type )
@@ -1068,7 +1103,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
        goto out;
 
    rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
        goto out;
 
    switch ( type )
@@ -1128,6 +1163,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
    if ( !s )
        goto out;
 
+    /*
+     * NB: do not support mapping internal ioreq servers to memory types, as
+     * the current internal ioreq servers don't need this feature and it's not
+     * been tested.
+     */
+    rc = -EINVAL;
+    if ( hvm_ioreq_is_internal(id) )
+        goto out;
    rc = -EPERM;
    if ( s->emulator != current->domain )
        goto out;
@@ -1163,15 +1206,15 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
        goto out;
 
    rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
        goto out;
 
    domain_pause(d);
 
    if ( enabled )
-        hvm_ioreq_server_enable(s);
+        hvm_ioreq_server_enable(s, hvm_ioreq_is_internal(id));
    else
-        hvm_ioreq_server_disable(s);
+        hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
    domain_unpause(d);
 
@@ -1190,7 +1233,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
    {
        rc = hvm_ioreq_server_add_vcpu(s, v);
        if ( rc )
@@ -1202,7 +1245,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
    return 0;
 
  fail:
-    while ( id++ != MAX_NR_IOREQ_SERVERS )
+    while ( id++ != MAX_NR_EXTERNAL_IOREQ_SERVERS )
    {
        s = GET_IOREQ_SERVER(d, id);
 
@@ -1224,7 +1267,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 
    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
        hvm_ioreq_server_remove_vcpu(s, v);
 
    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1241,13 +1284,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 
    FOR_EACH_IOREQ_SERVER(d, id, s)
    {
-        hvm_ioreq_server_disable(s);
+        hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
        /*
         * It is safe to call hvm_ioreq_server_deinit() prior to
         * set_ioreq_server() since the target domain is being destroyed.
         */
-        hvm_ioreq_server_deinit(s);
+        hvm_ioreq_server_deinit(s, hvm_ioreq_is_internal(id));
        set_ioreq_server(d, id, NULL);
 
        xfree(s);
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 9fbe83f45a..9f92838b6e 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -97,7 +97,10 @@ struct hvm_pi_ops {
    void (*vcpu_block)(struct vcpu *);
 };
 
-#define MAX_NR_IOREQ_SERVERS 8
+#define MAX_NR_EXTERNAL_IOREQ_SERVERS 8
+#define MAX_NR_INTERNAL_IOREQ_SERVERS 1
+#define MAX_NR_IOREQ_SERVERS \
+    (MAX_NR_EXTERNAL_IOREQ_SERVERS + MAX_NR_INTERNAL_IOREQ_SERVERS)
 
 struct hvm_domain {
    /* Guest page range used for non-default ioreq servers */
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 65491c48d2..c3917aa74d 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -24,7 +24,7 @@ bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
+                            ioservid_t *id, bool internal);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                              unsigned long *ioreq_gfn,
@@ -54,6 +54,12 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+static inline bool hvm_ioreq_is_internal(unsigned int id)
+{
+    ASSERT(id < MAX_NR_IOREQ_SERVERS);
+    return id >= MAX_NR_EXTERNAL_IOREQ_SERVERS;
+}
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel