From: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 21 Aug 2019 16:58:57 +0200
Message-ID: <20190821145903.45934-2-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 1/7] ioreq: add fields to allow internal ioreq servers
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are plain function handlers implemented inside
the hypervisor. Note that most of the fields used by current (external)
ioreq servers are not needed for internal ones, and hence they have been
placed inside a struct and packed in a union together with the only
internal-specific field: a function pointer to a handler.

This is required in order to have PCI config accesses forwarded to
external ioreq servers or to internal ones (i.e. QEMU emulated devices
vs vPCI passthrough), and is the first step towards allowing
unprivileged domains to use vPCI.

Signed-off-by: Roger Pau Monné
---
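A minimal usage sketch for reviewers (not part of this patch; the helper
is made up): the new internal field acts as the union discriminant, so
code must check it before touching either half of the union:

static void teardown_sketch(struct hvm_ioreq_server *s)
{
    if ( s->internal )
        s->handler = NULL;          /* internal: only the handler is valid */
    else
        put_domain(s->emulator);    /* external: emulator et al are valid */
}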
 xen/include/asm-x86/hvm/domain.h | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 6c7c4f5aa6..f0be303517 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -52,21 +52,29 @@ struct hvm_ioreq_vcpu {
 #define MAX_NR_IO_RANGES 256

 struct hvm_ioreq_server {
-    struct domain *target, *emulator;
-
+    struct domain *target;
     /* Lock to serialize toolstack modifications */
     spinlock_t lock;
-
-    struct hvm_ioreq_page ioreq;
-    struct list_head ioreq_vcpu_list;
-    struct hvm_ioreq_page bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t bufioreq_lock;
-    evtchn_port_t bufioreq_evtchn;
     struct rangeset *range[NR_IO_RANGE_TYPES];
     bool enabled;
-    uint8_t bufioreq_handling;
+    bool internal;
+
+    union {
+        struct {
+            struct domain *emulator;
+            struct hvm_ioreq_page ioreq;
+            struct list_head ioreq_vcpu_list;
+            struct hvm_ioreq_page bufioreq;
+
+            /* Lock to serialize access to buffered ioreq ring */
+            spinlock_t bufioreq_lock;
+            evtchn_port_t bufioreq_evtchn;
+            uint8_t bufioreq_handling;
+        };
+        struct {
+            int (*handler)(struct vcpu *v, ioreq_t *);
+        };
+    };
 };

 /*
-- 
2.22.0

From: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 21 Aug 2019 16:58:58 +0200
Message-ID: <20190821145903.45934-3-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Add support for internal ioreq servers to the initialization and
deinitialization routines, prevent some functions from being executed
against internal ioreq servers, and add guards so that only internal
callers can modify internal ioreq servers. External callers (i.e. from
hypercalls) are only allowed to deal with external ioreq servers.

Signed-off-by: Roger Pau Monné
---
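For reference, a sketch of the intended calling convention (not part of
this patch): external paths such as dm_op() always pass internal = false,
while a hypothetical caller inside the hypervisor would do:

ioservid_t id;
int rc;

rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
if ( !rc )
    /* Internal servers are also enabled with internal = true. */
    rc = hvm_set_ioreq_server_state(d, id, true, true);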
 xen/arch/x86/hvm/dm.c           |   9 +-
 xen/arch/x86/hvm/ioreq.c        | 150 +++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/ioreq.h |   8 +-
 3 files changed, 108 insertions(+), 59 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d6d0e8be89..5ca8b66d67 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -417,7 +417,7 @@ static int dm_op(const struct dmop_args *op_args)
             break;

         rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+                                     &data->id, false);
         break;
     }

@@ -452,7 +452,7 @@ static int dm_op(const struct dmop_args *op_args)
             break;

         rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
+                                              data->start, data->end, false);
         break;
     }

@@ -466,7 +466,8 @@ static int dm_op(const struct dmop_args *op_args)
             break;

         rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
+                                                  data->start, data->end,
+                                                  false);
         break;
     }

@@ -529,7 +530,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;

-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled, false);
         break;
     }

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a79cabb680..23ef9b0c02 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -89,6 +89,9 @@ bool hvm_io_pending(struct vcpu *v)
     {
         struct hvm_ioreq_vcpu *sv;

+        if ( s->internal )
+            continue;
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -193,6 +196,9 @@ bool handle_hvm_io_completion(struct vcpu *v)
     {
         struct hvm_ioreq_vcpu *sv;

+        if ( s->internal )
+            continue;
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -431,6 +437,9 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
+        if ( s->internal )
+            continue;
+
         if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
             found = true;
@@ -696,15 +705,18 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;

-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    if ( !s->internal )
+    {
+        hvm_remove_ioreq_gfn(s, false);
+        hvm_remove_ioreq_gfn(s, true);

-    s->enabled = true;
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+            hvm_update_ioreq_evtchn(s, sv);
+    }

-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+    s->enabled = true;

  done:
     spin_unlock(&s->lock);
@@ -717,8 +729,11 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     if ( !s->enabled )
         goto done;

-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    if ( !s->internal )
+    {
+        hvm_add_ioreq_gfn(s, true);
+        hvm_add_ioreq_gfn(s, false);
+    }

     s->enabled = false;

@@ -728,40 +743,47 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)

 static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+                                 ioservid_t id, bool internal)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
     int rc;

+    s->internal = internal;
     s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
     spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;

     rc = hvm_ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;

-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
+    if ( !internal )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
+        get_knownalive_domain(currd);
+
+        s->emulator = currd;
+        INIT_LIST_HEAD(&s->ioreq_vcpu_list);
+        spin_lock_init(&s->bufioreq_lock);
+
+        s->ioreq.gfn = INVALID_GFN;
+        s->bufioreq.gfn = INVALID_GFN;
+
+        s->bufioreq_handling = bufioreq_handling;
+
+        for_each_vcpu ( d, v )
+        {
+            rc = hvm_ioreq_server_add_vcpu(s, v);
+            if ( rc )
+                goto fail_add;
+        }
     }
+    else
+        s->handler = NULL;

     return 0;

  fail_add:
+    ASSERT(!internal);
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);

@@ -774,27 +796,31 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);

     hvm_ioreq_server_free_rangesets(s);

-    put_domain(s->emulator);
+    if ( !s->internal )
+    {
+        hvm_ioreq_server_remove_all_vcpus(s);
+
+        /*
+         * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+         *       hvm_ioreq_server_free_pages() in that order.
+         *       This is because the former will do nothing if the pages
+         *       are not mapped, leaving the page to be freed by the latter.
+         *       However if the pages are mapped then the former will set
+         *       the page_info pointer to NULL, meaning the latter will do
+         *       nothing.
+         */
+        hvm_ioreq_server_unmap_pages(s);
+        hvm_ioreq_server_free_pages(s);
+
+        put_domain(s->emulator);
+    }
 }

 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+                            ioservid_t *id, bool internal)
 {
     struct hvm_ioreq_server *s;
     unsigned int i;
@@ -826,7 +852,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
      */
     set_ioreq_server(d, i, s);

-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i, internal);
     if ( rc )
     {
         set_ioreq_server(d, i, NULL);
@@ -863,7 +889,8 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
         goto out;

     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /* NB: internal servers cannot be destroyed. */
+    if ( s->internal || s->emulator != current->domain )
         goto out;

     domain_pause(d);
@@ -908,7 +935,11 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
         goto out;

     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to fetch information about internal
+     *     ioreq servers.
+     */
+    if ( s->internal || s->emulator != current->domain )
         goto out;

     if ( ioreq_gfn || bufioreq_gfn )
@@ -955,7 +986,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
         goto out;

     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal || s->emulator != current->domain )
         goto out;

     rc = hvm_ioreq_server_alloc_pages(s);
@@ -991,7 +1022,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,

 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
-                                     uint64_t end)
+                                     uint64_t end, bool internal)
 {
     struct hvm_ioreq_server *s;
     struct rangeset *r;
@@ -1009,7 +1040,12 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;

     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to modify the ranges of internal
+     *     servers.
+     */
+    if ( (s->internal != internal) ||
+         (!internal && s->emulator != current->domain) )
         goto out;

     switch ( type )
@@ -1043,7 +1079,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,

 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
-                                         uint64_t end)
+                                         uint64_t end, bool internal)
 {
     struct hvm_ioreq_server *s;
     struct rangeset *r;
@@ -1061,7 +1097,12 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
         goto out;

     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to modify the ranges of internal
+     *     servers.
+     */
+    if ( s->internal != internal ||
+         (!internal && s->emulator != current->domain) )
         goto out;

     switch ( type )
@@ -1122,7 +1163,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;

     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal || s->emulator != current->domain )
         goto out;

     rc = p2m_set_ioreq_server(d, flags, s);
@@ -1142,7 +1183,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 }

 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
+                               bool enabled, bool internal)
 {
     struct hvm_ioreq_server *s;
     int rc;
@@ -1156,7 +1197,8 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         goto out;

     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal != internal ||
+         (!internal && s->emulator != current->domain) )
         goto out;

     domain_pause(d);
@@ -1185,6 +1227,8 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
+        if ( s->internal )
+            continue;
         rc = hvm_ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
@@ -1218,7 +1262,11 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);

     FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        if ( s->internal )
+            continue;
         hvm_ioreq_server_remove_vcpu(s, v);
+    }

     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e912f..e8119b26a6 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -24,7 +24,7 @@ bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);

 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
+                            ioservid_t *id, bool internal);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
@@ -34,14 +34,14 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn);
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
-                                     uint64_t end);
+                                     uint64_t end, bool internal);
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
-                                         uint64_t end);
+                                         uint64_t end, bool internal);
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags);
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
+                               bool enabled, bool internal);

 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
-- 
2.22.0

From: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 21 Aug 2019 16:58:59 +0200
Message-ID: <20190821145903.45934-4-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are always processed first, and ioreqs are
dispatched to them by calling the handler function directly. If no
internal server has registered for an ioreq, it is then forwarded to
external callers.

Signed-off-by: Roger Pau Monné
---
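For reference, a sketch of the handler shape implied by this dispatch
contract (handler name made up; return values follow the existing
X86EMUL_* convention used by hvm_send_ioreq()):

static int handler_sketch(struct vcpu *v, ioreq_t *p)
{
    if ( p->dir == IOREQ_READ )
        p->data = ~0UL;     /* reads hand the result back in p->data */

    return X86EMUL_OKAY;    /* or X86EMUL_RETRY / X86EMUL_UNHANDLEABLE */
}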
 xen/arch/x86/hvm/ioreq.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 23ef9b0c02..3fb6fe9585 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1305,6 +1305,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     uint8_t type;
     uint64_t addr;
     unsigned int id;
+    bool internal = true;

     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return NULL;
@@ -1345,11 +1346,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         addr = p->addr;
     }

+ retry:
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;

-        if ( !s->enabled )
+        if ( !s->enabled || s->internal != internal )
             continue;

         r = s->range[type];
@@ -1387,6 +1389,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         }
     }

+    if ( internal )
+    {
+        internal = false;
+        goto retry;
+    }
+
     return NULL;
 }

@@ -1492,9 +1500,18 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,

     ASSERT(s);

+    if ( s->internal && buffered )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
     if ( buffered )
         return hvm_send_buffered_ioreq(s, proto_p);

+    if ( s->internal )
+        return s->handler(curr, proto_p);
+
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;

-- 
2.22.0

From: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 21 Aug 2019 16:59:00 +0200
Message-ID: <20190821145903.45934-5-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Provide a routine to register the handler for an internal ioreq server.
Note that the handler can only be set once.

Signed-off-by: Roger Pau Monné
---
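Expected usage (sketch only; handler name made up), combined with the
creation support added earlier in the series:

rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
if ( !rc )
    rc = hvm_add_ioreq_handler(d, id, handler_sketch);

/* Any further hvm_add_ioreq_handler() call for this server returns -EBUSY. */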
 xen/arch/x86/hvm/ioreq.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |  3 +++
 2 files changed, 35 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 3fb6fe9585..d8fea191aa 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -486,6 +486,38 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }

+int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(struct vcpu *v, ioreq_t *))
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( !s->internal )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+    if ( s->handler != NULL )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->handler = handler;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e8119b26a6..2131c944d4 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -55,6 +55,9 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);

 void hvm_ioreq_init(struct domain *d);

+int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(struct vcpu *v, ioreq_t *));
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */

 /*
-- 
2.22.0

From: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 21 Aug 2019 16:59:01 +0200
Message-ID: <20190821145903.45934-6-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 5/7] ioreq: allow decoding accesses to MMCFG regions
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Pick up on the infrastructure already added for vPCI and allow ioreq to
decode accesses to MMCFG regions registered for a domain. This
infrastructure is still only accessible from internal callers, so MMCFG
regions can only be registered from the internal domain builder used by
PVH dom0.

Note that the vPCI infrastructure to decode and handle accesses to MMCFG
regions will be removed in the following patches, when vPCI is switched
to become an internal ioreq server.

Signed-off-by: Roger Pau Monné
---
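As a worked example of the decode performed below (sketch with made-up
numbers): for an MMCFG region registered at 0xe0000000 with start_bus 0,
an access to 0xe0100008 decodes to bus 1, device 0, function 0,
register 8, because ECAM dedicates 1MiB per bus, 32KiB per device and
4KiB per function:

paddr_t off = 0xe0100008 - 0xe0000000;    /* offset into the region */
unsigned int bus  = (off >> 20) & 0xff;   /* 1 */
unsigned int slot = (off >> 15) & 0x1f;   /* 0 */
unsigned int func = (off >> 12) & 0x7;    /* 0 */
unsigned int reg  = off & 0xfff;          /* 8 (PCI_CFG_SPACE_EXP_SIZE - 1 mask) */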
 xen/arch/x86/hvm/hvm.c          |  2 +-
 xen/arch/x86/hvm/io.c           | 36 +++++---------
 xen/arch/x86/hvm/ioreq.c        | 88 +++++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/io.h    | 12 ++++-
 xen/include/asm-x86/hvm/ioreq.h |  6 +++
 5 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 029eea3b85..b7a53377a5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -741,7 +741,7 @@ void hvm_domain_destroy(struct domain *d)
         xfree(ioport);
     }

-    destroy_vpci_mmcfg(d);
+    hvm_ioreq_free_mmcfg(d);
 }

 static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a5b0a23f06..6585767c03 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -279,6 +279,18 @@ unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
     return CF8_ADDR_LO(cf8) | (addr & 3);
 }

+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf)
+{
+    addr -= mmcfg->addr;
+    sbdf->bdf = MMCFG_BDF(addr);
+    sbdf->bus += mmcfg->start_bus;
+    sbdf->seg = mmcfg->segment;
+
+    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
+}
+
+
 /* Do some sanity checks. */
 static bool vpci_access_allowed(unsigned int reg, unsigned int len)
 {
@@ -383,14 +395,6 @@ void register_vpci_portio_handler(struct domain *d)
     handler->ops = &vpci_portio_ops;
 }

-struct hvm_mmcfg {
-    struct list_head next;
-    paddr_t addr;
-    unsigned int size;
-    uint16_t segment;
-    uint8_t start_bus;
-};
-
 /* Handlers to trap PCI MMCFG config accesses. */
 static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
                                                paddr_t addr)
@@ -558,22 +562,6 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
     return 0;
 }

-void destroy_vpci_mmcfg(struct domain *d)
-{
-    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
-
-    write_lock(&d->arch.hvm.mmcfg_lock);
-    while ( !list_empty(mmcfg_regions) )
-    {
-        struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
-                                                   struct hvm_mmcfg, next);
-
-        list_del(&mmcfg->next);
-        xfree(mmcfg);
-    }
-    write_unlock(&d->arch.hvm.mmcfg_lock);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d8fea191aa..10c0f7a574 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -690,6 +690,22 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }

+void hvm_ioreq_free_mmcfg(struct domain *d)
+{
+    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    while ( !list_empty(mmcfg_regions) )
+    {
+        struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
+                                                   struct hvm_mmcfg, next);
+
+        list_del(&mmcfg->next);
+        xfree(mmcfg);
+    }
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+}
+
 static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
                                             ioservid_t id)
 {
@@ -1329,6 +1345,19 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }

+static const struct hvm_mmcfg *mmcfg_find(const struct domain *d,
+                                          paddr_t addr)
+{
+    const struct hvm_mmcfg *mmcfg;
+
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
+            return mmcfg;
+
+    return NULL;
+}
+
+
 struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                  ioreq_t *p)
 {
@@ -1338,27 +1367,34 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     uint64_t addr;
     unsigned int id;
     bool internal = true;
+    const struct hvm_mmcfg *mmcfg;

     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return NULL;

     cf8 = d->arch.hvm.pci_cf8;

-    if ( p->type == IOREQ_TYPE_PIO &&
-         (p->addr & ~3) == 0xcfc &&
-         CF8_ENABLED(cf8) )
+    read_lock(&d->arch.hvm.mmcfg_lock);
+    if ( (p->type == IOREQ_TYPE_PIO &&
+          (p->addr & ~3) == 0xcfc &&
+          CF8_ENABLED(cf8)) ||
+         (p->type == IOREQ_TYPE_COPY &&
+          (mmcfg = mmcfg_find(d, p->addr)) != NULL) )
     {
         uint32_t x86_fam;
         pci_sbdf_t sbdf;
         unsigned int reg;

-        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
+        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
+                                                              &sbdf)
+                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
+                                                                &sbdf);

         /* PCI config data cycle */
         type = XEN_DMOP_IO_RANGE_PCI;
         addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
-        if ( CF8_ADDR_HI(cf8) &&
+        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
             (x86_fam = get_cpu_family(
                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
@@ -1377,6 +1413,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
         addr = p->addr;
     }
+    read_unlock(&d->arch.hvm.mmcfg_lock);

  retry:
     FOR_EACH_IOREQ_SERVER(d, id, s)
@@ -1629,6 +1666,47 @@ void hvm_ioreq_init(struct domain *d)
         register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }

+int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
+                             unsigned int start_bus, unsigned int end_bus,
+                             unsigned int seg)
+{
+    struct hvm_mmcfg *mmcfg, *new;
+
+    if ( start_bus > end_bus )
+        return -EINVAL;
+
+    new = xmalloc(struct hvm_mmcfg);
+    if ( !new )
+        return -ENOMEM;
+
+    new->addr = addr + (start_bus << 20);
+    new->start_bus = start_bus;
+    new->segment = seg;
+    new->size = (end_bus - start_bus + 1) << 20;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( new->addr < mmcfg->addr + mmcfg->size &&
+             mmcfg->addr < new->addr + new->size )
+        {
+            int ret = -EEXIST;
+
+            if ( new->addr == mmcfg->addr &&
+                 new->start_bus == mmcfg->start_bus &&
+                 new->segment == mmcfg->segment &&
+                 new->size == mmcfg->size )
+                ret = 0;
+            write_unlock(&d->arch.hvm.mmcfg_lock);
+            xfree(new);
+            return ret;
+        }
+
+    list_add(&new->next, &d->arch.hvm.mmcfg_regions);
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 7ceb119b64..26f0489171 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -165,9 +165,19 @@ void stdvga_deinit(struct domain *d);

 extern void hvm_dpci_msi_eoi(struct domain *d, int vector);

-/* Decode a PCI port IO access into a bus/slot/func/reg. */
+struct hvm_mmcfg {
+    struct list_head next;
+    paddr_t addr;
+    unsigned int size;
+    uint16_t segment;
+    uint8_t start_bus;
+};
+
+/* Decode a PCI port IO or MMCFG access into a bus/slot/func/reg. */
 unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
                                  pci_sbdf_t *sbdf);
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf);

 /*
  * HVM port IO handler that performs forwarding of guest IO ports into machine
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 2131c944d4..10b9586885 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -58,6 +58,12 @@ void hvm_ioreq_init(struct domain *d);
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));

+int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
+                             unsigned int start_bus, unsigned int end_bus,
+                             unsigned int seg);
+
+void hvm_ioreq_free_mmcfg(struct domain *d);
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */

 /*
-- 
2.22.0

From: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 21 Aug 2019 16:59:02 +0200
Message-ID: <20190821145903.45934-7-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 6/7] vpci: register as an internal ioreq server
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
    Jan Beulich, Roger Pau Monne

Switch vPCI to become an internal ioreq server, and hence drop all the
vPCI-specific decoding and trapping of PCI IO ports and MMCFG regions.

This unifies the vPCI code with the ioreq infrastructure, opening the
door to domains that have PCI accesses handled by vPCI and by other
ioreq servers at the same time.

Signed-off-by: Roger Pau Monné
---
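A sketch of the expected registration flow (the handler name is made up;
the real implementation lives in the xen/drivers/vpci/vpci.c hunk of
this patch), built on the internal-server primitives added earlier in
the series:

int vpci_register_ioreq(struct domain *d)
{
    ioservid_t id;
    int rc;

    if ( !has_vpci(d) )
        return 0;

    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
    if ( !rc )
        rc = hvm_add_ioreq_handler(d, id, vpci_ioreq_handler);

    return rc;
}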
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/dom0_build.c       |   9 +-
 xen/arch/x86/hvm/hvm.c              |   5 +-
 xen/arch/x86/hvm/io.c               | 272 ----------------------------
 xen/arch/x86/hvm/ioreq.c            |   5 +
 xen/arch/x86/physdev.c              |   7 +-
 xen/drivers/passthrough/x86/iommu.c |   2 +-
 xen/drivers/vpci/vpci.c             |  54 ++++++
 xen/include/asm-x86/hvm/io.h        |   2 +-
 xen/include/xen/vpci.h              |   3 +
 9 files changed, 77 insertions(+), 282 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 8845399ae9..7925189fed 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -29,6 +29,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1117,10 +1118,10 @@ static void __hwdom_init pvh_setup_mmcfg(struct domain *d)
 
     for ( i = 0; i < pci_mmcfg_config_num; i++ )
     {
-        rc = register_vpci_mmcfg_handler(d, pci_mmcfg_config[i].address,
-                                         pci_mmcfg_config[i].start_bus_number,
-                                         pci_mmcfg_config[i].end_bus_number,
-                                         pci_mmcfg_config[i].pci_segment);
+        rc = hvm_ioreq_register_mmcfg(d, pci_mmcfg_config[i].address,
+                                      pci_mmcfg_config[i].start_bus_number,
+                                      pci_mmcfg_config[i].end_bus_number,
+                                      pci_mmcfg_config[i].pci_segment);
         if ( rc )
             printk("Unable to setup MMCFG handler at %#lx for segment %u\n",
                    pci_mmcfg_config[i].address,
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b7a53377a5..3fcf46779b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -644,10 +644,13 @@ int hvm_domain_initialise(struct domain *d)
     d->arch.hvm.io_bitmap = hvm_io_bitmap;
 
     register_g2m_portio_handler(d);
-    register_vpci_portio_handler(d);
 
     hvm_ioreq_init(d);
 
+    rc = vpci_register_ioreq(d);
+    if ( rc )
+        goto fail1;
+
     hvm_init_guest_time(d);
 
     d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 6585767c03..9c323d17ef 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -290,278 +290,6 @@ unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
     return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
 }
 
-
-/* Do some sanity checks. */
-static bool vpci_access_allowed(unsigned int reg, unsigned int len)
-{
-    /* Check access size. */
-    if ( len != 1 && len != 2 && len != 4 && len != 8 )
-        return false;
-
-    /* Check that access is size aligned. */
-    if ( (reg & (len - 1)) )
-        return false;
-
-    return true;
-}
-
-/* vPCI config space IO ports handlers (0xcf8/0xcfc). */
-static bool vpci_portio_accept(const struct hvm_io_handler *handler,
-                               const ioreq_t *p)
-{
-    return (p->addr == 0xcf8 && p->size == 4) || (p->addr & ~3) == 0xcfc;
-}
-
-static int vpci_portio_read(const struct hvm_io_handler *handler,
-                            uint64_t addr, uint32_t size, uint64_t *data)
-{
-    const struct domain *d = current->domain;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-    uint32_t cf8;
-
-    *data = ~(uint64_t)0;
-
-    if ( addr == 0xcf8 )
-    {
-        ASSERT(size == 4);
-        *data = d->arch.hvm.pci_cf8;
-        return X86EMUL_OKAY;
-    }
-
-    ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
-    if ( !CF8_ENABLED(cf8) )
-        return X86EMUL_UNHANDLEABLE;
-
-    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
-
-    if ( !vpci_access_allowed(reg, size) )
-        return X86EMUL_OKAY;
-
-    *data = vpci_read(sbdf, reg, size);
-
-    return X86EMUL_OKAY;
-}
-
-static int vpci_portio_write(const struct hvm_io_handler *handler,
-                             uint64_t addr, uint32_t size, uint64_t data)
-{
-    struct domain *d = current->domain;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-    uint32_t cf8;
-
-    if ( addr == 0xcf8 )
-    {
-        ASSERT(size == 4);
-        d->arch.hvm.pci_cf8 = data;
-        return X86EMUL_OKAY;
-    }
-
-    ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
-    if ( !CF8_ENABLED(cf8) )
-        return X86EMUL_UNHANDLEABLE;
-
-    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
-
-    if ( !vpci_access_allowed(reg, size) )
-        return X86EMUL_OKAY;
-
-    vpci_write(sbdf, reg, size, data);
-
-    return X86EMUL_OKAY;
-}
-
-static const struct hvm_io_ops vpci_portio_ops = {
-    .accept = vpci_portio_accept,
-    .read = vpci_portio_read,
-    .write = vpci_portio_write,
-};
-
-void register_vpci_portio_handler(struct domain *d)
-{
-    struct hvm_io_handler *handler;
-
-    if ( !has_vpci(d) )
-        return;
-
-    handler = hvm_next_io_handler(d);
-    if ( !handler )
-        return;
-
-    handler->type = IOREQ_TYPE_PIO;
-    handler->ops = &vpci_portio_ops;
-}
-
-/* Handlers to trap PCI MMCFG config accesses. */
-static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
-                                               paddr_t addr)
-{
-    const struct hvm_mmcfg *mmcfg;
-
-    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
-        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
-            return mmcfg;
-
-    return NULL;
-}
-
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr)
-{
-    return vpci_mmcfg_find(d, addr);
-}
-
-static unsigned int vpci_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
-                                           paddr_t addr, pci_sbdf_t *sbdf)
-{
-    addr -= mmcfg->addr;
-    sbdf->bdf = MMCFG_BDF(addr);
-    sbdf->bus += mmcfg->start_bus;
-    sbdf->seg = mmcfg->segment;
-
-    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
-}
-
-static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr)
-{
-    struct domain *d = v->domain;
-    bool found;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    found = vpci_mmcfg_find(d, addr);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    return found;
-}
-
-static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr,
-                           unsigned int len, unsigned long *data)
-{
-    struct domain *d = v->domain;
-    const struct hvm_mmcfg *mmcfg;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-
-    *data = ~0ul;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = vpci_mmcfg_find(d, addr);
-    if ( !mmcfg )
-    {
-        read_unlock(&d->arch.hvm.mmcfg_lock);
-        return X86EMUL_RETRY;
-    }
-
-    reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    if ( !vpci_access_allowed(reg, len) ||
-         (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
-        return X86EMUL_OKAY;
-
-    /*
-     * According to the PCIe 3.1A specification:
-     *  - Configuration Reads and Writes must usually be DWORD or smaller
-     *    in size.
-     *  - Because Root Complex implementations are not required to support
-     *    accesses to a RCRB that cross DW boundaries [...] software
-     *    should take care not to cause the generation of such accesses
-     *    when accessing a RCRB unless the Root Complex will support the
-     *    access.
-     *  Xen however supports 8byte accesses by splitting them into two
-     *  4byte accesses.
-     */
-    *data = vpci_read(sbdf, reg, min(4u, len));
-    if ( len == 8 )
-        *data |= (uint64_t)vpci_read(sbdf, reg + 4, 4) << 32;
-
-    return X86EMUL_OKAY;
-}
-
-static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr,
-                            unsigned int len, unsigned long data)
-{
-    struct domain *d = v->domain;
-    const struct hvm_mmcfg *mmcfg;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = vpci_mmcfg_find(d, addr);
-    if ( !mmcfg )
-    {
-        read_unlock(&d->arch.hvm.mmcfg_lock);
-        return X86EMUL_RETRY;
-    }
-
-    reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    if ( !vpci_access_allowed(reg, len) ||
-         (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
-        return X86EMUL_OKAY;
-
-    vpci_write(sbdf, reg, min(4u, len), data);
-    if ( len == 8 )
-        vpci_write(sbdf, reg + 4, 4, data >> 32);
-
-    return X86EMUL_OKAY;
-}
-
-static const struct hvm_mmio_ops vpci_mmcfg_ops = {
-    .check = vpci_mmcfg_accept,
-    .read = vpci_mmcfg_read,
-    .write = vpci_mmcfg_write,
-};
-
-int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
-                                unsigned int start_bus, unsigned int end_bus,
-                                unsigned int seg)
-{
-    struct hvm_mmcfg *mmcfg, *new;
-
-    ASSERT(is_hardware_domain(d));
-
-    if ( start_bus > end_bus )
-        return -EINVAL;
-
-    new = xmalloc(struct hvm_mmcfg);
-    if ( !new )
-        return -ENOMEM;
-
-    new->addr = addr + (start_bus << 20);
-    new->start_bus = start_bus;
-    new->segment = seg;
-    new->size = (end_bus - start_bus + 1) << 20;
-
-    write_lock(&d->arch.hvm.mmcfg_lock);
-    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
-        if ( new->addr < mmcfg->addr + mmcfg->size &&
-             mmcfg->addr < new->addr + new->size )
-        {
-            int ret = -EEXIST;
-
-            if ( new->addr == mmcfg->addr &&
-                 new->start_bus == mmcfg->start_bus &&
-                 new->segment == mmcfg->segment &&
-                 new->size == mmcfg->size )
-                ret = 0;
-            write_unlock(&d->arch.hvm.mmcfg_lock);
-            xfree(new);
-            return ret;
-        }
-
-    if ( list_empty(&d->arch.hvm.mmcfg_regions) )
-        register_mmio_handler(d, &vpci_mmcfg_ops);
-
-    list_add(&new->next, &d->arch.hvm.mmcfg_regions);
-    write_unlock(&d->arch.hvm.mmcfg_lock);
-
-    return 0;
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 10c0f7a574..b2582bd3a0 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1707,6 +1707,11 @@ int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
     return 0;
 }
 
+bool hvm_is_mmcfg_address(const struct domain *d, paddr_t addr)
+{
+    return mmcfg_find(d, addr);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 3a3c15890b..a48b220fc3 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -562,9 +563,9 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
          * For HVM (PVH) domains try to add the newly found MMCFG to the
          * domain.
          */
-        ret = register_vpci_mmcfg_handler(currd, info.address,
-                                          info.start_bus, info.end_bus,
-                                          info.segment);
+        ret = hvm_ioreq_register_mmcfg(currd, info.address,
+                                       info.start_bus, info.end_bus,
+                                       info.segment);
     }
 
     break;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index fd05075bb5..e0f3da91ce 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -244,7 +244,7 @@ static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
      * TODO: runtime added MMCFG regions are not checked to make sure they
      * don't overlap with already mapped regions, thus preventing trapping.
      */
-    if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
+    if ( hvm_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
         return false;
 
     return true;
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 758d9420e7..510e3ee771 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -20,6 +20,8 @@
 #include
 #include
 
+#include
+
 /* Internal struct to store the emulated PCI registers. */
 struct vpci_register {
     vpci_read_t *read;
@@ -473,6 +475,58 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
     spin_unlock(&pdev->vpci->lock);
 }
 
+static int ioreq_handler(struct vcpu *v, ioreq_t *req)
+{
+    pci_sbdf_t sbdf;
+
+    if ( req->type != IOREQ_TYPE_PCI_CONFIG || req->data_is_ptr )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    sbdf.sbdf = req->addr >> 32;
+
+    if ( req->dir )
+        req->data = vpci_read(sbdf, req->addr, req->size);
+    else
+        vpci_write(sbdf, req->addr, req->size, req->data);
+
+    return X86EMUL_OKAY;
+}
+
+int vpci_register_ioreq(struct domain *d)
+{
+    ioservid_t id;
+    int rc;
+
+    if ( !has_vpci(d) )
+        return 0;
+
+    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
+    if ( rc )
+        return rc;
+
+    rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
+    if ( rc )
+        return rc;
+
+    if ( is_hardware_domain(d) )
+    {
+        /* Handle all devices in vpci. */
+        rc = hvm_map_io_range_to_ioreq_server(d, id, XEN_DMOP_IO_RANGE_PCI,
+                                              0, ~(uint64_t)0, true);
+        if ( rc )
+            return rc;
+    }
+
+    rc = hvm_set_ioreq_server_state(d, id, true, true);
+    if ( rc )
+        return rc;
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 26f0489171..75a24f33bc 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -196,7 +196,7 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
 void destroy_vpci_mmcfg(struct domain *d);
 
 /* Check if an address is between a MMCFG region for a domain. */
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr);
+bool hvm_is_mmcfg_address(const struct domain *d, paddr_t addr);
 
 #endif /* __ASM_X86_HVM_IO_H__ */
 
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 4cf233c779..666dd1ca68 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -23,6 +23,9 @@ typedef int vpci_register_init_t(struct pci_dev *dev);
   static vpci_register_init_t *const x##_entry  \
                __used_section(".data.vpci." p) = x
 
+/* Register vPCI handler with ioreq. */
+int vpci_register_ioreq(struct domain *d);
+
 /* Add vPCI handlers to device. */
 int __must_check vpci_add_handlers(struct pci_dev *dev);
 
--
2.22.0
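A note on the 8-byte handling visible in the removed vpci_mmcfg_read()/vpci_mmcfg_write() above: per the PCIe comment quoted in the hunk, config accesses wider than 4 bytes are split into two 4-byte accesses. Sketched standalone (demo_read4() is an invented stand-in for vpci_read()):

#include <stdint.h>
#include <stdio.h>

/* Invented stand-in for vpci_read(): a distinct pattern per dword. */
static uint32_t demo_read4(unsigned int reg)
{
    return 0x11111111u * ((reg >> 2) + 1);
}

/* An 8-byte access is performed as two 4-byte reads, high half shifted. */
static uint64_t demo_read(unsigned int reg, unsigned int len)
{
    uint64_t data = demo_read4(reg);

    if ( len == 8 )
        data |= (uint64_t)demo_read4(reg + 4) << 32;

    return data;
}

int main(void)
{
    printf("read8 @0 = %#llx\n", (unsigned long long)demo_read(0, 8));
    return 0;
}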
From nobody Mon Apr 29 13:46:40 2024
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 21 Aug 2019 16:59:03 +0200
Message-ID: <20190821145903.45934-8-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations...
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
 Jan Beulich, Roger Pau Monne

...and switch vPCI to use this infrastructure for long-running physmap
modification operations. This gets rid of the vPCI-specific
modifications to handle_hvm_io_completion and generalizes the support
for long-running operations so that other internal ioreq servers can
use it.

Such support is implemented as a handler that internal ioreq servers
can register and that is called to check for pending work. Returning
true from this handler prevents the vcpu from running until the
handler returns false.
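The contract can be sketched standalone as follows (struct vcpu here is a stub and example_pending() is invented; the batching mirrors the intent described above, with the while loop standing in for the repeated polling that the scheduler softirq provides inside Xen):

#include <stdbool.h>
#include <stdio.h>

struct vcpu { int id; };

/* Invented long-running operation state: units of work left. */
static unsigned long remaining = 150;

/*
 * A pending hook in the style this patch introduces: do a bounded chunk
 * of work per call, then report whether more is left. Returning true
 * keeps the vcpu from resuming.
 */
static bool example_pending(struct vcpu *v)
{
    unsigned long batch = remaining < 64 ? remaining : 64;

    remaining -= batch; /* stand-in for the real physmap work */
    printf("vcpu%d: processed %lu, %lu left\n", v->id, batch, remaining);

    return remaining != 0;
}

int main(void)
{
    struct vcpu v = { .id = 0 };

    /* In Xen the re-poll happens via SCHEDULE_SOFTIRQ, not a loop. */
    while ( example_pending(&v) )
        ;

    return 0;
}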
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c         | 55 ++++++++++++++++++++++++++++----
 xen/drivers/vpci/vpci.c          |  3 ++
 xen/include/asm-x86/hvm/domain.h |  1 +
 xen/include/asm-x86/hvm/ioreq.h  |  2 ++
 4 files changed, 55 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index b2582bd3a0..8e160a0a14 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -186,18 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
         if ( s->internal )
+        {
+            if ( s->pending && s->pending(v) )
+            {
+                /*
+                 * Need to raise a scheduler irq in order to prevent the guest
+                 * vcpu from resuming execution.
+                 *
+                 * Note this is not required for external ioreq operations
+                 * because in that case the vcpu is marked as blocked, but this
+                 * cannot be done for long-running internal operations, since
+                 * it would prevent the vcpu from being scheduled and thus the
+                 * long running operation from finishing.
+                 */
+                raise_softirq(SCHEDULE_SOFTIRQ);
+                return false;
+            }
             continue;
+        }
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -518,6 +529,38 @@ int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
     return rc;
 }
 
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v))
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( !s->internal )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+    if ( s->pending != NULL )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->pending = pending;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 510e3ee771..54b0f31612 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -508,6 +508,9 @@ int vpci_register_ioreq(struct domain *d)
         return rc;
 
     rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
+    if ( rc )
+        return rc;
+    rc = hvm_add_ioreq_pending_handler(d, id, vpci_process_pending);
     if ( rc )
         return rc;
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f0be303517..80a38ffe48 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -73,6 +73,7 @@ struct hvm_ioreq_server {
         };
         struct {
             int (*handler)(struct vcpu *v, ioreq_t *);
+            bool (*pending)(struct vcpu *v);
         };
     };
 };
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 10b9586885..cc3e27d059 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -57,6 +57,8 @@ void hvm_ioreq_init(struct domain *d);
 
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v));
 
 int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
                              unsigned int start_bus, unsigned int end_bus,
--
2.22.0