From nobody Thu May 16 17:41:19 2024
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 30 Sep 2019 15:32:29 +0200
Message-ID: <20190930133238.49868-2-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
References: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 01/10] ioreq: terminate cf8 handling at hypervisor level
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Do not forward accesses to cf8 to external emulators: decoding of PCI
accesses is handled by Xen, and emulators can request handling of
config space accesses for their devices using the provided ioreq
interface. Fully terminate cf8 accesses at the hypervisor level by
extending the existing hvm_access_cf8 helper to also handle register
reads, and by always returning X86EMUL_OKAY so that the emulation is
terminated.
Note that without this change, in the absence of an external emulator
catching accesses to cf8, read requests to the register would
misbehave, since the internal ioreq handler did not handle them.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
Changes since v2:
 - Allow ioreq servers to map 0xcf8 and 0xcfc, even if those are
   handled by the hypervisor.

Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/ioreq.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d347144096..5e503ce498 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1518,11 +1518,15 @@ static int hvm_access_cf8(
 {
     struct domain *d = current->domain;

-    if ( dir == IOREQ_WRITE && bytes == 4 )
+    if ( bytes != 4 )
+        return X86EMUL_OKAY;
+
+    if ( dir == IOREQ_WRITE )
         d->arch.hvm.pci_cf8 = *val;
+    else
+        *val = d->arch.hvm.pci_cf8;

-    /* We always need to fall through to the catch all emulator */
-    return X86EMUL_UNHANDLEABLE;
+    return X86EMUL_OKAY;
 }

 void hvm_ioreq_init(struct domain *d)
-- 
2.23.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 30 Sep 2019 15:32:30 +0200
Message-ID: <20190930133238.49868-3-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
References: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 02/10] ioreq: switch selection and forwarding to use ioservid_t
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Paul Durrant, Ian Jackson, Tim Deegan, Julien Grall,
    Jan Beulich, Roger Pau Monne

hvm_select_ioreq_server and hvm_send_ioreq were both using struct
hvm_ioreq_server pointers directly; switch to using ioservid_t in
order to select and forward ioreqs.

This is a preparatory change: future patches will use the ioreq server
id in order to differentiate between internal and external ioreq
servers.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
---
Changes since v2:
 - Don't hardcode 0xffff for XEN_INVALID_IOSERVID.

Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/dm.c           |  2 +-
 xen/arch/x86/hvm/emulate.c      | 14 +++++++-------
 xen/arch/x86/hvm/ioreq.c        | 24 ++++++++++++------------
 xen/arch/x86/hvm/stdvga.c       |  8 ++++----
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++----------
 xen/include/asm-x86/hvm/ioreq.h |  5 ++---
 xen/include/asm-x86/p2m.h       |  9 ++++-----
 xen/include/public/hvm/dm_op.h  |  1 +
 8 files changed, 41 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d6d0e8be89..c2fca9f729 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -263,7 +263,7 @@ static int set_mem_type(struct domain *d,
             return -EOPNOTSUPP;

         /* Do not change to HVMMEM_ioreq_server if no ioreq server mapped. */
-        if ( !p2m_get_ioreq_server(d, &flags) )
+        if ( p2m_get_ioreq_server(d, &flags) == XEN_INVALID_IOSERVID )
             return -EINVAL;
     }

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 637034b6a1..c37bd020c8 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -255,7 +255,7 @@ static int hvmemul_do_io(
      * However, there's no cheap approach to avoid above situations in xen,
      * so the device model side needs to check the incoming ioreq event.
      */
-    struct hvm_ioreq_server *s = NULL;
+    ioservid_t id = XEN_INVALID_IOSERVID;
     p2m_type_t p2mt = p2m_invalid;

     if ( is_mmio )
@@ -268,9 +268,9 @@ static int hvmemul_do_io(
         {
             unsigned int flags;

-            s = p2m_get_ioreq_server(currd, &flags);
+            id = p2m_get_ioreq_server(currd, &flags);

-            if ( s == NULL )
+            if ( id == XEN_INVALID_IOSERVID )
             {
                 rc = X86EMUL_RETRY;
                 vio->io_req.state = STATE_IOREQ_NONE;
@@ -290,18 +290,18 @@ static int hvmemul_do_io(
         }
     }

-    if ( !s )
-        s = hvm_select_ioreq_server(currd, &p);
+    if ( id == XEN_INVALID_IOSERVID )
+        id = hvm_select_ioreq_server(currd, &p);

     /* If there is no suitable backing DM, just ignore accesses */
-    if ( !s )
+    if ( id == XEN_INVALID_IOSERVID )
     {
         rc = hvm_process_io_intercept(&null_handler, &p);
         vio->io_req.state = STATE_IOREQ_NONE;
     }
     else
     {
-        rc = hvm_send_ioreq(s, &p, 0);
+        rc = hvm_send_ioreq(id, &p, 0);
         if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
             vio->io_req.state = STATE_IOREQ_NONE;
         else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 5e503ce498..ed0142c4e1 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -39,6 +39,7 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
     ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    BUILD_BUG_ON(MAX_NR_IOREQ_SERVERS >= XEN_INVALID_IOSERVID);

     d->arch.hvm.ioreq_server.server[id] = s;
 }
@@ -868,7 +869,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)

     domain_pause(d);

-    p2m_set_ioreq_server(d, 0, s);
+    p2m_set_ioreq_server(d, 0, id);

     hvm_ioreq_server_disable(s);

@@ -1125,7 +1126,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;

-    rc = p2m_set_ioreq_server(d, flags, s);
+    rc = p2m_set_ioreq_server(d, flags, id);

  out:
    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1249,8 +1250,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }

-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
 {
     struct hvm_ioreq_server *s;
     uint32_t cf8;
@@ -1259,7 +1259,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     unsigned int id;

     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
-        return NULL;
+        return XEN_INVALID_IOSERVID;

     cf8 = d->arch.hvm.pci_cf8;

@@ -1314,7 +1314,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             start = addr;
             end = start + p->size - 1;
             if ( rangeset_contains_range(r, start, end) )
-                return s;
+                return id;

             break;

@@ -1323,7 +1323,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             end = hvm_mmio_last_byte(p);

             if ( rangeset_contains_range(r, start, end) )
-                return s;
+                return id;

             break;

@@ -1332,14 +1332,14 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             {
                 p->type = IOREQ_TYPE_PCI_CONFIG;
                 p->addr = addr;
-                return s;
+                return id;
             }

             break;
         }
     }

-    return NULL;
+    return XEN_INVALID_IOSERVID;
 }

 static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
@@ -1435,12 +1435,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     return X86EMUL_OKAY;
 }

-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered)
+int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
     struct hvm_ioreq_vcpu *sv;
+    struct hvm_ioreq_server *s = get_ioreq_server(d, id);

     ASSERT(s);

@@ -1506,7 +1506,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;

-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(id, p, buffered) == X86EMUL_UNHANDLEABLE )
             failed++;
     }

diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index bd398dbb1b..a689269712 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
         .dir = IOREQ_WRITE,
         .data = data,
     };
-    struct hvm_ioreq_server *srv;
+    ioservid_t id;

     if ( !stdvga_cache_is_enabled(s) || !s->stdvga )
         goto done;
@@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
     }

  done:
-    srv = hvm_select_ioreq_server(current->domain, &p);
-    if ( !srv )
+    id = hvm_select_ioreq_server(current->domain, &p);
+    if ( id == XEN_INVALID_IOSERVID )
         return X86EMUL_UNHANDLEABLE;

-    return hvm_send_ioreq(srv, &p, 1);
+    return hvm_send_ioreq(id, &p, 1);
 }

 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e5e4349dea..c0edb9a319 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -102,6 +102,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m_pt_init(p2m);

     spin_lock_init(&p2m->ioreq.lock);
+    p2m->ioreq.server = XEN_INVALID_IOSERVID;

     return ret;
 }
@@ -361,7 +362,7 @@ void p2m_memory_type_changed(struct domain *d)

 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
-                         struct hvm_ioreq_server *s)
+                         ioservid_t id)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -376,16 +377,16 @@ int p2m_set_ioreq_server(struct domain *d,
     if ( flags == 0 )
     {
         rc = -EINVAL;
-        if ( p2m->ioreq.server != s )
+        if ( p2m->ioreq.server != id )
             goto out;

-        p2m->ioreq.server = NULL;
+        p2m->ioreq.server = XEN_INVALID_IOSERVID;
         p2m->ioreq.flags = 0;
     }
     else
     {
         rc = -EBUSY;
-        if ( p2m->ioreq.server != NULL )
+        if ( p2m->ioreq.server != XEN_INVALID_IOSERVID )
             goto out;

         /*
@@ -397,7 +398,7 @@ int p2m_set_ioreq_server(struct domain *d,
         if ( read_atomic(&p2m->ioreq.entry_count) )
             goto out;

-        p2m->ioreq.server = s;
+        p2m->ioreq.server = id;
         p2m->ioreq.flags = flags;
     }

@@ -409,19 +410,18 @@ int p2m_set_ioreq_server(struct domain *d,
     return rc;
 }

-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags)
+ioservid_t p2m_get_ioreq_server(struct domain *d, unsigned int *flags)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct hvm_ioreq_server *s;
+    ioservid_t id;

     spin_lock(&p2m->ioreq.lock);

-    s = p2m->ioreq.server;
+    id = p2m->ioreq.server;
     *flags = p2m->ioreq.flags;

     spin_unlock(&p2m->ioreq.lock);
-    return s;
+    return id;
 }

 void p2m_enable_hardware_log_dirty(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e912f..65491c48d2 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -47,9 +47,8 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);

-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p);
+int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p,
                    bool buffered);
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);

diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 94285db1b4..99a1dab311 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -354,7 +354,7 @@ struct p2m_domain {
      * ioreq server who's responsible for the emulation of
      * gfns with specific p2m type(for now, p2m_ioreq_server).
      */
-    struct hvm_ioreq_server *server;
+    ioservid_t server;
     /*
      * flags specifies whether read, write or both operations
      * are to be emulated by an ioreq server.
@@ -819,7 +819,7 @@ static inline p2m_type_t p2m_recalc_type_range(bool recalc, p2m_type_t t,
     if ( !recalc || !p2m_is_changeable(t) )
         return t;

-    if ( t == p2m_ioreq_server && p2m->ioreq.server != NULL )
+    if ( t == p2m_ioreq_server && p2m->ioreq.server != XEN_INVALID_IOSERVID )
         return t;

     return p2m_is_logdirty_range(p2m, gfn_start, gfn_end) ? p2m_ram_logdirty
@@ -938,9 +938,8 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 }

 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
-                         struct hvm_ioreq_server *s);
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags);
+                         ioservid_t id);
+ioservid_t p2m_get_ioreq_server(struct domain *d, unsigned int *flags);

 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
                                    p2m_type_t ot, mfn_t nfn, mfn_t ofn,
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index d3b554d019..ee3963e54f 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -54,6 +54,7 @@
  */

 typedef uint16_t ioservid_t;
+#define XEN_INVALID_IOSERVID ((ioservid_t)~0)

 /*
  * XEN_DMOP_create_ioreq_server: Instantiate a new IOREQ Server for a
-- 
2.23.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 30 Sep 2019 15:32:31 +0200
Message-ID: <20190930133238.49868-4-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
References: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 03/10] ioreq: add fields to allow internal ioreq servers
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are plain function handlers implemented inside
the hypervisor. Note that most fields used by current (external)
ioreq servers are not needed for internal ones, and hence have been
placed inside a struct and packed in a union together with the only
internal-specific field, a function pointer to a handler.

This is required in order to have PCI config accesses forwarded to
external ioreq servers or to internal ones (ie: QEMU emulated devices
vs vPCI passthrough), and is the first step toward allowing
unprivileged domains to use vPCI.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
---
Changes since v2:
 - Drop the vcpu parameter from the handler.
Changes since v1:
 - Do not add an internal field to the ioreq server struct; whether a
   server is internal or external can already be inferred from the id.
 - Add an extra parameter to the internal handler in order to pass
   user-provided opaque data to the handler.
---
 xen/include/asm-x86/hvm/domain.h | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index bcc5621797..56a32e3e35 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -52,21 +52,29 @@ struct hvm_ioreq_vcpu {
 #define MAX_NR_IO_RANGES  256

 struct hvm_ioreq_server {
-    struct domain          *target, *emulator;
-
+    struct domain          *target;
     /* Lock to serialize toolstack modifications */
     spinlock_t             lock;
-
-    struct hvm_ioreq_page  ioreq;
-    struct list_head       ioreq_vcpu_list;
-    struct hvm_ioreq_page  bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t             bufioreq_lock;
-    evtchn_port_t          bufioreq_evtchn;
     struct rangeset        *range[NR_IO_RANGE_TYPES];
     bool                   enabled;
-    uint8_t                bufioreq_handling;
+
+    union {
+        struct {
+            struct domain          *emulator;
+            struct hvm_ioreq_page  ioreq;
+            struct list_head       ioreq_vcpu_list;
+            struct hvm_ioreq_page  bufioreq;
+
+            /* Lock to serialize access to buffered ioreq ring */
+            spinlock_t             bufioreq_lock;
+            evtchn_port_t          bufioreq_evtchn;
+            uint8_t                bufioreq_handling;
+        };
+        struct {
+            void *data;
+            int (*handler)(ioreq_t *, void *);
+        };
+    };
 };

 /*
-- 
2.23.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
From: Roger Pau Monne
To:
Date: Mon, 30 Sep 2019 15:32:32 +0200
Message-ID: <20190930133238.49868-5-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
References: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 04/10] ioreq: add internal ioreq
 initialization support
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Add support for internal ioreq servers to the initialization and
deinitialization routines, prevent some functions from being executed
against internal ioreq servers, and add guards so that only internal
callers can modify internal ioreq servers. External callers (i.e. from
hypercalls) are only allowed to deal with external ioreq servers.

Signed-off-by: Roger Pau Monné
---
Changes since v2:
 - Return early from hvm_ioreq_server_init and hvm_ioreq_server_deinit
   if the server is internal.
 - hvm_destroy_ioreq_server, hvm_get_ioreq_server_info and
   hvm_map_mem_type_to_ioreq_server can only be used against external
   servers, hence add an assert to that effect.
 - Simplify ASSERT in hvm_create_ioreq_server.
Changes since v1:
 - Do not pass an 'internal' parameter to most functions, and instead
   use the id to key whether an ioreq server is internal or external.
 - Prevent enabling an internal server without a handler.
---
 xen/arch/x86/hvm/dm.c            |  17 ++++-
 xen/arch/x86/hvm/ioreq.c         | 119 ++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/domain.h |   5 +-
 xen/include/asm-x86/hvm/ioreq.h  |   8 ++-
 4 files changed, 105 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index c2fca9f729..6a3682e58c 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -417,7 +417,7 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+                                     &data->id, false);
         break;
     }
 
@@ -450,6 +450,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EINVAL;
         if ( data->pad )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
                                               data->start, data->end);
@@ -464,6 +467,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EINVAL;
         if ( data->pad )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
                                                   data->start, data->end);
@@ -481,6 +487,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EOPNOTSUPP;
         if ( !hap_enabled(d) )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         if ( first_gfn == 0 )
             rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
@@ -528,6 +537,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EINVAL;
         if ( data->pad )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
         break;
@@ -541,6 +553,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EINVAL;
         if ( data->pad )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         rc = hvm_destroy_ioreq_server(d, data->id);
         break;
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index ed0142c4e1..cdbd4244a4 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -59,10 +59,11 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
 /*
  * Iterate over all possible ioreq servers.
  *
- * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
- *       This is a semantic that previously existed when ioreq servers
- *       were held in a linked list.
+ * NOTE: The iteration is backwards such that internal and more recently
+ *       created external ioreq servers are favoured in
+ *       hvm_select_ioreq_server().
+ *       This is a semantic that previously existed for external servers when
+ *       ioreq servers were held in a linked list.
  */
 #define FOR_EACH_IOREQ_SERVER(d, id, s) \
     for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
@@ -70,6 +71,12 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
             continue; \
         else
 
+#define FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s) \
+    for ( (id) = MAX_NR_EXTERNAL_IOREQ_SERVERS; (id) != 0; ) \
+        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
+            continue; \
+        else
+
 static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
 {
     shared_iopage_t *p = s->ioreq.va;
@@ -86,7 +93,7 @@ bool hvm_io_pending(struct vcpu *v)
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
@@ -190,7 +197,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return false;
     }
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
@@ -430,7 +437,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
     {
         if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
@@ -688,7 +695,7 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s, bool internal)
 {
     struct hvm_ioreq_vcpu *sv;
 
@@ -697,29 +704,40 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    if ( !internal )
+    {
+        hvm_remove_ioreq_gfn(s, false);
+        hvm_remove_ioreq_gfn(s, true);
 
-    s->enabled = true;
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+            hvm_update_ioreq_evtchn(s, sv);
+    }
+    else if ( !s->handler )
+    {
+        ASSERT_UNREACHABLE();
+        goto done;
+    }
 
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+    s->enabled = true;
 
   done:
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s, bool internal)
 {
     spin_lock(&s->lock);
 
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    if ( !internal )
+    {
+        hvm_add_ioreq_gfn(s, true);
+        hvm_add_ioreq_gfn(s, false);
+    }
 
     s->enabled = false;
 
@@ -736,21 +754,21 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     int rc;
 
     s->target = d;
+    spin_lock_init(&s->lock);
+
+    rc = hvm_ioreq_server_alloc_rangesets(s, id);
+    if ( hvm_ioreq_is_internal(id) || rc )
+        return rc;
 
     get_knownalive_domain(currd);
-    s->emulator = currd;
 
-    spin_lock_init(&s->lock);
+    s->emulator = currd;
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
     spin_lock_init(&s->bufioreq_lock);
 
     s->ioreq.gfn = INVALID_GFN;
     s->bufioreq.gfn = INVALID_GFN;
 
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
-    if ( rc )
-        return rc;
-
     s->bufioreq_handling = bufioreq_handling;
 
     for_each_vcpu ( d, v )
@@ -763,6 +781,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     return 0;
 
  fail_add:
+    ASSERT(!hvm_ioreq_is_internal(id));
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
 
@@ -772,9 +791,15 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s, bool internal)
 {
     ASSERT(!s->enabled);
+
+    hvm_ioreq_server_free_rangesets(s);
+
+    if ( internal )
+        return;
+
     hvm_ioreq_server_remove_all_vcpus(s);
 
     /*
@@ -789,13 +814,11 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
     hvm_ioreq_server_unmap_pages(s);
     hvm_ioreq_server_free_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
-
     put_domain(s->emulator);
 }
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+                            ioservid_t *id, bool internal)
 {
     struct hvm_ioreq_server *s;
     unsigned int i;
@@ -811,7 +834,9 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     domain_pause(d);
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
+    for ( i = (internal ? MAX_NR_EXTERNAL_IOREQ_SERVERS : 0);
+          i < (internal ? MAX_NR_IOREQ_SERVERS : MAX_NR_EXTERNAL_IOREQ_SERVERS);
+          i++ )
     {
         if ( !GET_IOREQ_SERVER(d, i) )
             break;
@@ -821,6 +846,10 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     if ( i >= MAX_NR_IOREQ_SERVERS )
         goto fail;
 
+    ASSERT(i < MAX_NR_EXTERNAL_IOREQ_SERVERS
+           ? !internal
+           : internal && i < MAX_NR_IOREQ_SERVERS);
+
     /*
      * It is safe to call set_ioreq_server() prior to
      * hvm_ioreq_server_init() since the target domain is paused.
@@ -855,6 +884,8 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     struct hvm_ioreq_server *s;
     int rc;
 
+    ASSERT(!hvm_ioreq_is_internal(id));
+
    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
@@ -864,6 +895,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
         goto out;
 
     rc = -EPERM;
+    /* NB: internal servers cannot be destroyed. */
     if ( s->emulator != current->domain )
         goto out;
 
@@ -871,13 +903,13 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     p2m_set_ioreq_server(d, 0, id);
 
-    hvm_ioreq_server_disable(s);
+    hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
     /*
      * It is safe to call hvm_ioreq_server_deinit() prior to
      * set_ioreq_server() since the target domain is paused.
      */
-    hvm_ioreq_server_deinit(s);
+    hvm_ioreq_server_deinit(s, false);
     set_ioreq_server(d, id, NULL);
 
     domain_unpause(d);
@@ -900,6 +932,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     struct hvm_ioreq_server *s;
     int rc;
 
+    ASSERT(!hvm_ioreq_is_internal(id));
+
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
@@ -909,6 +943,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
+    /* NB: don't allow fetching information from internal ioreq servers. */
     if ( s->emulator != current->domain )
         goto out;
 
@@ -956,7 +991,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
         goto out;
 
     rc = hvm_ioreq_server_alloc_pages(s);
@@ -1010,7 +1045,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
         goto out;
 
     switch ( type )
@@ -1062,7 +1097,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
         goto out;
 
     switch ( type )
@@ -1108,6 +1143,8 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     struct hvm_ioreq_server *s;
     int rc;
 
+    ASSERT(!hvm_ioreq_is_internal(id));
+
     if ( type != HVMMEM_ioreq_server )
         return -EINVAL;
 
@@ -1157,15 +1194,15 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
        goto out;
 
     domain_pause(d);
 
     if ( enabled )
-        hvm_ioreq_server_enable(s);
+        hvm_ioreq_server_enable(s, hvm_ioreq_is_internal(id));
     else
-        hvm_ioreq_server_disable(s);
+        hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
     domain_unpause(d);
 
@@ -1184,7 +1221,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
     {
         rc = hvm_ioreq_server_add_vcpu(s, v);
         if ( rc )
@@ -1218,7 +1255,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
         hvm_ioreq_server_remove_vcpu(s, v);
 
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1235,13 +1272,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        hvm_ioreq_server_disable(s);
+        hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
         /*
         * It is safe to call hvm_ioreq_server_deinit() prior to
         * set_ioreq_server() since the target domain is being destroyed.
         */
-        hvm_ioreq_server_deinit(s);
+        hvm_ioreq_server_deinit(s, hvm_ioreq_is_internal(id));
         set_ioreq_server(d, id, NULL);
 
         xfree(s);
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 56a32e3e35..f09ce9b417 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -97,7 +97,10 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };
 
-#define MAX_NR_IOREQ_SERVERS 8
+#define MAX_NR_EXTERNAL_IOREQ_SERVERS 8
+#define MAX_NR_INTERNAL_IOREQ_SERVERS 1
+#define MAX_NR_IOREQ_SERVERS \
+    (MAX_NR_EXTERNAL_IOREQ_SERVERS + MAX_NR_INTERNAL_IOREQ_SERVERS)
 
 struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 65491c48d2..c3917aa74d 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -24,7 +24,7 @@ bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
+                            ioservid_t *id, bool internal);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
@@ -54,6 +54,12 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+static inline bool hvm_ioreq_is_internal(unsigned int id)
+{
+    ASSERT(id < MAX_NR_IOREQ_SERVERS);
+    return id >= MAX_NR_EXTERNAL_IOREQ_SERVERS;
+}
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.23.0
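The id-space split introduced above can be modelled in isolation. The constants mirror the ones this patch adds to domain.h; `ioreq_is_internal` follows `hvm_ioreq_is_internal`, while `alloc_first_free` is a hypothetical stand-in for the allocation loop bounds in `hvm_create_ioreq_server` (no locking, no real server structures):

```c
#include <stdbool.h>

/* Mirrors the constants added to domain.h by this patch. */
#define MAX_NR_EXTERNAL_IOREQ_SERVERS 8
#define MAX_NR_INTERNAL_IOREQ_SERVERS 1
#define MAX_NR_IOREQ_SERVERS \
    (MAX_NR_EXTERNAL_IOREQ_SERVERS + MAX_NR_INTERNAL_IOREQ_SERVERS)

/* The id alone determines the kind: ids below the external limit are
 * external, the remaining ids are internal. */
static bool ioreq_is_internal(unsigned int id)
{
    return id >= MAX_NR_EXTERNAL_IOREQ_SERVERS && id < MAX_NR_IOREQ_SERVERS;
}

/* Toy allocator: scan only the half of the id space matching the
 * requested kind, like the adjusted loop in hvm_create_ioreq_server. */
static int alloc_first_free(const bool *used, bool internal)
{
    unsigned int i = internal ? MAX_NR_EXTERNAL_IOREQ_SERVERS : 0;
    unsigned int end = internal ? MAX_NR_IOREQ_SERVERS
                                : MAX_NR_EXTERNAL_IOREQ_SERVERS;

    for ( ; i < end; i++ )
        if ( !used[i] )
            return (int)i;
    return -1;
}
```

Keying everything off the id is why no `internal` flag needs to be stored in the server structure itself: any id returned by the allocator already encodes which kind of server it names.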
From: Roger Pau Monne
To:
Date: Mon, 30 Sep 2019 15:32:33 +0200
Message-ID: <20190930133238.49868-6-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
References: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 05/10] ioreq: allow dispatching ioreqs to
 internal servers
Cc: Wei Liu, Paul Durrant, Andrew Cooper, Jan Beulich, Roger Pau Monne

Internal ioreq servers will be processed first due to the implementation
of FOR_EACH_IOREQ_SERVER, and ioreqs are dispatched simply by calling
the handler function.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
Changes since v2:
 - Have a single condition for buffered ioreqs.

Changes since v1:
 - Avoid having to iterate twice over the list of ioreq servers since
   now internal servers are always processed first by
   FOR_EACH_IOREQ_SERVER.
 - Obtain ioreq server id using pointer arithmetic.
---
 xen/arch/x86/hvm/ioreq.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index cdbd4244a4..0649b7e02d 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1482,7 +1482,16 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
     ASSERT(s);
 
     if ( buffered )
-        return hvm_send_buffered_ioreq(s, proto_p);
+    {
+        if ( likely(!hvm_ioreq_is_internal(id)) )
+            return hvm_send_buffered_ioreq(s, proto_p);
+
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    if ( hvm_ioreq_is_internal(id) )
+        return s->handler(proto_p, s->data);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;
-- 
2.23.0
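The dispatch split can be sketched standalone: buffered ioreqs only ever go to external servers, and internal servers are invoked synchronously through their handler with no event channel or vCPU wait state. The `server` type and the `X86EMUL_*` values below are illustrative stand-ins, not Xen's definitions:

```c
#include <stdbool.h>

#define X86EMUL_OKAY          0   /* illustrative values, not Xen's */
#define X86EMUL_UNHANDLEABLE  2

typedef struct { unsigned long addr, data; } ioreq_t;

struct server {
    bool internal;
    int (*handler)(ioreq_t *, void *);
    void *data;
};

/* Internal servers short-circuit to a direct in-hypervisor call;
 * internal + buffered is a bug, as in the patch's ASSERT_UNREACHABLE. */
static int send_ioreq(struct server *s, ioreq_t *p, bool buffered)
{
    if ( buffered && s->internal )
        return X86EMUL_UNHANDLEABLE;
    if ( s->internal )
        return s->handler(p, s->data);
    return X86EMUL_OKAY;   /* stand-in for the external (event) path */
}

/* Example handler: records the dispatched data through the opaque
 * pointer, the way an internal server might. */
static int echo_handler(ioreq_t *p, void *data)
{
    *(unsigned long *)data = p->data;
    return X86EMUL_OKAY;
}
```

The design point is that the return value of the internal handler feeds straight back into the emulation loop, so an internal server completes the access in the same context that generated it.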
From: Roger Pau Monne
To:
Date: Mon, 30 Sep 2019 15:32:34 +0200
Message-ID: <20190930133238.49868-7-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
References: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 06/10] ioreq: allow registering internal
 ioreq server handler
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Provide a routine to register the handler for an internal ioreq server.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
Changes since v2:
 - s/hvm_add_ioreq_handler/hvm_set_ioreq_handler.
 - Do not goto the out label if the ioreq server is not internal.

Changes since v1:
 - Allow providing an opaque data parameter to pass to the handler.
 - Allow changing the handler as long as the server is not enabled.
---
 xen/arch/x86/hvm/ioreq.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |  4 ++++
 2 files changed, 36 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 0649b7e02d..57719c607c 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -485,6 +485,38 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+int hvm_set_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(ioreq_t *, void *),
+                          void *data)
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    if ( !hvm_ioreq_is_internal(id) )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( s->enabled )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->handler = handler;
+    s->data = data;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index c3917aa74d..bfd2b9925e 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -54,6 +54,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+int hvm_set_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(ioreq_t *, void *),
+                          void *data);
+
 static inline bool hvm_ioreq_is_internal(unsigned int id)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-- 
2.23.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Thu May 16 17:41:19 2024
From: Roger Pau Monne
Date: Mon, 30 Sep 2019 15:32:35 +0200
Message-ID: <20190930133238.49868-8-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
References: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 07/10] ioreq: allow decoding accesses to MMCFG regions
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Pick up on the infrastructure already added for vPCI and allow ioreq to decode accesses to MMCFG regions registered for a domain. This infrastructure is still only accessible to internal callers, so MMCFG regions can only be registered by the internal domain builder used for PVH dom0.

Note that the vPCI infrastructure to decode and handle accesses to MMCFG regions will be removed in later patches, when vPCI is switched to become an internal ioreq server.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
Changes since v2:
 - Don't prevent mapping MCFG ranges by ioreq servers.
Changes since v1:
 - Remove prototype for destroy_vpci_mmcfg.
 - Keep the code in io.c so PCI accesses to MMCFG regions can be decoded
   before ioreq processing.
---
 xen/arch/x86/hvm/dom0_build.c       |  8 +--
 xen/arch/x86/hvm/hvm.c              |  2 +-
 xen/arch/x86/hvm/io.c               | 79 ++++++++++++-----------------
 xen/arch/x86/hvm/ioreq.c            | 18 +++++--
 xen/arch/x86/physdev.c              |  5 +-
 xen/drivers/passthrough/x86/iommu.c |  2 +-
 xen/include/asm-x86/hvm/io.h        | 29 ++++++++---
 7 files changed, 75 insertions(+), 68 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 831325150b..b30042d8f3 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -1108,10 +1108,10 @@ static void __hwdom_init pvh_setup_mmcfg(struct domain *d)
 
     for ( i = 0; i < pci_mmcfg_config_num; i++ )
     {
-        rc = register_vpci_mmcfg_handler(d, pci_mmcfg_config[i].address,
-                                         pci_mmcfg_config[i].start_bus_number,
-                                         pci_mmcfg_config[i].end_bus_number,
-                                         pci_mmcfg_config[i].pci_segment);
+        rc = hvm_register_mmcfg(d, pci_mmcfg_config[i].address,
+                                pci_mmcfg_config[i].start_bus_number,
+                                pci_mmcfg_config[i].end_bus_number,
+                                pci_mmcfg_config[i].pci_segment);
         if ( rc )
             printk("Unable to setup MMCFG handler at %#lx for segment %u\n",
                    pci_mmcfg_config[i].address,
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c22cb39cf3..5348186c0c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -753,7 +753,7 @@ void hvm_domain_destroy(struct domain *d)
         xfree(ioport);
     }
 
-    destroy_vpci_mmcfg(d);
+    hvm_free_mmcfg(d);
 }
 
 static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a5b0a23f06..3334888136 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -279,6 +279,18 @@ unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
     return CF8_ADDR_LO(cf8) | (addr & 3);
 }
 
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf)
+{
+    addr -= mmcfg->addr;
+    sbdf->bdf = MMCFG_BDF(addr);
+    sbdf->bus += mmcfg->start_bus;
+    sbdf->seg = mmcfg->segment;
+
+    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
+}
+
+
 /* Do some sanity checks. */
 static bool vpci_access_allowed(unsigned int reg, unsigned int len)
 {
@@ -383,50 +395,14 @@ void register_vpci_portio_handler(struct domain *d)
     handler->ops = &vpci_portio_ops;
 }
 
-struct hvm_mmcfg {
-    struct list_head next;
-    paddr_t addr;
-    unsigned int size;
-    uint16_t segment;
-    uint8_t start_bus;
-};
-
 /* Handlers to trap PCI MMCFG config accesses. */
-static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
-                                               paddr_t addr)
-{
-    const struct hvm_mmcfg *mmcfg;
-
-    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
-        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
-            return mmcfg;
-
-    return NULL;
-}
-
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr)
-{
-    return vpci_mmcfg_find(d, addr);
-}
-
-static unsigned int vpci_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
-                                           paddr_t addr, pci_sbdf_t *sbdf)
-{
-    addr -= mmcfg->addr;
-    sbdf->bdf = MMCFG_BDF(addr);
-    sbdf->bus += mmcfg->start_bus;
-    sbdf->seg = mmcfg->segment;
-
-    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
-}
-
 static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr)
 {
     struct domain *d = v->domain;
     bool found;
 
     read_lock(&d->arch.hvm.mmcfg_lock);
-    found = vpci_mmcfg_find(d, addr);
+    found = hvm_is_mmcfg_address(d, addr);
     read_unlock(&d->arch.hvm.mmcfg_lock);
 
     return found;
@@ -443,14 +419,14 @@ static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr,
     *data = ~0ul;
 
     read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = vpci_mmcfg_find(d, addr);
+    mmcfg = hvm_mmcfg_find(d, addr);
     if ( !mmcfg )
     {
         read_unlock(&d->arch.hvm.mmcfg_lock);
         return X86EMUL_RETRY;
     }
 
-    reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
+    reg = hvm_mmcfg_decode_addr(mmcfg, addr, &sbdf);
     read_unlock(&d->arch.hvm.mmcfg_lock);
 
     if ( !vpci_access_allowed(reg, len) ||
@@ -485,14 +461,14 @@ static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr,
     pci_sbdf_t sbdf;
 
     read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = vpci_mmcfg_find(d, addr);
+    mmcfg = hvm_mmcfg_find(d, addr);
     if ( !mmcfg )
     {
         read_unlock(&d->arch.hvm.mmcfg_lock);
         return X86EMUL_RETRY;
     }
 
-    reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
+    reg = hvm_mmcfg_decode_addr(mmcfg, addr, &sbdf);
     read_unlock(&d->arch.hvm.mmcfg_lock);
 
     if ( !vpci_access_allowed(reg, len) ||
@@ -512,9 +488,9 @@ static const struct hvm_mmio_ops vpci_mmcfg_ops = {
     .write = vpci_mmcfg_write,
 };
 
-int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
-                                unsigned int start_bus, unsigned int end_bus,
-                                unsigned int seg)
+int hvm_register_mmcfg(struct domain *d, paddr_t addr,
+                       unsigned int start_bus, unsigned int end_bus,
+                       unsigned int seg)
 {
     struct hvm_mmcfg *mmcfg, *new;
 
@@ -549,7 +525,7 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
         return ret;
     }
 
-    if ( list_empty(&d->arch.hvm.mmcfg_regions) )
+    if ( list_empty(&d->arch.hvm.mmcfg_regions) && has_vpci(d) )
         register_mmio_handler(d, &vpci_mmcfg_ops);
 
     list_add(&new->next, &d->arch.hvm.mmcfg_regions);
@@ -558,7 +534,7 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
     return 0;
 }
 
-void destroy_vpci_mmcfg(struct domain *d)
+void hvm_free_mmcfg(struct domain *d)
 {
     struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
 
@@ -574,6 +550,17 @@ void destroy_vpci_mmcfg(struct domain *d)
     write_unlock(&d->arch.hvm.mmcfg_lock);
 }
 
+const struct hvm_mmcfg *hvm_mmcfg_find(const struct domain *d, paddr_t addr)
+{
+    const struct hvm_mmcfg *mmcfg;
+
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
+            return mmcfg;
+
+    return NULL;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 57719c607c..6b87a55db5 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1326,27 +1326,34 @@ ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
     uint8_t type;
     uint64_t addr;
     unsigned int id;
+    const struct hvm_mmcfg *mmcfg;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return XEN_INVALID_IOSERVID;
 
     cf8 = d->arch.hvm.pci_cf8;
 
-    if ( p->type == IOREQ_TYPE_PIO &&
-         (p->addr & ~3) == 0xcfc &&
-         CF8_ENABLED(cf8) )
+    read_lock(&d->arch.hvm.mmcfg_lock);
+    if ( (p->type == IOREQ_TYPE_PIO &&
+          (p->addr & ~3) == 0xcfc &&
+          CF8_ENABLED(cf8)) ||
+         (p->type == IOREQ_TYPE_COPY &&
+          (mmcfg = hvm_mmcfg_find(d, p->addr)) != NULL) )
     {
         uint32_t x86_fam;
         pci_sbdf_t sbdf;
         unsigned int reg;
 
-        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
+        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
+                                                              &sbdf)
+                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
+                                                                &sbdf);
 
         /* PCI config data cycle */
         type = XEN_DMOP_IO_RANGE_PCI;
         addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
-        if ( CF8_ADDR_HI(cf8) &&
+        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
             (x86_fam = get_cpu_family(
                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
@@ -1365,6 +1372,7 @@ ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
                    XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
         addr = p->addr;
     }
+    read_unlock(&d->arch.hvm.mmcfg_lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 3a3c15890b..f61f66df5f 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -562,9 +562,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
              * For HVM (PVH) domains try to add the newly found MMCFG to the
              * domain.
              */
-            ret = register_vpci_mmcfg_handler(currd, info.address,
-                                              info.start_bus, info.end_bus,
-                                              info.segment);
+            ret = hvm_register_mmcfg(currd, info.address, info.start_bus,
+                                     info.end_bus, info.segment);
         }
 
         break;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 59905629e1..53cdbb45f0 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -152,7 +152,7 @@ static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
      * TODO: runtime added MMCFG regions are not checked to make sure they
      * don't overlap with already mapped regions, thus preventing trapping.
      */
-    if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
+    if ( has_vpci(d) && hvm_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
         return false;
 
     return true;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 7ceb119b64..86ebbd1e7e 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -165,9 +165,19 @@ void stdvga_deinit(struct domain *d);
 
 extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
 
-/* Decode a PCI port IO access into a bus/slot/func/reg. */
+struct hvm_mmcfg {
+    struct list_head next;
+    paddr_t addr;
+    unsigned int size;
+    uint16_t segment;
+    uint8_t start_bus;
+};
+
+/* Decode a PCI port IO or MMCFG access into a bus/slot/func/reg. */
 unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
                                  pci_sbdf_t *sbdf);
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf);
 
 /*
  * HVM port IO handler that performs forwarding of guest IO ports into machine
@@ -178,15 +188,18 @@ void register_g2m_portio_handler(struct domain *d);
 /* HVM port IO handler for vPCI accesses. */
 void register_vpci_portio_handler(struct domain *d);
 
-/* HVM MMIO handler for PCI MMCFG accesses. */
-int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
-                                unsigned int start_bus, unsigned int end_bus,
-                                unsigned int seg);
-/* Destroy tracked MMCFG areas. */
-void destroy_vpci_mmcfg(struct domain *d);
+/* HVM PCI MMCFG regions registration. */
+int hvm_register_mmcfg(struct domain *d, paddr_t addr,
+                       unsigned int start_bus, unsigned int end_bus,
+                       unsigned int seg);
+void hvm_free_mmcfg(struct domain *d);
+const struct hvm_mmcfg *hvm_mmcfg_find(const struct domain *d, paddr_t addr);
 
 /* Check if an address is between a MMCFG region for a domain. */
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr);
+static inline bool hvm_is_mmcfg_address(const struct domain *d, paddr_t addr)
+{
+    return hvm_mmcfg_find(d, addr);
+}
 
 #endif /* __ASM_X86_HVM_IO_H__ */
 
-- 
2.23.0
From nobody Thu May 16 17:41:19 2024

From: Roger Pau Monne
Date: Mon, 30 Sep 2019 15:32:36 +0200
Message-ID: <20190930133238.49868-9-roger.pau@citrix.com>
In-Reply-To:
<20190930133238.49868-1-roger.pau@citrix.com>
References: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 08/10] vpci: register as an internal ioreq server
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Paul Durrant, Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich, Roger Pau Monne

Switch vPCI to become an internal ioreq server, and hence drop all the vPCI-specific decoding and trapping of PCI IO ports and MMCFG regions.

This unifies the vPCI code with the ioreq infrastructure, opening the door for domains to have PCI accesses handled by vPCI and other ioreq servers at the same time.

Signed-off-by: Roger Pau Monné
---
Changes since v2:
 - Remove stray addition of ioreq header to physdev.c.

Changes since v1:
 - Remove prototypes for register_vpci_portio_handler and
   register_vpci_mmcfg_handler.
 - Re-add vpci check in hwdom_iommu_map.
 - Fix test harness.
 - Remove vpci_{read/write} prototypes and make the functions static.
--- tools/tests/vpci/Makefile | 5 +- tools/tests/vpci/emul.h | 4 + xen/arch/x86/hvm/dom0_build.c | 1 + xen/arch/x86/hvm/hvm.c | 5 +- xen/arch/x86/hvm/io.c | 201 ---------------------------------- xen/drivers/vpci/vpci.c | 63 ++++++++++- xen/include/xen/vpci.h | 22 +--- 7 files changed, 79 insertions(+), 222 deletions(-) diff --git a/tools/tests/vpci/Makefile b/tools/tests/vpci/Makefile index 5075bc2be2..c365c4522a 100644 --- a/tools/tests/vpci/Makefile +++ b/tools/tests/vpci/Makefile @@ -25,7 +25,10 @@ install: =20 vpci.c: $(XEN_ROOT)/xen/drivers/vpci/vpci.c # Remove includes and add the test harness header - sed -e '/#include/d' -e '1s/^/#include "emul.h"/' <$< >$@ + sed -e '/#include/d' -e '1s/^/#include "emul.h"/' \ + -e 's/^static uint32_t read/uint32_t vpci_read/' \ + -e 's/^static void write/void vpci_write/' <$< >$@ + =20 list.h: $(XEN_ROOT)/xen/include/xen/list.h vpci.h: $(XEN_ROOT)/xen/include/xen/vpci.h diff --git a/tools/tests/vpci/emul.h b/tools/tests/vpci/emul.h index 2e1d3057c9..5a6494a797 100644 --- a/tools/tests/vpci/emul.h +++ b/tools/tests/vpci/emul.h @@ -125,6 +125,10 @@ typedef union { tx > ty ? 
tx : ty; \ }) =20 +uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size); +void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size, + uint32_t data); + #endif =20 /* diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c index b30042d8f3..dff4d6663c 100644 --- a/xen/arch/x86/hvm/dom0_build.c +++ b/xen/arch/x86/hvm/dom0_build.c @@ -29,6 +29,7 @@ =20 #include #include +#include #include #include #include diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 5348186c0c..c5c0e3fa2c 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -656,10 +656,13 @@ int hvm_domain_initialise(struct domain *d) d->arch.hvm.io_bitmap =3D hvm_io_bitmap; =20 register_g2m_portio_handler(d); - register_vpci_portio_handler(d); =20 hvm_ioreq_init(d); =20 + rc =3D vpci_register_ioreq(d); + if ( rc ) + goto fail1; + hvm_init_guest_time(d); =20 d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] =3D SHUTDOWN_reboot; diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index 3334888136..4c72e68a5b 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -290,204 +290,6 @@ unsigned int hvm_mmcfg_decode_addr(const struct hvm_m= mcfg *mmcfg, return addr & (PCI_CFG_SPACE_EXP_SIZE - 1); } =20 - -/* Do some sanity checks. */ -static bool vpci_access_allowed(unsigned int reg, unsigned int len) -{ - /* Check access size. */ - if ( len !=3D 1 && len !=3D 2 && len !=3D 4 && len !=3D 8 ) - return false; - - /* Check that access is size aligned. */ - if ( (reg & (len - 1)) ) - return false; - - return true; -} - -/* vPCI config space IO ports handlers (0xcf8/0xcfc). 
*/ -static bool vpci_portio_accept(const struct hvm_io_handler *handler, - const ioreq_t *p) -{ - return (p->addr =3D=3D 0xcf8 && p->size =3D=3D 4) || (p->addr & ~3) = =3D=3D 0xcfc; -} - -static int vpci_portio_read(const struct hvm_io_handler *handler, - uint64_t addr, uint32_t size, uint64_t *data) -{ - const struct domain *d =3D current->domain; - unsigned int reg; - pci_sbdf_t sbdf; - uint32_t cf8; - - *data =3D ~(uint64_t)0; - - if ( addr =3D=3D 0xcf8 ) - { - ASSERT(size =3D=3D 4); - *data =3D d->arch.hvm.pci_cf8; - return X86EMUL_OKAY; - } - - ASSERT((addr & ~3) =3D=3D 0xcfc); - cf8 =3D ACCESS_ONCE(d->arch.hvm.pci_cf8); - if ( !CF8_ENABLED(cf8) ) - return X86EMUL_UNHANDLEABLE; - - reg =3D hvm_pci_decode_addr(cf8, addr, &sbdf); - - if ( !vpci_access_allowed(reg, size) ) - return X86EMUL_OKAY; - - *data =3D vpci_read(sbdf, reg, size); - - return X86EMUL_OKAY; -} - -static int vpci_portio_write(const struct hvm_io_handler *handler, - uint64_t addr, uint32_t size, uint64_t data) -{ - struct domain *d =3D current->domain; - unsigned int reg; - pci_sbdf_t sbdf; - uint32_t cf8; - - if ( addr =3D=3D 0xcf8 ) - { - ASSERT(size =3D=3D 4); - d->arch.hvm.pci_cf8 =3D data; - return X86EMUL_OKAY; - } - - ASSERT((addr & ~3) =3D=3D 0xcfc); - cf8 =3D ACCESS_ONCE(d->arch.hvm.pci_cf8); - if ( !CF8_ENABLED(cf8) ) - return X86EMUL_UNHANDLEABLE; - - reg =3D hvm_pci_decode_addr(cf8, addr, &sbdf); - - if ( !vpci_access_allowed(reg, size) ) - return X86EMUL_OKAY; - - vpci_write(sbdf, reg, size, data); - - return X86EMUL_OKAY; -} - -static const struct hvm_io_ops vpci_portio_ops =3D { - .accept =3D vpci_portio_accept, - .read =3D vpci_portio_read, - .write =3D vpci_portio_write, -}; - -void register_vpci_portio_handler(struct domain *d) -{ - struct hvm_io_handler *handler; - - if ( !has_vpci(d) ) - return; - - handler =3D hvm_next_io_handler(d); - if ( !handler ) - return; - - handler->type =3D IOREQ_TYPE_PIO; - handler->ops =3D &vpci_portio_ops; -} - -/* Handlers to trap PCI MMCFG 
config accesses. */ -static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr) -{ - struct domain *d =3D v->domain; - bool found; - - read_lock(&d->arch.hvm.mmcfg_lock); - found =3D hvm_is_mmcfg_address(d, addr); - read_unlock(&d->arch.hvm.mmcfg_lock); - - return found; -} - -static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr, - unsigned int len, unsigned long *data) -{ - struct domain *d =3D v->domain; - const struct hvm_mmcfg *mmcfg; - unsigned int reg; - pci_sbdf_t sbdf; - - *data =3D ~0ul; - - read_lock(&d->arch.hvm.mmcfg_lock); - mmcfg =3D hvm_mmcfg_find(d, addr); - if ( !mmcfg ) - { - read_unlock(&d->arch.hvm.mmcfg_lock); - return X86EMUL_RETRY; - } - - reg =3D hvm_mmcfg_decode_addr(mmcfg, addr, &sbdf); - read_unlock(&d->arch.hvm.mmcfg_lock); - - if ( !vpci_access_allowed(reg, len) || - (reg + len) > PCI_CFG_SPACE_EXP_SIZE ) - return X86EMUL_OKAY; - - /* - * According to the PCIe 3.1A specification: - * - Configuration Reads and Writes must usually be DWORD or smaller - * in size. - * - Because Root Complex implementations are not required to support - * accesses to a RCRB that cross DW boundaries [...] software - * should take care not to cause the generation of such accesses - * when accessing a RCRB unless the Root Complex will support the - * access. - * Xen however supports 8byte accesses by splitting them into two - * 4byte accesses. 
- */ - *data =3D vpci_read(sbdf, reg, min(4u, len)); - if ( len =3D=3D 8 ) - *data |=3D (uint64_t)vpci_read(sbdf, reg + 4, 4) << 32; - - return X86EMUL_OKAY; -} - -static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr, - unsigned int len, unsigned long data) -{ - struct domain *d =3D v->domain; - const struct hvm_mmcfg *mmcfg; - unsigned int reg; - pci_sbdf_t sbdf; - - read_lock(&d->arch.hvm.mmcfg_lock); - mmcfg =3D hvm_mmcfg_find(d, addr); - if ( !mmcfg ) - { - read_unlock(&d->arch.hvm.mmcfg_lock); - return X86EMUL_RETRY; - } - - reg =3D hvm_mmcfg_decode_addr(mmcfg, addr, &sbdf); - read_unlock(&d->arch.hvm.mmcfg_lock); - - if ( !vpci_access_allowed(reg, len) || - (reg + len) > PCI_CFG_SPACE_EXP_SIZE ) - return X86EMUL_OKAY; - - vpci_write(sbdf, reg, min(4u, len), data); - if ( len =3D=3D 8 ) - vpci_write(sbdf, reg + 4, 4, data >> 32); - - return X86EMUL_OKAY; -} - -static const struct hvm_mmio_ops vpci_mmcfg_ops =3D { - .check =3D vpci_mmcfg_accept, - .read =3D vpci_mmcfg_read, - .write =3D vpci_mmcfg_write, -}; - int hvm_register_mmcfg(struct domain *d, paddr_t addr, unsigned int start_bus, unsigned int end_bus, unsigned int seg) @@ -525,9 +327,6 @@ int hvm_register_mmcfg(struct domain *d, paddr_t addr, return ret; } =20 - if ( list_empty(&d->arch.hvm.mmcfg_regions) && has_vpci(d) ) - register_mmio_handler(d, &vpci_mmcfg_ops); - list_add(&new->next, &d->arch.hvm.mmcfg_regions); write_unlock(&d->arch.hvm.mmcfg_lock); =20 diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c index cbd1bac7fc..206fcadbc6 100644 --- a/xen/drivers/vpci/vpci.c +++ b/xen/drivers/vpci/vpci.c @@ -20,6 +20,8 @@ #include #include =20 +#include + /* Internal struct to store the emulated PCI registers. 
 */
 struct vpci_register {
     vpci_read_t *read;
@@ -302,7 +304,7 @@ static uint32_t merge_result(uint32_t data, uint32_t new, unsigned int size,
     return (data & ~(mask << (offset * 8))) | ((new & mask) << (offset * 8));
 }
 
-uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
+static uint32_t read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
 {
     const struct domain *d = current->domain;
     const struct pci_dev *pdev;
@@ -404,8 +406,8 @@ static void vpci_write_helper(const struct pci_dev *pdev,
                               r->private);
 }
 
-void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
-                uint32_t data)
+static void write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
+                  uint32_t data)
 {
     const struct domain *d = current->domain;
     const struct pci_dev *pdev;
@@ -478,6 +480,61 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
     spin_unlock(&pdev->vpci->lock);
 }
 
+#ifdef __XEN__
+static int ioreq_handler(ioreq_t *req, void *data)
+{
+    pci_sbdf_t sbdf;
+
+    /*
+     * NB: certain requests of type different than PCI are broadcasted to all
+     * registered ioreq servers, ignored those.
+     */
+    if ( req->type != IOREQ_TYPE_PCI_CONFIG || req->data_is_ptr )
+        return X86EMUL_UNHANDLEABLE;
+
+    sbdf.sbdf = req->addr >> 32;
+
+    if ( req->dir )
+        req->data = read(sbdf, req->addr, req->size);
+    else
+        write(sbdf, req->addr, req->size, req->data);
+
+    return X86EMUL_OKAY;
+}
+
+int vpci_register_ioreq(struct domain *d)
+{
+    ioservid_t id;
+    int rc;
+
+    if ( !has_vpci(d) )
+        return 0;
+
+    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
+    if ( rc )
+        return rc;
+
+    rc = hvm_set_ioreq_handler(d, id, ioreq_handler, NULL);
+    if ( rc )
+        return rc;
+
+    if ( is_hardware_domain(d) )
+    {
+        /* Handle all devices in vpci. */
+        rc = hvm_map_io_range_to_ioreq_server(d, id, XEN_DMOP_IO_RANGE_PCI,
+                                              0, ~(uint64_t)0);
+        if ( rc )
+            return rc;
+    }
+
+    rc = hvm_set_ioreq_server_state(d, id, true);
+    if ( rc )
+        return rc;
+
+    return rc;
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 5295d4c990..4e9591c020 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -23,6 +23,9 @@ typedef int vpci_register_init_t(struct pci_dev *dev);
     static vpci_register_init_t *const x##_entry  \
         __used_section(".data.vpci." p) = x
 
+/* Register vPCI handler with ioreq. */
+int vpci_register_ioreq(struct domain *d);
+
 /* Add vPCI handlers to device. */
 int __must_check vpci_add_handlers(struct pci_dev *dev);
 
@@ -38,11 +41,6 @@ int __must_check vpci_add_register(struct vpci *vpci,
 int __must_check vpci_remove_register(struct vpci *vpci, unsigned int offset,
                                       unsigned int size);
 
-/* Generic read/write handlers for the PCI config space. */
-uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size);
-void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
-                uint32_t data);
-
 /* Passthrough handlers.
 */
 uint32_t vpci_hw_read16(const struct pci_dev *pdev, unsigned int reg,
                         void *data);
@@ -219,20 +217,12 @@ static inline int vpci_add_handlers(struct pci_dev *pdev)
     return 0;
 }
 
-static inline void vpci_dump_msi(void) { }
-
-static inline uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg,
-                                 unsigned int size)
+static inline int vpci_register_ioreq(struct domain *d)
 {
-    ASSERT_UNREACHABLE();
-    return ~(uint32_t)0;
+    return 0;
 }
 
-static inline void vpci_write(pci_sbdf_t sbdf, unsigned int reg,
-                              unsigned int size, uint32_t data)
-{
-    ASSERT_UNREACHABLE();
-}
+static inline void vpci_dump_msi(void) { }
 
 static inline bool vpci_process_pending(struct vcpu *v)
 {
--
2.23.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 30 Sep 2019 15:32:37 +0200
Message-ID: <20190930133238.49868-10-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 09/10] ioreq: split the code to detect PCI
 config space accesses
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Place the code that converts a PIO/COPY ioreq into a PCI_CONFIG one into
a separate function, and adjust the code to make use of this newly
introduced function.

No functional change intended.

Signed-off-by: Roger Pau Monné
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/ioreq.c | 111 +++++++++++++++++++++++----------------
 1 file changed, 67 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 6b87a55db5..f3684fc648 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -183,6 +183,54 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
+static void convert_pci_ioreq(struct domain *d, ioreq_t *p)
+{
+    const struct hvm_mmcfg *mmcfg;
+    uint32_t cf8 = d->arch.hvm.pci_cf8;
+
+    if ( p->type != IOREQ_TYPE_PIO && p->type != IOREQ_TYPE_COPY )
+    {
+        ASSERT_UNREACHABLE();
+        return;
+    }
+
+    read_lock(&d->arch.hvm.mmcfg_lock);
+    if ( (p->type == IOREQ_TYPE_PIO &&
+          (p->addr & ~3) == 0xcfc &&
+          CF8_ENABLED(cf8)) ||
+         (p->type == IOREQ_TYPE_COPY &&
+          (mmcfg = hvm_mmcfg_find(d, p->addr)) != NULL) )
+    {
+        uint32_t x86_fam;
+        pci_sbdf_t sbdf;
+        unsigned int reg;
+
+        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
+                                                              &sbdf)
+                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
+                                                                &sbdf);
+
+        /* PCI config data cycle */
+        p->addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        /* AMD extended configuration space access?
 */
+        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
+             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
+             (x86_fam = get_cpu_family(
+                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
+             x86_fam < 0x17 )
+        {
+            uint64_t msr_val;
+
+            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
+                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
+                p->addr |= CF8_ADDR_HI(cf8);
+        }
+        p->type = IOREQ_TYPE_PCI_CONFIG;
+
+    }
+    read_unlock(&d->arch.hvm.mmcfg_lock);
+}
+
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
@@ -1322,57 +1370,36 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
 {
     struct hvm_ioreq_server *s;
-    uint32_t cf8;
     uint8_t type;
-    uint64_t addr;
     unsigned int id;
-    const struct hvm_mmcfg *mmcfg;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return XEN_INVALID_IOSERVID;
 
-    cf8 = d->arch.hvm.pci_cf8;
+    /*
+     * Check and convert the PIO/MMIO ioreq to a PCI config space
+     * access.
+     */
+    convert_pci_ioreq(d, p);
 
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    if ( (p->type == IOREQ_TYPE_PIO &&
-          (p->addr & ~3) == 0xcfc &&
-          CF8_ENABLED(cf8)) ||
-         (p->type == IOREQ_TYPE_COPY &&
-          (mmcfg = hvm_mmcfg_find(d, p->addr)) != NULL) )
+    switch ( p->type )
     {
-        uint32_t x86_fam;
-        pci_sbdf_t sbdf;
-        unsigned int reg;
+    case IOREQ_TYPE_PIO:
+        type = XEN_DMOP_IO_RANGE_PORT;
+        break;
 
-        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
-                                                              &sbdf)
-                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
-                                                                &sbdf);
+    case IOREQ_TYPE_COPY:
+        type = XEN_DMOP_IO_RANGE_MEMORY;
+        break;
 
-        /* PCI config data cycle */
+    case IOREQ_TYPE_PCI_CONFIG:
         type = XEN_DMOP_IO_RANGE_PCI;
-        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
-        /* AMD extended configuration space access? */
-        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
-             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
-             (x86_fam = get_cpu_family(
-                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
-             x86_fam < 0x17 )
-        {
-            uint64_t msr_val;
+        break;
 
-            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
-                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
-                addr |= CF8_ADDR_HI(cf8);
-        }
-    }
-    else
-    {
-        type = (p->type == IOREQ_TYPE_PIO) ?
-                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
-        addr = p->addr;
+    default:
+        ASSERT_UNREACHABLE();
+        return XEN_INVALID_IOSERVID;
     }
-    read_unlock(&d->arch.hvm.mmcfg_lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -1388,7 +1415,7 @@ ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
         unsigned long start, end;
 
         case XEN_DMOP_IO_RANGE_PORT:
-            start = addr;
+            start = p->addr;
             end = start + p->size - 1;
             if ( rangeset_contains_range(r, start, end) )
                 return id;
@@ -1405,12 +1432,8 @@ ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
             break;
 
         case XEN_DMOP_IO_RANGE_PCI:
-            if ( rangeset_contains_singleton(r, addr >> 32) )
-            {
-                p->type = IOREQ_TYPE_PCI_CONFIG;
-                p->addr = addr;
+            if ( rangeset_contains_singleton(r, p->addr >> 32) )
                 return id;
-            }
 
             break;
         }
--
2.23.0
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 30 Sep 2019 15:32:38 +0200
Message-ID: <20190930133238.49868-11-roger.pau@citrix.com>
In-Reply-To: <20190930133238.49868-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3 10/10] ioreq: provide support for
 long-running operations...
Cc: Stefano Stabellini, Wei Liu, Paul Durrant, George Dunlap,
 Andrew Cooper, Konrad Rzeszutek Wilk, Ian Jackson, Tim Deegan,
 Julien Grall, Jan Beulich, Roger Pau Monne

...and switch vPCI to use this infrastructure for long-running physmap
modification operations.

This allows getting rid of the vPCI-specific modifications done to
handle_hvm_io_completion and generalizing the support for long-running
operations to other internal ioreq servers. Such support is implemented
as a specific handler that can be registered by internal ioreq servers
and that will be called to check for pending work. Returning true from
this handler will prevent the vcpu from running until the handler
returns false.

Signed-off-by: Roger Pau Monné
Reviewed-by: Jan Beulich
---
Changes since v2:
 - Remove extra newline in vpci_process_pending.
 - Continue early in handle_hvm_io_completion in order to avoid one
   extra level of indentation.
 - Switch setting the ioreq state to a ternary conditional operator.
---
 xen/arch/x86/hvm/ioreq.c       | 55 ++++++++++++++++++++++++++-----
 xen/drivers/vpci/header.c      | 60 ++++++++++++++++++----------------
 xen/drivers/vpci/vpci.c        |  9 ++++-
 xen/include/asm-x86/hvm/vcpu.h |  3 +-
 xen/include/xen/vpci.h         |  6 ----
 5 files changed, 89 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f3684fc648..78322dfa67 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -239,16 +239,48 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
-    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
+    FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( hvm_ioreq_is_internal(id) )
+        {
+            ioreq_t req = vio->io_req;
+
+            if ( vio->io_req.state != STATE_IOREQ_INPROCESS )
+                continue;
+
+            /*
+             * Check and convert the PIO/MMIO ioreq to a PCI config space
+             * access.
+             */
+            convert_pci_ioreq(d, &req);
+
+            if ( s->handler(&req, s->data) == X86EMUL_RETRY )
+            {
+                /*
+                 * Need to raise a scheduler irq in order to prevent the
+                 * guest vcpu from resuming execution.
+                 *
+                 * Note this is not required for external ioreq operations
+                 * because in that case the vcpu is marked as blocked, but
+                 * this cannot be done for long-running internal
+                 * operations, since it would prevent the vcpu from being
+                 * scheduled and thus the long running operation from
+                 * finishing.
+                 */
+                raise_softirq(SCHEDULE_SOFTIRQ);
+                return false;
+            }
+
+            /* Finished processing the ioreq. */
+            vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req)
+                                ? STATE_IORESP_READY
+                                : STATE_IOREQ_NONE;
+
+            continue;
+        }
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -1554,7 +1586,14 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
     }
 
     if ( hvm_ioreq_is_internal(id) )
-        return s->handler(proto_p, s->data);
+    {
+        int rc = s->handler(proto_p, s->data);
+
+        if ( rc == X86EMUL_RETRY )
+            curr->arch.hvm.hvm_io.io_req.state = STATE_IOREQ_INPROCESS;
+
+        return rc;
+    }
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 3c794f486d..9360d19a50 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -129,37 +129,41 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
 
 bool vpci_process_pending(struct vcpu *v)
 {
-    if ( v->vpci.mem )
+    struct map_data data = {
+        .d = v->domain,
+        .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
+    };
+    int rc;
+
+    if ( !v->vpci.mem )
     {
-        struct map_data data = {
-            .d = v->domain,
-            .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
-        };
-        int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
-
-        if ( rc == -ERESTART )
-            return true;
-
-        spin_lock(&v->vpci.pdev->vpci->lock);
-        /* Disable memory decoding unconditionally on failure. */
-        modify_decoding(v->vpci.pdev,
-                        rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
-                        !rc && v->vpci.rom_only);
-        spin_unlock(&v->vpci.pdev->vpci->lock);
-
-        rangeset_destroy(v->vpci.mem);
-        v->vpci.mem = NULL;
-        if ( rc )
-            /*
-             * FIXME: in case of failure remove the device from the domain.
-             * Note that there might still be leftover mappings. While this is
-             * safe for Dom0, for DomUs the domain will likely need to be
-             * killed in order to avoid leaking stale p2m mappings on
-             * failure.
-             */
-            vpci_remove_device(v->vpci.pdev);
+        ASSERT_UNREACHABLE();
+        return false;
     }
 
+    rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
+    if ( rc == -ERESTART )
+        return true;
+
+    spin_lock(&v->vpci.pdev->vpci->lock);
+    /* Disable memory decoding unconditionally on failure. */
+    modify_decoding(v->vpci.pdev,
+                    rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
+                    !rc && v->vpci.rom_only);
+    spin_unlock(&v->vpci.pdev->vpci->lock);
+
+    rangeset_destroy(v->vpci.mem);
+    v->vpci.mem = NULL;
+    if ( rc )
+        /*
+         * FIXME: in case of failure remove the device from the domain.
+         * Note that there might still be leftover mappings. While this is
+         * safe for Dom0, for DomUs the domain will likely need to be
+         * killed in order to avoid leaking stale p2m mappings on
+         * failure.
+         */
+        vpci_remove_device(v->vpci.pdev);
+
     return false;
 }
 
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 206fcadbc6..0cc8543eb8 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -484,6 +484,7 @@ static void write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
 static int ioreq_handler(ioreq_t *req, void *data)
 {
     pci_sbdf_t sbdf;
+    struct vcpu *curr = current;
 
     /*
      * NB: certain requests of type different than PCI are broadcasted to all
@@ -492,6 +493,12 @@ static int ioreq_handler(ioreq_t *req, void *data)
     if ( req->type != IOREQ_TYPE_PCI_CONFIG || req->data_is_ptr )
         return X86EMUL_UNHANDLEABLE;
 
+    if ( curr->vpci.mem )
+    {
+        ASSERT(req->state == STATE_IOREQ_INPROCESS);
+        return vpci_process_pending(curr) ? X86EMUL_RETRY : X86EMUL_OKAY;
+    }
+
     sbdf.sbdf = req->addr >> 32;
 
     if ( req->dir )
@@ -499,7 +506,7 @@ static int ioreq_handler(ioreq_t *req, void *data)
     else
         write(sbdf, req->addr, req->size, req->data);
 
-    return X86EMUL_OKAY;
+    return curr->vpci.mem ? X86EMUL_RETRY : X86EMUL_OKAY;
 }
 
 int vpci_register_ioreq(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 38f5c2bb9b..4563746466 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -92,7 +92,8 @@ struct hvm_vcpu_io {
 
 static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
 {
-    return ioreq->state == STATE_IOREQ_READY &&
+    return (ioreq->state == STATE_IOREQ_READY ||
+            ioreq->state == STATE_IOREQ_INPROCESS) &&
            !ioreq->data_is_ptr &&
            (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
 }
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 4e9591c020..bad406b21d 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -223,12 +223,6 @@ static inline int vpci_register_ioreq(struct domain *d)
 }
 
 static inline void vpci_dump_msi(void) { }
-
-static inline bool vpci_process_pending(struct vcpu *v)
-{
-    ASSERT_UNREACHABLE();
-    return false;
-}
 #endif
 
 #endif
--
2.23.0