From nobody Fri May 3 11:52:38 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1567527371; cv=none; d=zoho.com; s=zohoarc; b=EpbzXRdnz7hFrb379TOahashWgnqOJpuTyu6d0DPO4EKfDPIwKK2YksdGPVyC70+xqAtJFKQWGt4EBwl1xYfJv438Z46lyibYPvHn2O7SOJ36vLY2iZzDlZmTPEoWrN9+TWUzp/rz0LHke6cPVDXGIOZ+j0HisoKiHrBDSBhF0s= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com; s=zohoarc; t=1567527371; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To:ARC-Authentication-Results; bh=pDM4TmhS/apFXLkSh8AjCf+luD6ySJQjQ/lPu5w1wMc=; b=WYQTjVks03pM+0AB90aLCdzgH22rIWbqqnwX3y4/6cN6u8pXA4Mp1IfoiwCTbB9oB+giA+bczYg5IGWxlIj9/OP5H8AXJQ6kDrxOvBX2lUhZe01dy7tGRBRt7lqPzWC2JTKOIbpOPM+zN4RDqGxoblPnNUtn2RXq4sIUsr9NF40= ARC-Authentication-Results: i=1; mx.zoho.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1567527371754229.67378196861648; Tue, 3 Sep 2019 09:16:11 -0700 (PDT) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1i5BSV-00023O-Hu; Tue, 03 Sep 2019 16:15:07 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1i5BST-00022A-VR for xen-devel@lists.xenproject.org; Tue, 03 Sep 2019 16:15:06 +0000 Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168]) by us1-amaz-eas2.inumbo.com (Halon) with 
ESMTPS id fd20025c-ce65-11e9-ab97-12813bfff9fa; Tue, 03 Sep 2019 16:15:05 +0000 (UTC) X-Inumbo-ID: fd20025c-ce65-11e9-ab97-12813bfff9fa DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=citrix.com; s=securemail; t=1567527304; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=LhyPhuBUqiy/YkI6TCkLU+8pFIgYCZTFltu1tHlrkCs=; b=UZt/BWh/Ai9Vt99C2ofCiZrugJwL8WovE0unQIs8J1cCzTxUGkGfKa7o g4U6Un0HlD2UtjaTXrnsNn5QVnDjJlTGkiKbA3puQ2lUJUYn7CCHg79vh hP9jgT7Pt7+ty0ctL/DcQ/4LFJRPMKQ7XFIRcq06FYq0yalpbsD4R8Utf E=; Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=roger.pau@citrix.com; spf=Pass smtp.mailfrom=roger.pau@citrix.com; spf=None smtp.helo=postmaster@mail.citrix.com Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender authenticity information available from domain of roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com; envelope-from="roger.pau@citrix.com"; x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of roger.pau@citrix.com designates 162.221.158.21 as permitted sender) identity=mailfrom; client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com; envelope-from="roger.pau@citrix.com"; x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible; x-record-type="v=spf1"; x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83 ~all" Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender authenticity information available from domain of 
postmaster@mail.citrix.com) identity=helo; client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com; envelope-from="roger.pau@citrix.com"; x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible
From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:18 +0200
Message-ID: <20190903161428.7159-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v2 01/11] ioreq: fix hvm_all_ioreq_servers_add_vcpu
 fail path cleanup
List-Id: Xen developer discussion
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Sender: "Xen-devel"

The loop in FOR_EACH_IOREQ_SERVER iterates backwards, hence the cleanup
on failure needs to be done forwards.

Fixes: 97a5a3e30161 ('x86/hvm/ioreq: maintain an array of ioreq servers rather than a list')
Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
Changes since v1:
 - New in this version.
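The direction of the cleanup matters because FOR_EACH_IOREQ_SERVER walks the server array from the highest id down. A minimal standalone C sketch of the corrected pattern — the `MAX_SERVERS` value and the stub add/remove helpers are hypothetical, not Xen's actual API:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_SERVERS 8            /* stand-in for MAX_NR_IOREQ_SERVERS */

static bool added[MAX_SERVERS];

/* Stub: pretend per-server vcpu setup fails for server id 2. */
static int add_vcpu(unsigned int id)
{
    if ( id == 2 )
        return -1;
    added[id] = true;
    return 0;
}

static void remove_vcpu(unsigned int id)
{
    added[id] = false;
}

/*
 * Mirrors the fixed logic: the setup loop walks ids backwards
 * (MAX_SERVERS-1 .. 0), so on failure the already-initialised servers
 * are the ones ABOVE the failing id, and cleanup must walk forwards.
 */
static int all_servers_add_vcpu(void)
{
    unsigned int id;

    for ( id = MAX_SERVERS; id-- != 0; )
        if ( add_vcpu(id) )
            goto fail;
    return 0;

 fail:
    while ( ++id != MAX_SERVERS ) /* cleans ids id+1 .. MAX_SERVERS-1 */
        remove_vcpu(id);
    return -1;
}
```

With the old `while ( id-- != 0 )` cleanup, the ids below the failure point — which were never set up — would have been torn down instead, and the ids above it leaked.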
---
 xen/arch/x86/hvm/ioreq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a79cabb680..692b710b02 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1195,7 +1195,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     return 0;

  fail:
-    while ( id-- != 0 )
+    while ( id++ != MAX_NR_IOREQ_SERVERS )
     {
         s = GET_IOREQ_SERVER(d, id);

--
2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Fri May 3 11:52:38 2024
Delivered-To: importer@patchew.org
Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by
 mx.zohomail.com with SMTPS id 156752734952142.58924295848885; Tue, 3 Sep 2019 09:15:49 -0700 (PDT)
From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:19 +0200
Message-ID: <20190903161428.7159-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v2 02/11] ioreq: terminate cf8 handling at
 hypervisor level
List-Id: Xen developer discussion
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Sender: "Xen-devel"

Do not forward accesses to cf8 to external emulators: decoding of PCI
accesses is handled by Xen, and emulators can request handling of config
space accesses of devices using the provided ioreq interface.

Fully terminate cf8 accesses at the hypervisor level by improving the
existing hvm_access_cf8 helper to also handle register reads, and always
return X86EMUL_OKAY in order to terminate the emulation.

Also return an error to ioreq servers attempting to map the PCI IO ports
(0xcf8-0xcfc), as those are handled by Xen.

Note that without this change, in the absence of an external emulator
catching accesses to cf8, read requests to the register would misbehave,
as the internal ioreq handler did not handle those.

Signed-off-by: Roger Pau Monné
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/ioreq.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 692b710b02..69652e1080 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1015,6 +1015,12 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     switch ( type )
     {
     case XEN_DMOP_IO_RANGE_PORT:
+        rc = -EINVAL;
+        /* PCI config space accesses are handled internally. */
+        if ( start <= 0xcf8 + 8 && 0xcf8 <= end )
+            goto out;
+        else
+            /* fallthrough. */
     case XEN_DMOP_IO_RANGE_MEMORY:
     case XEN_DMOP_IO_RANGE_PCI:
         r = s->range[type];
@@ -1518,11 +1524,15 @@ static int hvm_access_cf8(
 {
     struct domain *d = current->domain;

-    if ( dir == IOREQ_WRITE && bytes == 4 )
+    if ( bytes != 4 )
+        return X86EMUL_OKAY;
+
+    if ( dir == IOREQ_WRITE )
         d->arch.hvm.pci_cf8 = *val;
+    else
+        *val = d->arch.hvm.pci_cf8;

-    /* We always need to fall through to the catch all emulator */
-    return X86EMUL_UNHANDLEABLE;
+    return X86EMUL_OKAY;
 }

 void hvm_ioreq_init(struct domain *d)
--
2.22.0

From nobody Fri May 3 11:52:38 2024
Delivered-To: importer@patchew.org
Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120])
 by mx.zohomail.com with SMTPS id 1567527368741428.7608376322662; Tue, 3 Sep 2019 09:16:08 -0700 (PDT)
From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:20 +0200
Message-ID: <20190903161428.7159-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v2 03/11] ioreq: switch selection and forwarding
 to use ioservid_t
List-Id: Xen developer discussion
Cc: Stefano
Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
 Jan Beulich, Roger Pau Monne
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Sender: "Xen-devel"

hvm_select_ioreq_server and hvm_send_ioreq were both using
hvm_ioreq_server directly; switch to using ioservid_t in order to select
and forward ioreqs.

This is a preparatory change, since future patches will use the ioreq
server id in order to differentiate between internal and external ioreq
servers.

Signed-off-by: Roger Pau Monné
Acked-by: Jan Beulich
Reviewed-by: Paul Durrant
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/dm.c           |  2 +-
 xen/arch/x86/hvm/emulate.c      | 14 +++++++-------
 xen/arch/x86/hvm/ioreq.c        | 24 ++++++++++++------------
 xen/arch/x86/hvm/stdvga.c       |  8 ++++----
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++----------
 xen/include/asm-x86/hvm/ioreq.h |  5 ++---
 xen/include/asm-x86/p2m.h       |  9 ++++-----
 xen/include/public/hvm/dm_op.h  |  1 +
 8 files changed, 41 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d6d0e8be89..c2fca9f729 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -263,7 +263,7 @@ static int set_mem_type(struct domain *d,
             return -EOPNOTSUPP;

         /* Do not change to HVMMEM_ioreq_server if no ioreq server mapped. */
-        if ( !p2m_get_ioreq_server(d, &flags) )
+        if ( p2m_get_ioreq_server(d, &flags) == XEN_INVALID_IOSERVID )
             return -EINVAL;
     }

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index d75d3e6fd6..51d2fcba2d 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -254,7 +254,7 @@ static int hvmemul_do_io(
      * However, there's no cheap approach to avoid above situations in xen,
      * so the device model side needs to check the incoming ioreq event.
      */
-    struct hvm_ioreq_server *s = NULL;
+    ioservid_t id = XEN_INVALID_IOSERVID;
     p2m_type_t p2mt = p2m_invalid;

     if ( is_mmio )
@@ -267,9 +267,9 @@ static int hvmemul_do_io(
         {
             unsigned int flags;

-            s = p2m_get_ioreq_server(currd, &flags);
+            id = p2m_get_ioreq_server(currd, &flags);

-            if ( s == NULL )
+            if ( id == XEN_INVALID_IOSERVID )
             {
                 rc = X86EMUL_RETRY;
                 vio->io_req.state = STATE_IOREQ_NONE;
@@ -289,18 +289,18 @@ static int hvmemul_do_io(
         }
     }

-    if ( !s )
-        s = hvm_select_ioreq_server(currd, &p);
+    if ( id == XEN_INVALID_IOSERVID )
+        id = hvm_select_ioreq_server(currd, &p);

     /* If there is no suitable backing DM, just ignore accesses */
-    if ( !s )
+    if ( id == XEN_INVALID_IOSERVID )
     {
         rc = hvm_process_io_intercept(&null_handler, &p);
         vio->io_req.state = STATE_IOREQ_NONE;
     }
     else
     {
-        rc = hvm_send_ioreq(s, &p, 0);
+        rc = hvm_send_ioreq(id, &p, 0);
         if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
             vio->io_req.state = STATE_IOREQ_NONE;
         else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 69652e1080..95492bc111 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -39,6 +39,7 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
     ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    BUILD_BUG_ON(MAX_NR_IOREQ_SERVERS >= XEN_INVALID_IOSERVID);

     d->arch.hvm.ioreq_server.server[id] = s;
 }
@@ -868,7 +869,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)

     domain_pause(d);

-    p2m_set_ioreq_server(d, 0, s);
+    p2m_set_ioreq_server(d, 0, id);

     hvm_ioreq_server_disable(s);

@@ -1131,7 +1132,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;

-    rc = p2m_set_ioreq_server(d, flags, s);
+    rc = p2m_set_ioreq_server(d, flags, id);

 out:
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1255,8 +1256,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }

-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
 {
     struct hvm_ioreq_server *s;
     uint32_t cf8;
@@ -1265,7 +1265,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     unsigned int id;

     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
-        return NULL;
+        return XEN_INVALID_IOSERVID;

     cf8 = d->arch.hvm.pci_cf8;

@@ -1320,7 +1320,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             start = addr;
             end = start + p->size - 1;
             if ( rangeset_contains_range(r, start, end) )
-                return s;
+                return id;

             break;

@@ -1329,7 +1329,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             end = hvm_mmio_last_byte(p);

             if ( rangeset_contains_range(r, start, end) )
-                return s;
+                return id;

             break;

@@ -1338,14 +1338,14 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             {
                 p->type = IOREQ_TYPE_PCI_CONFIG;
                 p->addr = addr;
-                return s;
+                return id;
             }

             break;
         }
     }

-    return NULL;
+    return XEN_INVALID_IOSERVID;
 }

 static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
@@ -1441,12 +1441,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     return X86EMUL_OKAY;
 }

-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered)
+int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
     struct hvm_ioreq_vcpu *sv;
+    struct hvm_ioreq_server *s = get_ioreq_server(d, id);

     ASSERT(s);

@@ -1512,7 +1512,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;

-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(id, p, buffered) == X86EMUL_UNHANDLEABLE )
             failed++;
     }

diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index bd398dbb1b..a689269712 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
         .dir = IOREQ_WRITE,
         .data = data,
     };
-    struct hvm_ioreq_server *srv;
+    ioservid_t id;

     if ( !stdvga_cache_is_enabled(s) || !s->stdvga )
         goto done;
@@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
     }

 done:
-    srv = hvm_select_ioreq_server(current->domain, &p);
-    if ( !srv )
+    id = hvm_select_ioreq_server(current->domain, &p);
+    if ( id == XEN_INVALID_IOSERVID )
         return X86EMUL_UNHANDLEABLE;

-    return hvm_send_ioreq(srv, &p, 1);
+    return hvm_send_ioreq(id, &p, 1);
 }

 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8a5229ee21..43849cbbd9 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -102,6 +102,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m_pt_init(p2m);

     spin_lock_init(&p2m->ioreq.lock);
+    p2m->ioreq.server = XEN_INVALID_IOSERVID;

     return ret;
 }
@@ -361,7 +362,7 @@ void p2m_memory_type_changed(struct domain *d)

 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
-                         struct hvm_ioreq_server *s)
+                         ioservid_t id)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -376,16 +377,16 @@ int p2m_set_ioreq_server(struct domain *d,
     if ( flags == 0 )
     {
         rc = -EINVAL;
-        if ( p2m->ioreq.server != s )
+        if ( p2m->ioreq.server != id )
             goto out;

-        p2m->ioreq.server = NULL;
+        p2m->ioreq.server = XEN_INVALID_IOSERVID;
         p2m->ioreq.flags = 0;
     }
     else
     {
         rc = -EBUSY;
-        if ( p2m->ioreq.server != NULL )
+        if ( p2m->ioreq.server != XEN_INVALID_IOSERVID )
             goto out;

         /*
@@ -397,7 +398,7 @@ int p2m_set_ioreq_server(struct domain *d,
         if ( read_atomic(&p2m->ioreq.entry_count) )
             goto out;

-        p2m->ioreq.server = s;
+        p2m->ioreq.server = id;
         p2m->ioreq.flags = flags;
     }

@@ -409,19 +410,18 @@ int p2m_set_ioreq_server(struct domain *d,
     return rc;
 }

-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags)
+ioservid_t p2m_get_ioreq_server(struct domain *d, unsigned int *flags)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct hvm_ioreq_server *s;
+    ioservid_t id;

     spin_lock(&p2m->ioreq.lock);

-    s = p2m->ioreq.server;
+    id = p2m->ioreq.server;
     *flags = p2m->ioreq.flags;

     spin_unlock(&p2m->ioreq.lock);
-    return s;
+    return id;
 }

 void p2m_enable_hardware_log_dirty(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e912f..65491c48d2 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -47,9 +47,8 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);

-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p);
+int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p,
                    bool buffered);
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);

diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 94285db1b4..99a1dab311 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -354,7 +354,7 @@ struct p2m_domain {
      * ioreq server who's responsible for the emulation of
      * gfns with specific p2m type(for now, p2m_ioreq_server).
      */
-    struct hvm_ioreq_server *server;
+    ioservid_t server;
     /*
      * flags specifies whether read, write or both operations
      * are to be emulated by an ioreq server.
@@ -819,7 +819,7 @@ static inline p2m_type_t p2m_recalc_type_range(bool recalc, p2m_type_t t,
     if ( !recalc || !p2m_is_changeable(t) )
         return t;

-    if ( t == p2m_ioreq_server && p2m->ioreq.server != NULL )
+    if ( t == p2m_ioreq_server && p2m->ioreq.server != XEN_INVALID_IOSERVID )
         return t;

     return p2m_is_logdirty_range(p2m, gfn_start, gfn_end) ? p2m_ram_logdirty
@@ -938,9 +938,8 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 }

 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
-                         struct hvm_ioreq_server *s);
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags);
+                         ioservid_t id);
+ioservid_t p2m_get_ioreq_server(struct domain *d, unsigned int *flags);

 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
                                    p2m_type_t ot, mfn_t nfn, mfn_t ofn,
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index d3b554d019..8725cc20d3 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -54,6 +54,7 @@
  */

 typedef uint16_t ioservid_t;
+#define XEN_INVALID_IOSERVID 0xffff

 /*
  * XEN_DMOP_create_ioreq_server: Instantiate a new IOREQ Server for a
--
2.22.0

From nobody Fri May 3 11:52:38 2024
Delivered-To: importer@patchew.org
Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120])
 by mx.zohomail.com with SMTPS id 1567527360286559.0401386565112; Tue, 3 Sep 2019 09:16:00 -0700 (PDT)
From nobody Fri May 3 11:52:38 2024

From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:21 +0200
Message-ID: <20190903161428.7159-5-roger.pau@citrix.com>
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 04/11] ioreq: add fields to allow internal ioreq servers
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are plain function handlers implemented inside the
hypervisor. Note that most of the fields used by current (external) ioreq
servers are not needed for internal ones, and they have hence been placed
inside a struct and packed in a union together with the only
internal-specific field, a function pointer to a handler.

This is required in order to have PCI config accesses forwarded to
external ioreq servers or to internal ones (i.e. QEMU-emulated devices vs
vPCI passthrough), and is the first step towards allowing unprivileged
domains to use vPCI.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
Changes since v1:
 - Do not add an internal field to the ioreq server struct; whether a
   server is internal or external can already be inferred from the id.
 - Add an extra parameter to the internal handler in order to pass
   user-provided opaque data to the handler.
---
 xen/include/asm-x86/hvm/domain.h | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index bcc5621797..9fbe83f45a 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -52,21 +52,29 @@ struct hvm_ioreq_vcpu {
 #define MAX_NR_IO_RANGES 256
 
 struct hvm_ioreq_server {
-    struct domain          *target, *emulator;
-
+    struct domain          *target;
     /* Lock to serialize toolstack modifications */
     spinlock_t             lock;
-
-    struct hvm_ioreq_page  ioreq;
-    struct list_head       ioreq_vcpu_list;
-    struct hvm_ioreq_page  bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t             bufioreq_lock;
-    evtchn_port_t          bufioreq_evtchn;
     struct rangeset        *range[NR_IO_RANGE_TYPES];
     bool                   enabled;
-    uint8_t                bufioreq_handling;
+
+    union {
+        struct {
+            struct domain          *emulator;
+            struct hvm_ioreq_page  ioreq;
+            struct list_head       ioreq_vcpu_list;
+            struct hvm_ioreq_page  bufioreq;
+
+            /* Lock to serialize access to buffered ioreq ring */
+            spinlock_t             bufioreq_lock;
+            evtchn_port_t          bufioreq_evtchn;
+            uint8_t                bufioreq_handling;
+        };
+        struct {
+            void *data;
+            int (*handler)(struct vcpu *v, ioreq_t *, void *);
+        };
+    };
 };
 
 /*
--
2.22.0
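The union introduced in the patch above lets the external-only bookkeeping share storage with the internal handler and its opaque data pointer. A standalone sketch of the same pattern follows; struct server, its field types, and double_req() are simplified hypothetical stand-ins, not the actual Xen structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for struct hvm_ioreq_server: fields needed by both
 * kinds of server come first; the anonymous union overlays external-only
 * state with the internal-only handler and opaque data. */
struct server {
    bool enabled;
    union {
        struct {                 /* external: driven by an emulator domain */
            int emulator_domid;
            int bufioreq_evtchn;
        };
        struct {                 /* internal: function pointer plus opaque data */
            void *data;
            int (*handler)(int req, void *data);
        };
    };
};

/* Trivial internal handler for the sketch. */
static int double_req(int req, void *data)
{
    (void)data;
    return req * 2;
}
```

Because the two variants are never live at the same time (a server is either internal or external for its whole lifetime), the union costs nothing over the larger of the two structs.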
From nobody Fri May 3 11:52:38 2024

From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:22 +0200
Message-ID: <20190903161428.7159-6-roger.pau@citrix.com>
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 05/11] ioreq: add internal ioreq initialization support
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Add support for internal ioreq servers to the initialization and
deinitialization routines, prevent some functions from being executed
against internal ioreq servers, and add guards so that only internal
callers may modify internal ioreq servers. External callers (i.e. from
hypercalls) are only allowed to deal with external ioreq servers.

Signed-off-by: Roger Pau Monné
---
Changes since v1:
 - Do not pass an 'internal' parameter to most functions; instead use
   the id to key whether an ioreq server is internal or external.
 - Prevent enabling an internal server without a handler.
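The guards added below all key off the server id rather than an extra flag. A standalone sketch of the id-space split this series uses follows; the MAX_NR_* constants and the predicate body are copied from the patch, while the surrounding scaffolding (plain assert instead of Xen's ASSERT) is an adaptation for illustration:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_NR_EXTERNAL_IOREQ_SERVERS 8
#define MAX_NR_INTERNAL_IOREQ_SERVERS 1
#define MAX_NR_IOREQ_SERVERS \
    (MAX_NR_EXTERNAL_IOREQ_SERVERS + MAX_NR_INTERNAL_IOREQ_SERVERS)

/* Same predicate as the new hvm_ioreq_is_internal(): ids below the
 * external limit are external, the remainder of the id space is internal. */
static bool ioreq_is_internal(unsigned int id)
{
    assert(id < MAX_NR_IOREQ_SERVERS);
    return id >= MAX_NR_EXTERNAL_IOREQ_SERVERS;
}
```

Keying off the id means no per-server storage is needed to distinguish the two kinds, and the check is valid even before the server struct is fully initialized.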
---
 xen/arch/x86/hvm/dm.c            |  17 ++-
 xen/arch/x86/hvm/ioreq.c         | 173 +++++++++++++++++++------------
 xen/include/asm-x86/hvm/domain.h |   5 +-
 xen/include/asm-x86/hvm/ioreq.h  |   8 +-
 4 files changed, 135 insertions(+), 68 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index c2fca9f729..6a3682e58c 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -417,7 +417,7 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+                                     &data->id, false);
         break;
     }
 
@@ -450,6 +450,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EINVAL;
         if ( data->pad )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
                                               data->start, data->end);
@@ -464,6 +467,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EINVAL;
        if ( data->pad )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
                                                   data->start, data->end);
@@ -481,6 +487,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EOPNOTSUPP;
         if ( !hap_enabled(d) )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         if ( first_gfn == 0 )
             rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
@@ -528,6 +537,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EINVAL;
         if ( data->pad )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
         break;
@@ -541,6 +553,9 @@ static int dm_op(const struct dmop_args *op_args)
         rc = -EINVAL;
         if ( data->pad )
             break;
+        rc = -EPERM;
+        if ( hvm_ioreq_is_internal(data->id) )
+            break;
 
         rc = hvm_destroy_ioreq_server(d, data->id);
         break;
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 95492bc111..dbc5e6b4c5 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -59,10 +59,11 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
 /*
  * Iterate over all possible ioreq servers.
  *
- * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
- *       This is a semantic that previously existed when ioreq servers
- *       were held in a linked list.
+ * NOTE: The iteration is backwards such that internal and more recently
+ *       created external ioreq servers are favoured in
+ *       hvm_select_ioreq_server().
+ *       This is a semantic that previously existed for external servers when
+ *       ioreq servers were held in a linked list.
  */
 #define FOR_EACH_IOREQ_SERVER(d, id, s) \
     for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
@@ -70,6 +71,12 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
             continue; \
         else
 
+#define FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s) \
+    for ( (id) = MAX_NR_EXTERNAL_IOREQ_SERVERS; (id) != 0; ) \
+        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
+            continue; \
+        else
+
 static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
 {
     shared_iopage_t *p = s->ioreq.va;
@@ -86,7 +93,7 @@ bool hvm_io_pending(struct vcpu *v)
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
@@ -190,7 +197,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return false;
     }
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
@@ -430,7 +437,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
     {
         if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
@@ -688,7 +695,7 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s, bool internal)
 {
     struct hvm_ioreq_vcpu *sv;
 
@@ -697,29 +704,40 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    if ( !internal )
+    {
+        hvm_remove_ioreq_gfn(s, false);
+        hvm_remove_ioreq_gfn(s, true);
 
-    s->enabled = true;
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+            hvm_update_ioreq_evtchn(s, sv);
+    }
+    else if ( !s->handler )
+    {
+        ASSERT_UNREACHABLE();
+        goto done;
+    }
 
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+    s->enabled = true;
 
   done:
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s, bool internal)
 {
     spin_lock(&s->lock);
 
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    if ( !internal )
+    {
+        hvm_add_ioreq_gfn(s, true);
+        hvm_add_ioreq_gfn(s, false);
+    }
 
     s->enabled = false;
 
@@ -736,33 +754,39 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     int rc;
 
     s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
     spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
 
     rc = hvm_ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
 
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
+    if ( !hvm_ioreq_is_internal(id) )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
+        get_knownalive_domain(currd);
+
+        s->emulator = currd;
+        INIT_LIST_HEAD(&s->ioreq_vcpu_list);
+        spin_lock_init(&s->bufioreq_lock);
+
+        s->ioreq.gfn = INVALID_GFN;
+        s->bufioreq.gfn = INVALID_GFN;
+
+        s->bufioreq_handling = bufioreq_handling;
+
+        for_each_vcpu ( d, v )
+        {
+            rc = hvm_ioreq_server_add_vcpu(s, v);
+            if ( rc )
+                goto fail_add;
+        }
     }
+    else
+        s->handler = NULL;
 
     return 0;
 
  fail_add:
+    ASSERT(!hvm_ioreq_is_internal(id));
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
 
@@ -772,30 +796,34 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s, bool internal)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
 
     hvm_ioreq_server_free_rangesets(s);
 
-    put_domain(s->emulator);
+    if ( !internal )
+    {
+        hvm_ioreq_server_remove_all_vcpus(s);
+
+        /*
+         * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+         *       hvm_ioreq_server_free_pages() in that order.
+         *       This is because the former will do nothing if the pages
+         *       are not mapped, leaving the page to be freed by the latter.
+         *       However if the pages are mapped then the former will set
+         *       the page_info pointer to NULL, meaning the latter will do
+         *       nothing.
+         */
+        hvm_ioreq_server_unmap_pages(s);
+        hvm_ioreq_server_free_pages(s);
+
+        put_domain(s->emulator);
+    }
 }
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+                            ioservid_t *id, bool internal)
 {
     struct hvm_ioreq_server *s;
     unsigned int i;
@@ -811,7 +839,9 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     domain_pause(d);
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
+    for ( i = (internal ? MAX_NR_EXTERNAL_IOREQ_SERVERS : 0);
+          i < (internal ? MAX_NR_IOREQ_SERVERS : MAX_NR_EXTERNAL_IOREQ_SERVERS);
+          i++ )
     {
         if ( !GET_IOREQ_SERVER(d, i) )
             break;
@@ -821,6 +851,9 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     if ( i >= MAX_NR_IOREQ_SERVERS )
         goto fail;
 
+    ASSERT((internal &&
+            i >= MAX_NR_EXTERNAL_IOREQ_SERVERS && i < MAX_NR_IOREQ_SERVERS) ||
+           (!internal && i < MAX_NR_EXTERNAL_IOREQ_SERVERS));
     /*
      * It is safe to call set_ioreq_server() prior to
     * hvm_ioreq_server_init() since the target domain is paused.
@@ -864,20 +897,21 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /* NB: internal servers cannot be destroyed. */
+    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
         goto out;
 
     domain_pause(d);
 
     p2m_set_ioreq_server(d, 0, id);
 
-    hvm_ioreq_server_disable(s);
+    hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
     /*
      * It is safe to call hvm_ioreq_server_deinit() prior to
      * set_ioreq_server() since the target domain is paused.
      */
-    hvm_ioreq_server_deinit(s);
+    hvm_ioreq_server_deinit(s, hvm_ioreq_is_internal(id));
     set_ioreq_server(d, id, NULL);
 
     domain_unpause(d);
@@ -909,7 +943,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /* NB: don't allow fetching information from internal ioreq servers. */
+    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
         goto out;
 
     if ( ioreq_gfn || bufioreq_gfn )
@@ -956,7 +991,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
         goto out;
 
     rc = hvm_ioreq_server_alloc_pages(s);
@@ -1010,7 +1045,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
         goto out;
 
     switch ( type )
@@ -1068,7 +1103,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
         goto out;
 
     switch ( type )
@@ -1128,6 +1163,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( !s )
         goto out;
 
+    /*
+     * NB: do not support mapping internal ioreq servers to memory types, as
+     * the current internal ioreq servers don't need this feature and it's not
+     * been tested.
+     */
+    rc = -EINVAL;
+    if ( hvm_ioreq_is_internal(id) )
+        goto out;
     rc = -EPERM;
     if ( s->emulator != current->domain )
         goto out;
@@ -1163,15 +1206,15 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( !hvm_ioreq_is_internal(id) && s->emulator != current->domain )
        goto out;
 
     domain_pause(d);
 
     if ( enabled )
-        hvm_ioreq_server_enable(s);
+        hvm_ioreq_server_enable(s, hvm_ioreq_is_internal(id));
     else
-        hvm_ioreq_server_disable(s);
+        hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
     domain_unpause(d);
 
@@ -1190,7 +1233,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
     {
         rc = hvm_ioreq_server_add_vcpu(s, v);
         if ( rc )
@@ -1202,7 +1245,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     return 0;
 
  fail:
-    while ( id++ != MAX_NR_IOREQ_SERVERS )
+    while ( id++ != MAX_NR_EXTERNAL_IOREQ_SERVERS )
     {
         s = GET_IOREQ_SERVER(d, id);
 
@@ -1224,7 +1267,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
+    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
         hvm_ioreq_server_remove_vcpu(s, v);
 
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1241,13 +1284,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        hvm_ioreq_server_disable(s);
+        hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
 
         /*
         * It is safe to call hvm_ioreq_server_deinit() prior to
         * set_ioreq_server() since the target domain is being destroyed.
         */
-        hvm_ioreq_server_deinit(s);
+        hvm_ioreq_server_deinit(s, hvm_ioreq_is_internal(id));
         set_ioreq_server(d, id, NULL);
 
         xfree(s);
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 9fbe83f45a..9f92838b6e 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -97,7 +97,10 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };
 
-#define MAX_NR_IOREQ_SERVERS 8
+#define MAX_NR_EXTERNAL_IOREQ_SERVERS 8
+#define MAX_NR_INTERNAL_IOREQ_SERVERS 1
+#define MAX_NR_IOREQ_SERVERS \
+    (MAX_NR_EXTERNAL_IOREQ_SERVERS + MAX_NR_INTERNAL_IOREQ_SERVERS)
 
 struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 65491c48d2..c3917aa74d 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -24,7 +24,7 @@ bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
+                            ioservid_t *id, bool internal);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
@@ -54,6 +54,12 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+static inline bool hvm_ioreq_is_internal(unsigned int id)
+{
+    ASSERT(id < MAX_NR_IOREQ_SERVERS);
+    return id >= MAX_NR_EXTERNAL_IOREQ_SERVERS;
+}
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
--
2.22.0
From nobody Fri May 3 11:52:38 2024

From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:23 +0200
Message-ID: <20190903161428.7159-7-roger.pau@citrix.com>
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are always processed first, and ioreqs are
dispatched simply by calling the handler function. Note that this is
already the case due to the implementation of FOR_EACH_IOREQ_SERVER.

Note that hvm_send_ioreq doesn't get passed the ioreq server id, so
obtain it from the ioreq server data by doing pointer arithmetic.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
Changes since v1:
 - Avoid having to iterate twice over the list of ioreq servers, since
   internal servers are now always processed first by
   FOR_EACH_IOREQ_SERVER.
 - Obtain the ioreq server id using pointer arithmetic.
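The dispatch change below reduces to the following standalone sketch. All types and names here (struct server, send_ioreq(), the EMUL_* stand-ins for X86EMUL_* values) are simplified hypothetical substitutes used only to show the control flow, not the Xen definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum { EMUL_OKAY, EMUL_UNHANDLEABLE };   /* stand-ins for X86EMUL_* */

#define NR_EXTERNAL 8

struct server {
    int (*handler)(int *req, void *data);  /* set for internal servers only */
    void *data;                            /* opaque data passed to handler */
};

static bool is_internal(unsigned int id) { return id >= NR_EXTERNAL; }

/* Mirrors the new hvm_send_ioreq() ordering: buffered ioreqs are never
 * valid for internal servers; internal requests call the handler directly
 * instead of going through the shared ioreq pages and event channels. */
static int send_ioreq(unsigned int id, struct server *s, int *req, bool buffered)
{
    if ( is_internal(id) && buffered )
        return EMUL_UNHANDLEABLE;          /* ASSERT_UNREACHABLE() in the patch */

    if ( is_internal(id) )
        return s->handler(req, s->data);

    return EMUL_OKAY;                      /* external delivery path elided */
}

/* Trivial handler for the sketch: pretend we emulated the access. */
static int my_handler(int *req, void *data)
{
    (void)data;
    *req = 7;
    return EMUL_OKAY;
}
```

The key point is that an internal dispatch is a synchronous function call on the current vCPU, so none of the shutdown-deferral or event-channel machinery of the external path is involved.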
---
 xen/arch/x86/hvm/ioreq.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index dbc5e6b4c5..8331a89eae 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1493,9 +1493,18 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
 
     ASSERT(s);
 
+    if ( hvm_ioreq_is_internal(id) && buffered )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
     if ( buffered )
         return hvm_send_buffered_ioreq(s, proto_p);
 
+    if ( hvm_ioreq_is_internal(id) )
+        return s->handler(curr, proto_p, s->data);
+
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;
 
--
2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Fri May 3 11:52:38 2024
From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:24 +0200
Message-ID: <20190903161428.7159-8-roger.pau@citrix.com>
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 07/11] ioreq: allow registering internal ioreq server handler
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Provide a routine to register the handler for an internal ioreq server.

Signed-off-by: Roger Pau Monné
---
Changes since v1:
 - Allow providing an opaque data parameter to pass to the handler.
 - Allow changing the handler as long as the server is not enabled.
---
 xen/arch/x86/hvm/ioreq.c        | 35 +++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |  4 ++++
 2 files changed, 39 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 8331a89eae..6339e5f884 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -485,6 +485,41 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(struct vcpu *v, ioreq_t *, void *),
+                          void *data)
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    if ( !hvm_ioreq_is_internal(id) )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( s->enabled )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->handler = handler;
+    s->data = data;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index c3917aa74d..90cc2aa938 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -54,6 +54,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(struct vcpu *v, ioreq_t *, void *),
+                          void *data);
+
 static inline bool hvm_ioreq_is_internal(unsigned int id)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
--
2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Fri May 3 11:52:38 2024
From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:25 +0200
Message-ID: <20190903161428.7159-9-roger.pau@citrix.com>
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 08/11] ioreq: allow decoding accesses to MMCFG regions
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne
Pick up on the infrastructure already added for vPCI and allow ioreq
to decode accesses to MMCFG regions registered for a domain.

This infrastructure is still only accessible from internal callers, so
MMCFG regions can only be registered from the internal domain builder
used by PVH dom0.

Note that the vPCI infrastructure to decode and handle accesses to
MMCFG regions will be removed in the following patches when vPCI is
switched to become an internal ioreq server.

Signed-off-by: Roger Pau Monné
---
Changes since v1:
 - Remove prototype for destroy_vpci_mmcfg.
 - Keep the code in io.c so PCI accesses to MMCFG regions can be
   decoded before ioreq processing.
---
 xen/arch/x86/hvm/dom0_build.c       |  8 +--
 xen/arch/x86/hvm/hvm.c              |  2 +-
 xen/arch/x86/hvm/io.c               | 79 ++++++++++++-----------
 xen/arch/x86/hvm/ioreq.c            | 47 ++++++++++++-----
 xen/arch/x86/physdev.c              |  5 +-
 xen/drivers/passthrough/x86/iommu.c |  2 +-
 xen/include/asm-x86/hvm/io.h        | 29 ++++++---
 7 files changed, 96 insertions(+), 76 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 8845399ae9..1ddbd46b39 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -1117,10 +1117,10 @@ static void __hwdom_init pvh_setup_mmcfg(struct domain *d)
 
     for ( i = 0; i < pci_mmcfg_config_num; i++ )
     {
-        rc = register_vpci_mmcfg_handler(d, pci_mmcfg_config[i].address,
-                                         pci_mmcfg_config[i].start_bus_number,
-                                         pci_mmcfg_config[i].end_bus_number,
-                                         pci_mmcfg_config[i].pci_segment);
+        rc = hvm_register_mmcfg(d, pci_mmcfg_config[i].address,
+                                pci_mmcfg_config[i].start_bus_number,
+                                pci_mmcfg_config[i].end_bus_number,
+                                pci_mmcfg_config[i].pci_segment);
         if ( rc )
             printk("Unable to setup MMCFG handler at %#lx for segment %u\n",
                    pci_mmcfg_config[i].address,
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2b8189946b..fec0073618 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -741,7 +741,7 @@ void hvm_domain_destroy(struct domain *d)
         xfree(ioport);
     }
 
-    destroy_vpci_mmcfg(d);
+    hvm_free_mmcfg(d);
 }
 
 static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a5b0a23f06..3334888136 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -279,6 +279,18 @@ unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
     return CF8_ADDR_LO(cf8) | (addr & 3);
 }
 
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf)
+{
+    addr -= mmcfg->addr;
+    sbdf->bdf = MMCFG_BDF(addr);
+    sbdf->bus += mmcfg->start_bus;
+    sbdf->seg = mmcfg->segment;
+
+    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
+}
+
+
 /* Do some sanity checks. */
 static bool vpci_access_allowed(unsigned int reg, unsigned int len)
 {
@@ -383,50 +395,14 @@ void register_vpci_portio_handler(struct domain *d)
     handler->ops = &vpci_portio_ops;
 }
 
-struct hvm_mmcfg {
-    struct list_head next;
-    paddr_t addr;
-    unsigned int size;
-    uint16_t segment;
-    uint8_t start_bus;
-};
-
 /* Handlers to trap PCI MMCFG config accesses. */
-static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
-                                               paddr_t addr)
-{
-    const struct hvm_mmcfg *mmcfg;
-
-    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
-        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
-            return mmcfg;
-
-    return NULL;
-}
-
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr)
-{
-    return vpci_mmcfg_find(d, addr);
-}
-
-static unsigned int vpci_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
-                                           paddr_t addr, pci_sbdf_t *sbdf)
-{
-    addr -= mmcfg->addr;
-    sbdf->bdf = MMCFG_BDF(addr);
-    sbdf->bus += mmcfg->start_bus;
-    sbdf->seg = mmcfg->segment;
-
-    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
-}
-
 static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr)
 {
     struct domain *d = v->domain;
     bool found;
 
     read_lock(&d->arch.hvm.mmcfg_lock);
-    found = vpci_mmcfg_find(d, addr);
+    found = hvm_is_mmcfg_address(d, addr);
     read_unlock(&d->arch.hvm.mmcfg_lock);
 
     return found;
@@ -443,14 +419,14 @@ static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr,
     *data = ~0ul;
 
     read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = vpci_mmcfg_find(d, addr);
+    mmcfg = hvm_mmcfg_find(d, addr);
     if ( !mmcfg )
     {
         read_unlock(&d->arch.hvm.mmcfg_lock);
         return X86EMUL_RETRY;
     }
 
-    reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
+    reg = hvm_mmcfg_decode_addr(mmcfg, addr, &sbdf);
     read_unlock(&d->arch.hvm.mmcfg_lock);
 
     if ( !vpci_access_allowed(reg, len) ||
@@ -485,14 +461,14 @@ static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr,
     pci_sbdf_t sbdf;
 
     read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = vpci_mmcfg_find(d, addr);
+    mmcfg = hvm_mmcfg_find(d, addr);
     if ( !mmcfg )
    {
         read_unlock(&d->arch.hvm.mmcfg_lock);
         return X86EMUL_RETRY;
     }
 
-    reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
+    reg = hvm_mmcfg_decode_addr(mmcfg, addr, &sbdf);
     read_unlock(&d->arch.hvm.mmcfg_lock);
 
     if ( !vpci_access_allowed(reg, len) ||
@@ -512,9 +488,9 @@ static const struct hvm_mmio_ops vpci_mmcfg_ops = {
     .write = vpci_mmcfg_write,
 };
 
-int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
-                                unsigned int start_bus, unsigned int end_bus,
-                                unsigned int seg)
+int hvm_register_mmcfg(struct domain *d, paddr_t addr,
+                       unsigned int start_bus, unsigned int end_bus,
+                       unsigned int seg)
 {
     struct hvm_mmcfg *mmcfg, *new;
 
@@ -549,7 +525,7 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
         return ret;
     }
 
-    if ( list_empty(&d->arch.hvm.mmcfg_regions) )
+    if ( list_empty(&d->arch.hvm.mmcfg_regions) && has_vpci(d) )
         register_mmio_handler(d, &vpci_mmcfg_ops);
 
     list_add(&new->next, &d->arch.hvm.mmcfg_regions);
@@ -558,7 +534,7 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
     return 0;
 }
 
-void destroy_vpci_mmcfg(struct domain *d)
+void hvm_free_mmcfg(struct domain *d)
 {
     struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
 
@@ -574,6 +550,17 @@ void destroy_vpci_mmcfg(struct domain *d)
     write_unlock(&d->arch.hvm.mmcfg_lock);
 }
 
+const struct hvm_mmcfg *hvm_mmcfg_find(const struct domain *d, paddr_t addr)
+{
+    const struct hvm_mmcfg *mmcfg;
+
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
+            return mmcfg;
+
+    return NULL;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 6339e5f884..fecdc2786f 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1090,21 +1090,34 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         /* PCI config space accesses are handled internally. */
         if ( start <= 0xcf8 + 8 && 0xcf8 <= end )
             goto out;
-        else
-            /* fallthrough. */
+        break;
+
     case XEN_DMOP_IO_RANGE_MEMORY:
+    {
+        const struct hvm_mmcfg *mmcfg;
+
+        rc = -EINVAL;
+        /* PCI config space accesses are handled internally. */
+        read_lock(&d->arch.hvm.mmcfg_lock);
+        list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+            if ( start <= mmcfg->addr + mmcfg->size && mmcfg->addr <= end )
+            {
+                read_unlock(&d->arch.hvm.mmcfg_lock);
+                goto out;
+            }
+        read_unlock(&d->arch.hvm.mmcfg_lock);
+        break;
+    }
+
     case XEN_DMOP_IO_RANGE_PCI:
-        r = s->range[type];
         break;
 
     default:
-        r = NULL;
-        break;
+        rc = -EINVAL;
+        goto out;
     }
 
-    rc = -EINVAL;
-    if ( !r )
-        goto out;
+    r = s->range[type];
 
     rc = -EEXIST;
     if ( rangeset_overlaps_range(r, start, end) )
@@ -1341,27 +1354,34 @@ ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
     uint8_t type;
     uint64_t addr;
     unsigned int id;
+    const struct hvm_mmcfg *mmcfg;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return XEN_INVALID_IOSERVID;
 
     cf8 = d->arch.hvm.pci_cf8;
 
-    if ( p->type == IOREQ_TYPE_PIO &&
-         (p->addr & ~3) == 0xcfc &&
-         CF8_ENABLED(cf8) )
+    read_lock(&d->arch.hvm.mmcfg_lock);
+    if ( (p->type == IOREQ_TYPE_PIO &&
+          (p->addr & ~3) == 0xcfc &&
+          CF8_ENABLED(cf8)) ||
+         (p->type == IOREQ_TYPE_COPY &&
+          (mmcfg = hvm_mmcfg_find(d, p->addr)) != NULL) )
     {
         uint32_t x86_fam;
         pci_sbdf_t sbdf;
         unsigned int reg;
 
-        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
+        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
+                                                              &sbdf)
+                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
+                                                                &sbdf);
 
         /* PCI config data cycle */
         type = XEN_DMOP_IO_RANGE_PCI;
         addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
-        if ( CF8_ADDR_HI(cf8) &&
+        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
              d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
              (x86_fam = get_cpu_family(
                  d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
@@ -1380,6 +1400,7 @@ ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
                    XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
         addr = p->addr;
     }
+    read_unlock(&d->arch.hvm.mmcfg_lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 3a3c15890b..f61f66df5f 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -562,9 +562,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
              * For HVM (PVH) domains try to add the newly found MMCFG to the
              * domain.
              */
-            ret = register_vpci_mmcfg_handler(currd, info.address,
-                                              info.start_bus, info.end_bus,
-                                              info.segment);
+            ret = hvm_register_mmcfg(currd, info.address, info.start_bus,
+                                     info.end_bus, info.segment);
         }
 
         break;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 92c1d01edf..a33e31e361 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -246,7 +246,7 @@ static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
      * TODO: runtime added MMCFG regions are not checked to make sure they
      * don't overlap with already mapped regions, thus preventing trapping.
      */
-    if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
+    if ( has_vpci(d) && hvm_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
         return false;
 
     return true;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 7ceb119b64..86ebbd1e7e 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -165,9 +165,19 @@ void stdvga_deinit(struct domain *d);
 
 extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
 
-/* Decode a PCI port IO access into a bus/slot/func/reg. */
+struct hvm_mmcfg {
+    struct list_head next;
+    paddr_t addr;
+    unsigned int size;
+    uint16_t segment;
+    uint8_t start_bus;
+};
+
+/* Decode a PCI port IO or MMCFG access into a bus/slot/func/reg. */
 unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
                                  pci_sbdf_t *sbdf);
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf);
 
 /*
  * HVM port IO handler that performs forwarding of guest IO ports into machine
@@ -178,15 +188,18 @@ void register_g2m_portio_handler(struct domain *d);
 /* HVM port IO handler for vPCI accesses. */
 void register_vpci_portio_handler(struct domain *d);
 
-/* HVM MMIO handler for PCI MMCFG accesses. */
-int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
-                                unsigned int start_bus, unsigned int end_bus,
-                                unsigned int seg);
-/* Destroy tracked MMCFG areas. */
-void destroy_vpci_mmcfg(struct domain *d);
+/* HVM PCI MMCFG regions registration. */
+int hvm_register_mmcfg(struct domain *d, paddr_t addr,
+                       unsigned int start_bus, unsigned int end_bus,
+                       unsigned int seg);
+void hvm_free_mmcfg(struct domain *d);
+const struct hvm_mmcfg *hvm_mmcfg_find(const struct domain *d, paddr_t addr);
 
 /* Check if an address is between a MMCFG region for a domain. */
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr);
+static inline bool hvm_is_mmcfg_address(const struct domain *d, paddr_t addr)
+{
+    return hvm_mmcfg_find(d, addr);
+}
 
 #endif /* __ASM_X86_HVM_IO_H__ */
 
--
2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Fri May 3 11:52:38 2024
From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:26 +0200
Message-ID: <20190903161428.7159-10-roger.pau@citrix.com>
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 09/11] vpci: register as an internal ioreq server
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
    Jan Beulich, Roger Pau Monne
Switch vPCI to an internal ioreq server, and hence drop all the
vPCI-specific decoding and trapping of PCI IO ports and MMCFG regions.

This unifies the vPCI code with the ioreq infrastructure, opening the
door for domains to have PCI accesses handled by vPCI and other ioreq
servers at the same time.

Signed-off-by: Roger Pau Monné
---
Changes since v1:
 - Remove prototypes for register_vpci_portio_handler and
   register_vpci_mmcfg_handler.
 - Re-add vpci check in hwdom_iommu_map.
 - Fix test harness.
 - Remove vpci_{read/write} prototypes and make the functions static.
---
 tools/tests/vpci/Makefile     |   5 +-
 tools/tests/vpci/emul.h       |   4 +
 xen/arch/x86/hvm/dom0_build.c |   1 +
 xen/arch/x86/hvm/hvm.c        |   5 +-
 xen/arch/x86/hvm/io.c         | 201 ----------------------------------
 xen/arch/x86/physdev.c        |   1 +
 xen/drivers/vpci/vpci.c       |  69 +++++++++++-
 xen/include/xen/vpci.h        |  22 +---
 8 files changed, 86 insertions(+), 222 deletions(-)

diff --git a/tools/tests/vpci/Makefile b/tools/tests/vpci/Makefile
index 5075bc2be2..c365c4522a 100644
--- a/tools/tests/vpci/Makefile
+++ b/tools/tests/vpci/Makefile
@@ -25,7 +25,10 @@ install:
 
 vpci.c: $(XEN_ROOT)/xen/drivers/vpci/vpci.c
 # Remove includes and add the test harness header
-	sed -e '/#include/d' -e '1s/^/#include "emul.h"/' <$< >$@
+	sed -e '/#include/d' -e '1s/^/#include "emul.h"/' \
+	    -e 's/^static uint32_t read/uint32_t vpci_read/' \
+	    -e 's/^static void write/void vpci_write/' <$< >$@
+
 
 list.h: $(XEN_ROOT)/xen/include/xen/list.h
 vpci.h: $(XEN_ROOT)/xen/include/xen/vpci.h
diff --git a/tools/tests/vpci/emul.h b/tools/tests/vpci/emul.h
index 796797fdc2..790c4de601 100644
--- a/tools/tests/vpci/emul.h
+++ b/tools/tests/vpci/emul.h
@@ -125,6 +125,10 @@ typedef union {
         tx > ty ? tx : ty;              \
 })
 
+uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size);
+void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
+                uint32_t data);
+
 #endif
 
 /*
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 1ddbd46b39..c022502bb8 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -29,6 +29,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index fec0073618..228c79643d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -644,10 +644,13 @@ int hvm_domain_initialise(struct domain *d)
     d->arch.hvm.io_bitmap = hvm_io_bitmap;
 
     register_g2m_portio_handler(d);
-    register_vpci_portio_handler(d);
 
     hvm_ioreq_init(d);
 
+    rc = vpci_register_ioreq(d);
+    if ( rc )
+        goto fail1;
+
     hvm_init_guest_time(d);
 
     d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 3334888136..4c72e68a5b 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -290,204 +290,6 @@ unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
     return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
 }
 
-
-/* Do some sanity checks. */
-static bool vpci_access_allowed(unsigned int reg, unsigned int len)
-{
-    /* Check access size. */
-    if ( len != 1 && len != 2 && len != 4 && len != 8 )
-        return false;
-
-    /* Check that access is size aligned. */
-    if ( (reg & (len - 1)) )
-        return false;
-
-    return true;
-}
-
-/* vPCI config space IO ports handlers (0xcf8/0xcfc). */
-static bool vpci_portio_accept(const struct hvm_io_handler *handler,
-                               const ioreq_t *p)
-{
-    return (p->addr == 0xcf8 && p->size == 4) || (p->addr & ~3) == 0xcfc;
-}
-
-static int vpci_portio_read(const struct hvm_io_handler *handler,
-                            uint64_t addr, uint32_t size, uint64_t *data)
-{
-    const struct domain *d = current->domain;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-    uint32_t cf8;
-
-    *data = ~(uint64_t)0;
-
-    if ( addr == 0xcf8 )
-    {
-        ASSERT(size == 4);
-        *data = d->arch.hvm.pci_cf8;
-        return X86EMUL_OKAY;
-    }
-
-    ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
-    if ( !CF8_ENABLED(cf8) )
-        return X86EMUL_UNHANDLEABLE;
-
-    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
-
-    if ( !vpci_access_allowed(reg, size) )
-        return X86EMUL_OKAY;
-
-    *data = vpci_read(sbdf, reg, size);
-
-    return X86EMUL_OKAY;
-}
-
-static int vpci_portio_write(const struct hvm_io_handler *handler,
-                             uint64_t addr, uint32_t size, uint64_t data)
-{
-    struct domain *d = current->domain;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-    uint32_t cf8;
-
-    if ( addr == 0xcf8 )
-    {
-        ASSERT(size == 4);
-        d->arch.hvm.pci_cf8 = data;
-        return X86EMUL_OKAY;
-    }
-
-    ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
-    if ( !CF8_ENABLED(cf8) )
-        return X86EMUL_UNHANDLEABLE;
-
-    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
-
-    if ( !vpci_access_allowed(reg, size) )
-        return X86EMUL_OKAY;
-
-    vpci_write(sbdf, reg, size, data);
-
-    return X86EMUL_OKAY;
-}
-
-static const struct hvm_io_ops vpci_portio_ops = {
-    .accept = vpci_portio_accept,
-    .read = vpci_portio_read,
-    .write = vpci_portio_write,
-};
-
-void register_vpci_portio_handler(struct domain *d)
-{
-    struct hvm_io_handler *handler;
-
-    if ( !has_vpci(d) )
-        return;
-
-    handler = hvm_next_io_handler(d);
-    if ( !handler )
-        return;
-
-    handler->type = IOREQ_TYPE_PIO;
-    handler->ops = &vpci_portio_ops;
-}
-
-/* Handlers to trap PCI MMCFG config accesses. */
-static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr)
-{
-    struct domain *d = v->domain;
-    bool found;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    found = hvm_is_mmcfg_address(d, addr);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    return found;
-}
-
-static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr,
-                           unsigned int len, unsigned long *data)
-{
-    struct domain *d = v->domain;
-    const struct hvm_mmcfg *mmcfg;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-
-    *data = ~0ul;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = hvm_mmcfg_find(d, addr);
-    if ( !mmcfg )
-    {
-        read_unlock(&d->arch.hvm.mmcfg_lock);
-        return X86EMUL_RETRY;
-    }
-
-    reg = hvm_mmcfg_decode_addr(mmcfg, addr, &sbdf);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    if ( !vpci_access_allowed(reg, len) ||
-         (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
-        return X86EMUL_OKAY;
-
-    /*
-     * According to the PCIe 3.1A specification:
-     *  - Configuration Reads and Writes must usually be DWORD or smaller
-     *    in size.
-     *  - Because Root Complex implementations are not required to support
-     *    accesses to a RCRB that cross DW boundaries [...] software
-     *    should take care not to cause the generation of such accesses
-     *    when accessing a RCRB unless the Root Complex will support the
-     *    access.
-     *  Xen however supports 8byte accesses by splitting them into two
-     *  4byte accesses.
-     */
-    *data = vpci_read(sbdf, reg, min(4u, len));
-    if ( len == 8 )
-        *data |= (uint64_t)vpci_read(sbdf, reg + 4, 4) << 32;
-
-    return X86EMUL_OKAY;
-}
-
-static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr,
-                            unsigned int len, unsigned long data)
-{
-    struct domain *d = v->domain;
-    const struct hvm_mmcfg *mmcfg;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    mmcfg = hvm_mmcfg_find(d, addr);
-    if ( !mmcfg )
-    {
-        read_unlock(&d->arch.hvm.mmcfg_lock);
-        return X86EMUL_RETRY;
-    }
-
-    reg = hvm_mmcfg_decode_addr(mmcfg, addr, &sbdf);
-    read_unlock(&d->arch.hvm.mmcfg_lock);
-
-    if ( !vpci_access_allowed(reg, len) ||
-         (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
-        return X86EMUL_OKAY;
-
-    vpci_write(sbdf, reg, min(4u, len), data);
-    if ( len == 8 )
-        vpci_write(sbdf, reg + 4, 4, data >> 32);
-
-    return X86EMUL_OKAY;
-}
-
-static const struct hvm_mmio_ops vpci_mmcfg_ops = {
-    .check = vpci_mmcfg_accept,
-    .read = vpci_mmcfg_read,
-    .write = vpci_mmcfg_write,
-};
-
 int hvm_register_mmcfg(struct domain *d, paddr_t addr,
                        unsigned int start_bus, unsigned int end_bus,
                        unsigned int seg)
@@ -525,9 +327,6 @@ int hvm_register_mmcfg(struct domain *d, paddr_t addr,
         return ret;
     }
 
-    if ( list_empty(&d->arch.hvm.mmcfg_regions) && has_vpci(d) )
-        register_mmio_handler(d, &vpci_mmcfg_ops);
-
     list_add(&new->next, &d->arch.hvm.mmcfg_regions);
     write_unlock(&d->arch.hvm.mmcfg_lock);
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index f61f66df5f..bf2c64a0a9 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index cbd1bac7fc..5664020c2d 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -20,6 +20,8 @@
 #include
 #include
 
+#include
+
 /* Internal struct to store the emulated PCI registers. */
 struct vpci_register {
     vpci_read_t *read;
@@ -302,7 +304,7 @@ static uint32_t merge_result(uint32_t data, uint32_t new, unsigned int size,
     return (data & ~(mask << (offset * 8))) | ((new & mask) << (offset * 8));
 }
 
-uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
+static uint32_t read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
 {
     const struct domain *d = current->domain;
     const struct pci_dev *pdev;
@@ -404,8 +406,8 @@ static void vpci_write_helper(const struct pci_dev *pdev,
                      r->private);
 }
 
-void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
-                uint32_t data)
+static void write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
+                  uint32_t data)
 {
     const struct domain *d = current->domain;
     const struct pci_dev *pdev;
@@ -478,6 +480,67 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
     spin_unlock(&pdev->vpci->lock);
 }
 
+#ifdef __XEN__
+static int ioreq_handler(struct vcpu *v, ioreq_t *req, void *data)
+{
+    pci_sbdf_t sbdf;
+
+    if ( req->type == IOREQ_TYPE_INVALIDATE )
+        /*
+         * Ignore invalidate requests, those can be received even without
+         * having any memory ranges registered, see send_invalidate_req.
+         */
+        return X86EMUL_OKAY;
+
+    if ( req->type != IOREQ_TYPE_PCI_CONFIG || req->data_is_ptr )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    sbdf.sbdf = req->addr >> 32;
+
+    if ( req->dir )
+        req->data = read(sbdf, req->addr, req->size);
+    else
+        write(sbdf, req->addr, req->size, req->data);
+
+    return X86EMUL_OKAY;
+}
+
+int vpci_register_ioreq(struct domain *d)
+{
+    ioservid_t id;
+    int rc;
+
+    if ( !has_vpci(d) )
+        return 0;
+
+    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
+    if ( rc )
+        return rc;
+
+    rc = hvm_add_ioreq_handler(d, id, ioreq_handler, NULL);
+    if ( rc )
+        return rc;
+
+    if ( is_hardware_domain(d) )
+    {
+        /* Handle all devices in vpci. */
+        rc = hvm_map_io_range_to_ioreq_server(d, id, XEN_DMOP_IO_RANGE_PCI,
+                                              0, ~(uint64_t)0);
+        if ( rc )
+            return rc;
+    }
+
+    rc = hvm_set_ioreq_server_state(d, id, true);
+    if ( rc )
+        return rc;
+
+    return rc;
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 4cf233c779..36f435ed5b 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -23,6 +23,9 @@ typedef int vpci_register_init_t(struct pci_dev *dev);
     static vpci_register_init_t *const x##_entry  \
         __used_section(".data.vpci." p) = x
 
+/* Register vPCI handler with ioreq. */
+int vpci_register_ioreq(struct domain *d);
+
 /* Add vPCI handlers to device. */
 int __must_check vpci_add_handlers(struct pci_dev *dev);
 
@@ -38,11 +41,6 @@ int __must_check vpci_add_register(struct vpci *vpci,
 int __must_check vpci_remove_register(struct vpci *vpci, unsigned int offset,
                                       unsigned int size);
 
-/* Generic read/write handlers for the PCI config space. */
-uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size);
-void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
-                uint32_t data);
-
 /* Passthrough handlers. */
 uint32_t vpci_hw_read16(const struct pci_dev *pdev, unsigned int reg,
                         void *data);
@@ -221,20 +219,12 @@ static inline int vpci_add_handlers(struct pci_dev *pdev)
     return 0;
 }
 
-static inline void vpci_dump_msi(void) { }
-
-static inline uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg,
-                                 unsigned int size)
+static inline int vpci_register_ioreq(struct domain *d)
 {
-    ASSERT_UNREACHABLE();
-    return ~(uint32_t)0;
+    return 0;
 }
 
-static inline void vpci_write(pci_sbdf_t sbdf, unsigned int reg,
-                              unsigned int size, uint32_t data)
-{
-    ASSERT_UNREACHABLE();
-}
+static inline void vpci_dump_msi(void) { }
 
 static inline bool vpci_process_pending(struct vcpu *v)
 {
--
2.22.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:27 +0200
Message-ID: <20190903161428.7159-11-roger.pau@citrix.com>
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 10/11] ioreq: split the code to detect PCI config space accesses
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Place the code that converts a PIO/COPY ioreq into a PCI_CONFIG one
into a separate function, and adjust the callers to make use of this
newly introduced function.

No functional change intended.

Signed-off-by: Roger Pau Monné
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/ioreq.c | 111 +++++++++++++++++++++++----------------
 1 file changed, 67 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index fecdc2786f..33c56b880c 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -183,6 +183,54 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
+static void convert_pci_ioreq(struct domain *d, ioreq_t *p)
+{
+    const struct hvm_mmcfg *mmcfg;
+    uint32_t cf8 = d->arch.hvm.pci_cf8;
+
+    if ( p->type != IOREQ_TYPE_PIO && p->type != IOREQ_TYPE_COPY )
+    {
+        ASSERT_UNREACHABLE();
+        return;
+    }
+
+    read_lock(&d->arch.hvm.mmcfg_lock);
+    if ( (p->type == IOREQ_TYPE_PIO &&
+          (p->addr & ~3) == 0xcfc &&
+          CF8_ENABLED(cf8)) ||
+         (p->type == IOREQ_TYPE_COPY &&
+          (mmcfg = hvm_mmcfg_find(d, p->addr)) != NULL) )
+    {
+        uint32_t x86_fam;
+        pci_sbdf_t sbdf;
+        unsigned int reg;
+
+        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
+                                                              &sbdf)
+                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
+                                                                &sbdf);
+
+        /* PCI config data cycle */
+        p->addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        /* AMD extended configuration space access? */
+        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
+             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
+             (x86_fam = get_cpu_family(
+                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
+             x86_fam < 0x17 )
+        {
+            uint64_t msr_val;
+
+            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
+                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
+                p->addr |= CF8_ADDR_HI(cf8);
+        }
+        p->type = IOREQ_TYPE_PCI_CONFIG;
+
+    }
+    read_unlock(&d->arch.hvm.mmcfg_lock);
+}
+
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
@@ -1350,57 +1398,36 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
 {
     struct hvm_ioreq_server *s;
-    uint32_t cf8;
     uint8_t type;
-    uint64_t addr;
     unsigned int id;
-    const struct hvm_mmcfg *mmcfg;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return XEN_INVALID_IOSERVID;
 
-    cf8 = d->arch.hvm.pci_cf8;
+    /*
+     * Check and convert the PIO/MMIO ioreq to a PCI config space
+     * access.
+     */
+    convert_pci_ioreq(d, p);
 
-    read_lock(&d->arch.hvm.mmcfg_lock);
-    if ( (p->type == IOREQ_TYPE_PIO &&
-          (p->addr & ~3) == 0xcfc &&
-          CF8_ENABLED(cf8)) ||
-         (p->type == IOREQ_TYPE_COPY &&
-          (mmcfg = hvm_mmcfg_find(d, p->addr)) != NULL) )
+    switch ( p->type )
     {
-        uint32_t x86_fam;
-        pci_sbdf_t sbdf;
-        unsigned int reg;
+    case IOREQ_TYPE_PIO:
+        type = XEN_DMOP_IO_RANGE_PORT;
+        break;
 
-        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
-                                                              &sbdf)
-                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
-                                                                &sbdf);
+    case IOREQ_TYPE_COPY:
+        type = XEN_DMOP_IO_RANGE_MEMORY;
+        break;
 
-        /* PCI config data cycle */
+    case IOREQ_TYPE_PCI_CONFIG:
         type = XEN_DMOP_IO_RANGE_PCI;
-        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
-        /* AMD extended configuration space access? */
-        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
-             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
-             (x86_fam = get_cpu_family(
-                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
-             x86_fam < 0x17 )
-        {
-            uint64_t msr_val;
+        break;
 
-            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
-                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
-                addr |= CF8_ADDR_HI(cf8);
-        }
-    }
-    else
-    {
-        type = (p->type == IOREQ_TYPE_PIO) ?
-                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
-        addr = p->addr;
+    default:
+        ASSERT_UNREACHABLE();
+        return XEN_INVALID_IOSERVID;
     }
-    read_unlock(&d->arch.hvm.mmcfg_lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -1416,7 +1443,7 @@ ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
         unsigned long start, end;
 
         case XEN_DMOP_IO_RANGE_PORT:
-            start = addr;
+            start = p->addr;
             end = start + p->size - 1;
             if ( rangeset_contains_range(r, start, end) )
                 return id;
@@ -1433,12 +1460,8 @@ ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
             break;
 
         case XEN_DMOP_IO_RANGE_PCI:
-            if ( rangeset_contains_singleton(r, addr >> 32) )
-            {
-                p->type = IOREQ_TYPE_PCI_CONFIG;
-                p->addr = addr;
+            if ( rangeset_contains_singleton(r, p->addr >> 32) )
                 return id;
-            }
 
             break;
         }
--
2.22.0
From: Roger Pau Monne
Date: Tue, 3 Sep 2019 18:14:28 +0200
Message-ID: <20190903161428.7159-12-roger.pau@citrix.com>
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 11/11] ioreq: provide support for long-running operations...
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monne

...and switch vPCI to use this infrastructure for long-running physmap
modification operations.

This gets rid of the vPCI-specific modifications made to
handle_hvm_io_completion, and generalizes the support for long-running
operations so that other internal ioreq servers can use it. The support
is implemented as a specific handler that internal ioreq servers can
register and that is called to check for pending work. Returning true
from this handler prevents the vcpu from running until the handler
returns false.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/ioreq.c       | 55 +++++++++++++++++++++++-----
 xen/drivers/vpci/header.c      | 61 ++++++++++++++++++----------------
 xen/drivers/vpci/vpci.c        |  8 ++++-
 xen/include/asm-x86/hvm/vcpu.h |  3 +-
 xen/include/xen/vpci.h         |  6 ----
 5 files changed, 89 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 33c56b880c..caa53dfa84 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -239,16 +239,48 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
-    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
+    FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( hvm_ioreq_is_internal(id) )
+        {
+            if ( vio->io_req.state == STATE_IOREQ_INPROCESS )
+            {
+                ioreq_t req = vio->io_req;
+
+                /*
+                 * Check and convert the PIO/MMIO ioreq to a PCI config space
+                 * access.
+                 */
+                convert_pci_ioreq(d, &req);
+
+                if ( s->handler(v, &req, s->data) == X86EMUL_RETRY )
+                {
+                    /*
+                     * Need to raise a scheduler irq in order to prevent the
+                     * guest vcpu from resuming execution.
+                     *
+                     * Note this is not required for external ioreq operations
+                     * because in that case the vcpu is marked as blocked, but
+                     * this cannot be done for long-running internal
+                     * operations, since it would prevent the vcpu from being
+                     * scheduled and thus the long running operation from
+                     * finishing.
+                     */
+                    raise_softirq(SCHEDULE_SOFTIRQ);
+                    return false;
+                }
+
+                /* Finished processing the ioreq. */
+                if ( hvm_ioreq_needs_completion(&vio->io_req) )
+                    vio->io_req.state = STATE_IORESP_READY;
+                else
+                    vio->io_req.state = STATE_IOREQ_NONE;
+            }
+            continue;
+        }
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -1582,7 +1614,14 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
         return hvm_send_buffered_ioreq(s, proto_p);
 
     if ( hvm_ioreq_is_internal(id) )
-        return s->handler(curr, proto_p, s->data);
+    {
+        int rc = s->handler(curr, proto_p, s->data);
+
+        if ( rc == X86EMUL_RETRY )
+            curr->arch.hvm.hvm_io.io_req.state = STATE_IOREQ_INPROCESS;
+
+        return rc;
+    }
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 3c794f486d..f1c1a69492 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -129,37 +129,42 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
 
 bool vpci_process_pending(struct vcpu *v)
 {
-    if ( v->vpci.mem )
+    struct map_data data = {
+        .d = v->domain,
+        .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
+    };
+    int rc;
+
+    if ( !v->vpci.mem )
     {
-        struct map_data data = {
-            .d = v->domain,
-            .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
-        };
-        int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
-
-        if ( rc == -ERESTART )
-            return true;
-
-        spin_lock(&v->vpci.pdev->vpci->lock);
-        /* Disable memory decoding unconditionally on failure. */
-        modify_decoding(v->vpci.pdev,
-                        rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
-                        !rc && v->vpci.rom_only);
-        spin_unlock(&v->vpci.pdev->vpci->lock);
-
-        rangeset_destroy(v->vpci.mem);
-        v->vpci.mem = NULL;
-        if ( rc )
-            /*
-             * FIXME: in case of failure remove the device from the domain.
-             * Note that there might still be leftover mappings. While this is
-             * safe for Dom0, for DomUs the domain will likely need to be
-             * killed in order to avoid leaking stale p2m mappings on
-             * failure.
-             */
-            vpci_remove_device(v->vpci.pdev);
+        ASSERT_UNREACHABLE();
+        return false;
     }
 
+    rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
+
+    if ( rc == -ERESTART )
+        return true;
+
+    spin_lock(&v->vpci.pdev->vpci->lock);
+    /* Disable memory decoding unconditionally on failure. */
+    modify_decoding(v->vpci.pdev,
+                    rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
+                    !rc && v->vpci.rom_only);
+    spin_unlock(&v->vpci.pdev->vpci->lock);
+
+    rangeset_destroy(v->vpci.mem);
+    v->vpci.mem = NULL;
+    if ( rc )
+        /*
+         * FIXME: in case of failure remove the device from the domain.
+         * Note that there might still be leftover mappings. While this is
+         * safe for Dom0, for DomUs the domain will likely need to be
+         * killed in order to avoid leaking stale p2m mappings on
+         * failure.
+         */
+        vpci_remove_device(v->vpci.pdev);
+
     return false;
 }
 
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 5664020c2d..6069dff612 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -498,6 +498,12 @@ static int ioreq_handler(struct vcpu *v, ioreq_t *req, void *data)
         return X86EMUL_UNHANDLEABLE;
     }
 
+    if ( v->vpci.mem )
+    {
+        ASSERT(req->state == STATE_IOREQ_INPROCESS);
+        return vpci_process_pending(v) ? X86EMUL_RETRY : X86EMUL_OKAY;
+    }
+
     sbdf.sbdf = req->addr >> 32;
 
     if ( req->dir )
@@ -505,7 +511,7 @@ static int ioreq_handler(struct vcpu *v, ioreq_t *req, void *data)
     else
         write(sbdf, req->addr, req->size, req->data);
 
-    return X86EMUL_OKAY;
+    return v->vpci.mem ? X86EMUL_RETRY : X86EMUL_OKAY;
 }
 
 int vpci_register_ioreq(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 38f5c2bb9b..4563746466 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -92,7 +92,8 @@ struct hvm_vcpu_io {
 
 static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
 {
-    return ioreq->state == STATE_IOREQ_READY &&
+    return (ioreq->state == STATE_IOREQ_READY ||
+            ioreq->state == STATE_IOREQ_INPROCESS) &&
            !ioreq->data_is_ptr &&
            (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
 }
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 36f435ed5b..a65491e0c9 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -225,12 +225,6 @@ static inline int vpci_register_ioreq(struct domain *d)
 }
 
 static inline void vpci_dump_msi(void) { }
-
-static inline bool vpci_process_pending(struct vcpu *v)
-{
-    ASSERT_UNREACHABLE();
-    return false;
-}
 #endif
 
 #endif
--
2.22.0