x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible; x-record-type="v=spf1"; x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83 ~all" Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender authenticity information available from domain of postmaster@mail.citrix.com) identity=helo; client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com; envelope-from="roger.pau@citrix.com"; x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible IronPort-SDR: JUFmZo9N7wVLT7SRJiZC6ZHCvyss0ikSARoRAAf2BD5hWH2AJIAjrDmGDrwlY+s94DN3DFN88r neStFtCIyw1+U9JJdgVPkkPdoyNKAmow3kfgYinsxfMOI3qslgzCu+aEwn8cSK4MHwiUtzOjrQ 43P99xBx+D+UzRIG1xTWTTqrAEElYyWC7eWY/ySdQ5VdeXyIHF1saNLg3JAeVc0mE+98JOpl6T OJNpVU4eSL3PrLVFMkyd84/u4RZo3XeKjO8Llgf/p3t2fL9C27yVVhcngbvWX1q2Y/nC2JN+Ll eb8= X-SBRS: 2.7 X-MesageID: 5068915 X-Ironport-Server: esa2.hc3370-68.iphmx.com X-Remote-IP: 162.221.158.21 X-Policy: $RELAYED X-IronPort-AV: E=Sophos;i="5.64,463,1559534400"; d="scan'208";a="5068915" From: Roger Pau Monne To: Date: Tue, 3 Sep 2019 18:14:28 +0200 Message-ID: <20190903161428.7159-12-roger.pau@citrix.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com> References: <20190903161428.7159-1-roger.pau@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v2 11/11] ioreq: provide support for long-running operations... X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Stefano Stabellini , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Julien Grall , Paul Durrant , Jan Beulich , Roger Pau Monne Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) ...and switch vPCI to use this infrastructure for long running physmap modification operations. This allows to get rid of the vPCI specific modifications done to handle_hvm_io_completion and allows generalizing the support for long-running operations to other internal ioreq servers. Such support is implemented as a specific handler that can be registers by internal ioreq servers and that will be called to check for pending work. Returning true from this handler will prevent the vcpu from running until the handler returns false. 
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/ioreq.c       | 55 +++++++++++++++++++++++++-----
 xen/drivers/vpci/header.c      | 61 ++++++++++++++++++----------------
 xen/drivers/vpci/vpci.c        |  8 ++++-
 xen/include/asm-x86/hvm/vcpu.h |  3 +-
 xen/include/xen/vpci.h         |  6 ----
 5 files changed, 89 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 33c56b880c..caa53dfa84 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -239,16 +239,48 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
-    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
+    FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( hvm_ioreq_is_internal(id) )
+        {
+            if ( vio->io_req.state == STATE_IOREQ_INPROCESS )
+            {
+                ioreq_t req = vio->io_req;
+
+                /*
+                 * Check and convert the PIO/MMIO ioreq to a PCI config space
+                 * access.
+                 */
+                convert_pci_ioreq(d, &req);
+
+                if ( s->handler(v, &req, s->data) == X86EMUL_RETRY )
+                {
+                    /*
+                     * Need to raise a scheduler irq in order to prevent the
+                     * guest vcpu from resuming execution.
+                     *
+                     * Note this is not required for external ioreq operations
+                     * because in that case the vcpu is marked as blocked, but
+                     * this cannot be done for long-running internal
+                     * operations, since it would prevent the vcpu from being
+                     * scheduled and thus the long running operation from
+                     * finishing.
+                     */
+                    raise_softirq(SCHEDULE_SOFTIRQ);
+                    return false;
+                }
+
+                /* Finished processing the ioreq. */
+                if ( hvm_ioreq_needs_completion(&vio->io_req) )
+                    vio->io_req.state = STATE_IORESP_READY;
+                else
+                    vio->io_req.state = STATE_IOREQ_NONE;
+            }
+            continue;
+        }
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -1582,7 +1614,14 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
         return hvm_send_buffered_ioreq(s, proto_p);
 
     if ( hvm_ioreq_is_internal(id) )
-        return s->handler(curr, proto_p, s->data);
+    {
+        int rc = s->handler(curr, proto_p, s->data);
+
+        if ( rc == X86EMUL_RETRY )
+            curr->arch.hvm.hvm_io.io_req.state = STATE_IOREQ_INPROCESS;
+
+        return rc;
+    }
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 3c794f486d..f1c1a69492 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -129,37 +129,42 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
 
 bool vpci_process_pending(struct vcpu *v)
 {
-    if ( v->vpci.mem )
+    struct map_data data = {
+        .d = v->domain,
+        .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
+    };
+    int rc;
+
+    if ( !v->vpci.mem )
     {
-        struct map_data data = {
-            .d = v->domain,
-            .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
-        };
-        int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
-
-        if ( rc == -ERESTART )
-            return true;
-
-        spin_lock(&v->vpci.pdev->vpci->lock);
-        /* Disable memory decoding unconditionally on failure. */
-        modify_decoding(v->vpci.pdev,
-                        rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
-                        !rc && v->vpci.rom_only);
-        spin_unlock(&v->vpci.pdev->vpci->lock);
-
-        rangeset_destroy(v->vpci.mem);
-        v->vpci.mem = NULL;
-        if ( rc )
-            /*
-             * FIXME: in case of failure remove the device from the domain.
-             * Note that there might still be leftover mappings. While this is
-             * safe for Dom0, for DomUs the domain will likely need to be
-             * killed in order to avoid leaking stale p2m mappings on
-             * failure.
-             */
-            vpci_remove_device(v->vpci.pdev);
+        ASSERT_UNREACHABLE();
+        return false;
     }
 
+    rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
+
+    if ( rc == -ERESTART )
+        return true;
+
+    spin_lock(&v->vpci.pdev->vpci->lock);
+    /* Disable memory decoding unconditionally on failure. */
+    modify_decoding(v->vpci.pdev,
+                    rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
+                    !rc && v->vpci.rom_only);
+    spin_unlock(&v->vpci.pdev->vpci->lock);
+
+    rangeset_destroy(v->vpci.mem);
+    v->vpci.mem = NULL;
+    if ( rc )
+        /*
+         * FIXME: in case of failure remove the device from the domain.
+         * Note that there might still be leftover mappings. While this is
+         * safe for Dom0, for DomUs the domain will likely need to be
+         * killed in order to avoid leaking stale p2m mappings on
+         * failure.
+         */
+        vpci_remove_device(v->vpci.pdev);
+
     return false;
 }
 
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 5664020c2d..6069dff612 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -498,6 +498,12 @@ static int ioreq_handler(struct vcpu *v, ioreq_t *req, void *data)
         return X86EMUL_UNHANDLEABLE;
     }
 
+    if ( v->vpci.mem )
+    {
+        ASSERT(req->state == STATE_IOREQ_INPROCESS);
+        return vpci_process_pending(v) ? X86EMUL_RETRY : X86EMUL_OKAY;
+    }
+
     sbdf.sbdf = req->addr >> 32;
 
     if ( req->dir )
@@ -505,7 +511,7 @@ static int ioreq_handler(struct vcpu *v, ioreq_t *req, void *data)
     else
         write(sbdf, req->addr, req->size, req->data);
 
-    return X86EMUL_OKAY;
+    return v->vpci.mem ? X86EMUL_RETRY : X86EMUL_OKAY;
 }
 
 int vpci_register_ioreq(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 38f5c2bb9b..4563746466 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -92,7 +92,8 @@ struct hvm_vcpu_io {
 
 static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
 {
-    return ioreq->state == STATE_IOREQ_READY &&
+    return (ioreq->state == STATE_IOREQ_READY ||
+            ioreq->state == STATE_IOREQ_INPROCESS) &&
            !ioreq->data_is_ptr &&
            (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
 }
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 36f435ed5b..a65491e0c9 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -225,12 +225,6 @@ static inline int vpci_register_ioreq(struct domain *d)
 }
 
 static inline void vpci_dump_msi(void) { }
-
-static inline bool vpci_process_pending(struct vcpu *v)
-{
-    ASSERT_UNREACHABLE();
-    return false;
-}
 #endif
 
 #endif
-- 
2.22.0