From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Julien Grall, Stefano Stabellini, Wei Liu,
    Julien Grall
Subject: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common
Date: Thu, 15 Oct 2020 19:44:12 +0300
Message-Id: <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch
makes some preparations to x86/hvm/ioreq.c before moving it to the
common code. This way we will get a verbatim copy for the code
movement in the subsequent patch (arch/x86/hvm/ioreq.c will be
*just* renamed to common/ioreq.c).

This patch does the following:
1. Introduces *inline* arch_hvm_ioreq_init(), arch_hvm_ioreq_destroy(),
   arch_hvm_io_completion(), arch_hvm_destroy_ioreq_server() and
   hvm_ioreq_server_get_type_addr() to abstract arch-specific material.
2. Makes hvm_map_mem_type_to_ioreq_server() *inline*. It is not going
   to be called from the common code.
3. Makes get_ioreq_server() global. It is going to be called from
   a few places.
4. Adds IOREQ_STATUS_* #define-s and updates the candidates for moving.
5. Re-orders #include-s alphabetically.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.
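As an illustration only (a minimal, compilable toy model, not Xen code:
the int domain id, the printf bodies and the simplified signatures are
stand-ins), this is the shape of the split the patch aims for: common
logic calls arch_* hooks which each architecture supplies as static
inlines in its own header:

#include <stdbool.h>
#include <stdio.h>

/* What an arch header (cf. asm-x86/hvm/ioreq.h) would supply. */
static inline void arch_hvm_ioreq_init(int domid)
{
    printf("x86: register port 0xcf8 handler for d%d\n", domid);
}

static inline bool arch_hvm_ioreq_destroy(int domid)
{
    printf("x86: relocate port 0xcf8 handler for d%d\n", domid);
    return true;
}

/* What the (future) common ioreq.c keeps: arch-agnostic logic only. */
static void hvm_ioreq_init(int domid)
{
    /* ... common initialisation (locks, server table) ... */
    arch_hvm_ioreq_init(domid);
}

static void hvm_destroy_all_ioreq_servers(int domid)
{
    if ( !arch_hvm_ioreq_destroy(domid) )
        return;
    /* ... common teardown of all ioreq servers ... */
}

int main(void)
{
    hvm_ioreq_init(1);
    hvm_destroy_all_ioreq_servers(1);
    return 0;
}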
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
   - fold the check of p->type into hvm_get_ioreq_server_range_type()
     and make it return success/failure
   - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
     in arch/x86/hvm/ioreq.c
   - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make arch functions inline and put them into the arch header
     to achieve a true rename by the subsequent patch
   - return void in arch_hvm_destroy_ioreq_server()
   - return bool in arch_hvm_ioreq_destroy()
   - bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
   - rename IOREQ_IO* to IOREQ_STATUS*
   - remove *handle* from arch_handle_hvm_io_completion()
   - re-order #include-s alphabetically
   - rename hvm_get_ioreq_server_range_type() to hvm_ioreq_server_get_type_addr()
     and add "const" to several arguments
---
 xen/arch/x86/hvm/ioreq.c        | 153 ++++-------------------------------
 xen/include/asm-x86/hvm/ioreq.h | 165 +++++++++++++++++++++++++++++++++++-
 2 files changed, 184 insertions(+), 134 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df..d3433d7 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1,5 +1,5 @@
 /*
- * hvm/io.c: hardware virtual machine I/O emulation
+ * ioreq.c: hardware virtual machine I/O emulation
  *
  * Copyright (c) 2016 Citrix Systems Inc.
  *
@@ -17,21 +17,18 @@
  */
 
 #include
+#include
+#include
 #include
+#include
 #include
-#include
+#include
 #include
-#include
 #include
-#include
-#include
-#include
+#include
 #include
 
-#include
-#include
 #include
-#include
 
 #include
 #include
@@ -48,8 +45,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]
 
-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
+struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
+                                          unsigned int id)
 {
     if ( id >= MAX_NR_IOREQ_SERVERS )
         return NULL;
@@ -209,19 +206,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return handle_pio(vio->io_req.addr, vio->io_req.size,
                           vio->io_req.dir);
 
-    case HVMIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
     default:
-        ASSERT_UNREACHABLE();
-        break;
+        return arch_hvm_io_completion(io_completion);
     }
 
     return true;
@@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     domain_pause(d);
 
-    p2m_set_ioreq_server(d, 0, s);
+    arch_hvm_destroy_ioreq_server(s);
 
     hvm_ioreq_server_disable(s);
 
@@ -1080,54 +1066,6 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-/*
- * Map or unmap an ioreq server to specific memory type. For now, only
- * HVMMEM_ioreq_server is supported, and in the future new types can be
- * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
And - * currently, only write operations are to be forwarded to an ioreq server. - * Support for the emulation of read operations can be added when an ioreq - * server has such requirement in the future. - */ -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags) -{ - struct hvm_ioreq_server *s; - int rc; - - if ( type !=3D HVMMEM_ioreq_server ) - return -EINVAL; - - if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - rc =3D p2m_set_ioreq_server(d, flags, s); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - if ( rc =3D=3D 0 && flags =3D=3D 0 ) - { - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - if ( read_atomic(&p2m->ioreq.entry_count) ) - p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); - } - - return rc; -} - int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, bool enabled) { @@ -1215,7 +1153,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) struct hvm_ioreq_server *s; unsigned int id; =20 - if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) ) + if ( !arch_hvm_ioreq_destroy(d) ) return; =20 spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -1243,50 +1181,13 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(st= ruct domain *d, ioreq_t *p) { struct hvm_ioreq_server *s; - uint32_t cf8; uint8_t type; uint64_t addr; unsigned int id; =20 - if ( p->type !=3D IOREQ_TYPE_COPY && p->type !=3D IOREQ_TYPE_PIO ) + if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) ) return NULL; =20 - cf8 =3D d->arch.hvm.pci_cf8; - - if ( p->type =3D=3D IOREQ_TYPE_PIO && - (p->addr & ~3) =3D=3D 0xcfc && - CF8_ENABLED(cf8) ) - { - uint32_t x86_fam; - pci_sbdf_t sbdf; - unsigned int reg; - - reg =3D hvm_pci_decode_addr(cf8, p->addr, &sbdf); - - /* PCI config data cycle */ - type =3D XEN_DMOP_IO_RANGE_PCI; - addr =3D ((uint64_t)sbdf.sbdf << 32) | reg; - /* AMD extended configuration space access? */ - if ( CF8_ADDR_HI(cf8) && - d->arch.cpuid->x86_vendor =3D=3D X86_VENDOR_AMD && - (x86_fam =3D get_cpu_family( - d->arch.cpuid->basic.raw_fms, NULL, NULL)) >=3D 0x10 && - x86_fam < 0x17 ) - { - uint64_t msr_val; - - if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) && - (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) ) - addr |=3D CF8_ADDR_HI(cf8); - } - } - else - { - type =3D (p->type =3D=3D IOREQ_TYPE_PIO) ? - XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; - addr =3D p->addr; - } - FOR_EACH_IOREQ_SERVER(d, id, s) { struct rangeset *r; @@ -1351,7 +1252,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_s= erver *s, ioreq_t *p) pg =3D iorp->va; =20 if ( !pg ) - return X86EMUL_UNHANDLEABLE; + return IOREQ_STATUS_UNHANDLED; =20 /* * Return 0 for the cases we can't deal with: @@ -1381,7 +1282,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_s= erver *s, ioreq_t *p) break; default: gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); - return X86EMUL_UNHANDLEABLE; + return IOREQ_STATUS_UNHANDLED; } =20 spin_lock(&s->bufioreq_lock); @@ -1391,7 +1292,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_s= erver *s, ioreq_t *p) { /* The queue is full: send the iopacket through the normal path. 
         spin_unlock(&s->bufioreq_lock);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }
 
     pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
@@ -1422,7 +1323,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
     spin_unlock(&s->bufioreq_lock);
 
-    return X86EMUL_OKAY;
+    return IOREQ_STATUS_HANDLED;
 }
 
 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
@@ -1438,7 +1339,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
         return hvm_send_buffered_ioreq(s, proto_p);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return X86EMUL_RETRY;
+        return IOREQ_STATUS_RETRY;
 
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
                           list_entry )
@@ -1478,11 +1379,11 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
             notify_via_xen_event_channel(d, port);
 
             sv->pending = true;
-            return X86EMUL_RETRY;
+            return IOREQ_STATUS_RETRY;
         }
     }
 
-    return X86EMUL_UNHANDLEABLE;
+    return IOREQ_STATUS_UNHANDLED;
 }
 
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
@@ -1496,30 +1397,18 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
             failed++;
     }
 
     return failed;
 }
 
-static int hvm_access_cf8(
-    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
-{
-    struct domain *d = current->domain;
-
-    if ( dir == IOREQ_WRITE && bytes == 4 )
-        d->arch.hvm.pci_cf8 = *val;
-
-    /* We always need to fall through to the catch all emulator */
-    return X86EMUL_UNHANDLEABLE;
-}
-
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);
 
-    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+    arch_hvm_ioreq_init(d);
 }
 
 /*
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e9..376e2ef 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,6 +19,165 @@
 #ifndef __ASM_X86_HVM_IOREQ_H__
 #define __ASM_X86_HVM_IOREQ_H__
 
+#include
+#include
+
+#include
+
+struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
+                                          unsigned int id);
+
+static inline bool arch_hvm_io_completion(enum hvm_io_completion io_completion)
+{
+    switch ( io_completion )
+    {
+    case HVMIO_realmode_completion:
+    {
+        struct hvm_emulate_ctxt ctxt;
+
+        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+        vmx_realmode_emulate_one(&ctxt);
+        hvm_emulate_writeback(&ctxt);
+
+        break;
+    }
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+
+    return true;
+}
+
+/* Called when target domain is paused */
+static inline void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
+{
+    p2m_set_ioreq_server(s->target, 0, s);
+}
+
+/*
+ * Map or unmap an ioreq server to specific memory type. For now, only
+ * HVMMEM_ioreq_server is supported, and in the future new types can be
+ * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
+ * currently, only write operations are to be forwarded to an ioreq server.
+ * Support for the emulation of read operations can be added when an ioreq
+ * server has such requirement in the future.
+ */
+static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
+                                                   ioservid_t id,
+                                                   uint32_t type,
+                                                   uint32_t flags)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    if ( type != HVMMEM_ioreq_server )
+        return -EINVAL;
+
+    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    rc = p2m_set_ioreq_server(d, flags, s);
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    if ( rc == 0 && flags == 0 )
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
+    }
+
+    return rc;
+}
+
+static inline int hvm_ioreq_server_get_type_addr(const struct domain *d,
+                                                 const ioreq_t *p,
+                                                 uint8_t *type,
+                                                 uint64_t *addr)
+{
+    uint32_t cf8 = d->arch.hvm.pci_cf8;
+
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return -EINVAL;
+
+    if ( p->type == IOREQ_TYPE_PIO &&
+         (p->addr & ~3) == 0xcfc &&
+         CF8_ENABLED(cf8) )
+    {
+        uint32_t x86_fam;
+        pci_sbdf_t sbdf;
+        unsigned int reg;
+
+        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
+
+        /* PCI config data cycle */
+        *type = XEN_DMOP_IO_RANGE_PCI;
+        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        /* AMD extended configuration space access? */
+        if ( CF8_ADDR_HI(cf8) &&
+             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
+             (x86_fam = get_cpu_family(
+                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
+             x86_fam < 0x17 )
+        {
+            uint64_t msr_val;
+
+            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
+                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
+                *addr |= CF8_ADDR_HI(cf8);
+        }
+    }
+    else
+    {
+        *type = (p->type == IOREQ_TYPE_PIO) ?
+                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+        *addr = p->addr;
+    }
+
+    return 0;
+}
+
+static inline int hvm_access_cf8(
+    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
+{
+    struct domain *d = current->domain;
+
+    if ( dir == IOREQ_WRITE && bytes == 4 )
+        d->arch.hvm.pci_cf8 = *val;
+
+    /* We always need to fall through to the catch all emulator */
+    return X86EMUL_UNHANDLEABLE;
+}
+
+static inline void arch_hvm_ioreq_init(struct domain *d)
+{
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+}
+
+static inline bool arch_hvm_ioreq_destroy(struct domain *d)
+{
+    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
+        return false;
+
+    return true;
+}
+
 bool hvm_io_pending(struct vcpu *v);
 bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
@@ -38,8 +197,6 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
                                          uint64_t end);
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags);
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool enabled);
 
@@ -55,6 +212,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+#define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
+#define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
+#define IOREQ_STATUS_RETRY       X86EMUL_RETRY
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.7.4
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Andrew Cooper, George Dunlap, Ian Jackson,
    Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
    Roger Pau Monné, Paul Durrant, Tim Deegan, Julien Grall
Subject: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
Date: Thu, 15 Oct 2020 19:44:13 +0300
Message-Id: <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch
moves the previously prepared x86/hvm/ioreq.c to the common code.

The common IOREQ feature is supposed to be built with the
IOREQ_SERVER option enabled, which is selected by x86's HVM config
for now.

In order to avoid having a gigantic patch here, the subsequent
patches will update the remaining bits in the common code step by step:
- Make IOREQ related structs/materials common
- Drop the "hvm" prefixes and infixes
- Remove the layering violation by moving corresponding fields
  out of *arch.hvm* or abstracting away accesses to them

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.
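As an illustration only (a minimal, compilable toy model, not Xen code:
the stub status values and send_ioreq() body are stand-ins), this is
why the file can move verbatim: the common code only names the
IOREQ_STATUS_* constants, which the arch header maps onto its native
return codes (X86EMUL_* on x86, per the previous patch):

#include <stdio.h>

/* Stand-ins for x86's emulator status codes. */
#define X86EMUL_OKAY          0
#define X86EMUL_UNHANDLEABLE  1
#define X86EMUL_RETRY         2

/* The mapping an arch header (cf. asm-x86/hvm/ioreq.h) provides. */
#define IOREQ_STATUS_HANDLED   X86EMUL_OKAY
#define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE
#define IOREQ_STATUS_RETRY     X86EMUL_RETRY

/* Common code only ever names IOREQ_STATUS_*. */
static int send_ioreq(int queue_full)
{
    return queue_full ? IOREQ_STATUS_UNHANDLED : IOREQ_STATUS_HANDLED;
}

int main(void)
{
    printf("status: %d\n", send_ioreq(0));
    return 0;
}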
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is on review:
https://patchwork.kernel.org/patch/11816689/
***

Changes RFC -> V1:
   - was split into three patches:
     - x86/ioreq: Prepare IOREQ feature for making it common
     - xen/ioreq: Make x86's IOREQ feature common
     - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
   - update MAINTAINERS file
   - do not use a separate subdir for the IOREQ stuff, move it to:
     - xen/common/ioreq.c
     - xen/include/xen/ioreq.h
   - update x86's files to include xen/ioreq.h
   - remove unneeded headers in arch/x86/hvm/ioreq.c
   - re-order the headers alphabetically in common/ioreq.c
   - update common/ioreq.c according to the newly introduced arch functions:
     arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - do everything needed in the previous patch to achieve
     a true rename here
   - don't include unnecessary headers from asm-x86/hvm/ioreq.h
     and xen/ioreq.h
   - use __XEN_IOREQ_H__ instead of __IOREQ_H__
   - move get_ioreq_server() to common/ioreq.c
---
 MAINTAINERS                     |    8 +-
 xen/arch/x86/Kconfig            |    1 +
 xen/arch/x86/hvm/Makefile       |    1 -
 xen/arch/x86/hvm/ioreq.c        | 1422 ---------------------------------
 xen/arch/x86/mm.c               |    2 +-
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/common/Kconfig              |    3 +
 xen/common/Makefile             |    1 +
 xen/common/ioreq.c              | 1422 +++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |   39 +-
 xen/include/xen/ioreq.h         |   71 ++
 11 files changed, 1509 insertions(+), 1463 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/ioreq.c
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/xen/ioreq.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 26c5382..cbb00d6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -333,6 +333,13 @@ X: xen/drivers/passthrough/vtd/
 X: xen/drivers/passthrough/device_tree.c
 F: xen/include/xen/iommu.h
 
+I/O EMULATION (IOREQ)
+M: Paul Durrant
+S: Supported
+F: xen/common/ioreq.c
+F: xen/include/xen/ioreq.h
+F: xen/include/public/hvm/ioreq.h
+
 KCONFIG
 M: Doug Goldstein
 S: Supported
@@ -549,7 +556,6 @@ F: xen/arch/x86/hvm/ioreq.c
 F: xen/include/asm-x86/hvm/emulate.h
 F: xen/include/asm-x86/hvm/io.h
 F: xen/include/asm-x86/hvm/ioreq.h
-F: xen/include/public/hvm/ioreq.h
 
 X86 MEMORY MANAGEMENT
 M: Jan Beulich
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 24868aa..abe0fce 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config PV_LINEAR_PT
 
 config HVM
 	def_bool !PV_SHIM_EXCLUSIVE
+	select IOREQ_SERVER
 	prompt "HVM support"
 	---help---
 	  Interfaces to support HVM domains. HVM domains require hardware
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 3464191..0c1eff2 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -13,7 +13,6 @@ obj-y += hvm.o
 obj-y += hypercall.o
 obj-y += intercept.o
 obj-y += io.o
-obj-y += ioreq.o
 obj-y += irq.o
 obj-y += monitor.o
 obj-y += mtrr.o
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
deleted file mode 100644
index d3433d7..0000000
--- a/xen/arch/x86/hvm/ioreq.c
+++ /dev/null
@@ -1,1422 +0,0 @@
-/*
- * ioreq.c: hardware virtual machine I/O emulation
- *
- * Copyright (c) 2016 Citrix Systems Inc.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include
-
-#include
-#include
-
-static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
-{
-    ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
-
-    d->arch.hvm.ioreq_server.server[id] = s;
-}
-
-#define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
-
-struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                          unsigned int id)
-{
-    if ( id >= MAX_NR_IOREQ_SERVERS )
-        return NULL;
-
-    return GET_IOREQ_SERVER(d, id);
-}
-
-/*
- * Iterate over all possible ioreq servers.
- *
- * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
- *       This is a semantic that previously existed when ioreq servers
- *       were held in a linked list.
- */
-#define FOR_EACH_IOREQ_SERVER(d, id, s) \
-    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
-        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
-            continue; \
-        else
-
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
-{
-    shared_iopage_t *p = s->ioreq.va;
-
-    ASSERT((v == current) || !vcpu_runnable(v));
-    ASSERT(p != NULL);
-
-    return &p->vcpu_ioreq[v->vcpu_id];
-}
-
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
-{
-    struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct hvm_ioreq_vcpu *sv;
-
-        list_for_each_entry ( sv,
-                              &s->ioreq_vcpu_list,
-                              list_entry )
-        {
-            if ( sv->vcpu == v && sv->pending )
-            {
-                if ( srvp )
-                    *srvp = s;
-                return sv;
-            }
-        }
-    }
-
-    return NULL;
-}
-
-bool hvm_io_pending(struct vcpu *v)
-{
-    return get_pending_vcpu(v, NULL);
-}
-
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
-{
-    unsigned int prev_state = STATE_IOREQ_NONE;
-    unsigned int state = p->state;
-    uint64_t data = ~0;
-
-    smp_rmb();
-
-    /*
-     * The only reason we should see this condition be false is when an
-     * emulator dying races with I/O being requested.
-     */
-    while ( likely(state != STATE_IOREQ_NONE) )
-    {
-        if ( unlikely(state < prev_state) )
-        {
-            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
-                     prev_state, state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        switch ( prev_state = state )
-        {
-        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
-            p->state = STATE_IOREQ_NONE;
-            data = p->data;
-            break;
-
-        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
-        case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(sv->ioreq_evtchn,
-                                      ({ state = p->state;
-                                         smp_rmb();
-                                         state != prev_state; }));
-            continue;
-
-        default:
-            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        break;
-    }
-
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
-        p->data = data;
-
-    sv->pending = false;
-
-    return true;
-}
-
-bool handle_hvm_io_completion(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
-    enum hvm_io_completion io_completion;
-
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
-    sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
-        return false;
-
-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
-        STATE_IORESP_READY : STATE_IOREQ_NONE;
-
-    msix_write_completion(v);
-    vcpu_end_shutdown_deferral(v);
-
-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
-
-    switch ( io_completion )
-    {
-    case HVMIO_no_completion:
-        break;
-
-    case HVMIO_mmio_completion:
-        return handle_mmio();
-
-    case HVMIO_pio_completion:
-        return handle_pio(vio->io_req.addr, vio->io_req.size,
-                          vio->io_req.dir);
-
-    default:
-        return arch_hvm_io_completion(io_completion);
-    }
-
-    return true;
-}
-
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
-
-    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
-    {
-        if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
-            return _gfn(d->arch.hvm.params[i]);
-    }
-
-    return INVALID_GFN;
-}
-
-static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    for ( i = 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ )
-    {
-        if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) )
-            return _gfn(d->arch.hvm.ioreq_gfn.base + i);
-    }
-
-    /*
-     * If we are out of 'normal' GFNs then we may still have a 'legacy'
-     * GFN available.
-     */
-    return hvm_alloc_legacy_ioreq_gfn(s);
-}
-
-static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
-                                      gfn_t gfn)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
-    {
-        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
-             break;
-    }
-    if ( i > HVM_PARAM_BUFIOREQ_PFN )
-        return false;
-
-    set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask);
-    return true;
-}
-
-static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
-{
-    struct domain *d = s->target;
-    unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
-
-    ASSERT(!gfn_eq(gfn, INVALID_GFN));
-
-    if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
-    {
-        ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8);
-        set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
-    }
-}
-
-static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
-
-    destroy_ring_for_helper(&iorp->va, iorp->page);
-    iorp->page = NULL;
-
-    hvm_free_ioreq_gfn(s, iorp->gfn);
-    iorp->gfn = INVALID_GFN;
-}
-
-static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    int rc;
-
-    if ( iorp->page )
-    {
-        /*
-         * If a page has already been allocated (which will happen on
-         * demand if hvm_get_ioreq_server_frame() is called), then
-         * mapping a guest frame is not permitted.
-         */
-        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
-
-    if ( d->is_dying )
-        return -EINVAL;
-
-    iorp->gfn = hvm_alloc_ioreq_gfn(s);
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return -ENOMEM;
-
-    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
-                                 &iorp->va);
-
-    if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
-
-    return rc;
-}
-
-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-
-    if ( iorp->page )
-    {
-        /*
-         * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
-         * allocating a page is not permitted.
-         */
-        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
-
-    page = alloc_domheap_page(s->target, MEMF_no_refcount);
-
-    if ( !page )
-        return -ENOMEM;
-
-    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(s->emulator);
-        return -ENODATA;
-    }
-
-    iorp->va = __map_domain_page_global(page);
-    if ( !iorp->va )
-        goto fail;
-
-    iorp->page = page;
-    clear_page(iorp->va);
-    return 0;
-
- fail:
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-
-    return -ENOMEM;
-}
-
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
-
-    if ( !page )
-        return;
-
-    iorp->page = NULL;
-
-    unmap_domain_page_global(iorp->va);
-    iorp->va = NULL;
-
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-}
-
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
-{
-    const struct hvm_ioreq_server *s;
-    unsigned int id;
-    bool found = false;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
-        {
-            found = true;
-            break;
-        }
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return found;
-}
-
-static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
-
-    if ( guest_physmap_remove_page(d, iorp->gfn,
-                                   page_to_mfn(iorp->page), 0) )
-        domain_crash(d);
-    clear_page(iorp->va);
-}
-
-static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    int rc;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return 0;
-
-    clear_page(iorp->va);
-
-    rc = guest_physmap_add_page(d, iorp->gfn,
-                                page_to_mfn(iorp->page), 0);
-    if ( rc == 0 )
-        paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
-
-    return rc;
-}
-
-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
-{
-    ASSERT(spin_is_locked(&s->lock));
-
-    if ( s->ioreq.va != NULL )
-    {
-        ioreq_t *p = get_ioreq(s, sv->vcpu);
-
-        p->vp_eport = sv->ioreq_evtchn;
-    }
-}
-
-#define HANDLE_BUFIOREQ(s) \
-    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
-
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
-                                     struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-    int rc;
-
-    sv = xzalloc(struct hvm_ioreq_vcpu);
-
-    rc = -ENOMEM;
-    if ( !sv )
-        goto fail1;
-
-    spin_lock(&s->lock);
-
-    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
-                                         s->emulator->domain_id, NULL);
-    if ( rc < 0 )
-        goto fail2;
-
-    sv->ioreq_evtchn = rc;
-
-    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-    {
-        rc = alloc_unbound_xen_event_channel(v->domain, 0,
-                                             s->emulator->domain_id, NULL);
-        if ( rc < 0 )
-            goto fail3;
-
-        s->bufioreq_evtchn = rc;
-    }
-
-    sv->vcpu = v;
-
-    list_add(&sv->list_entry, &s->ioreq_vcpu_list);
-
-    if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
-
-    spin_unlock(&s->lock);
-    return 0;
-
- fail3:
-    free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
- fail2:
-    spin_unlock(&s->lock);
-    xfree(sv);
-
- fail1:
-    return rc;
-}
-
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
-                                         struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-    {
-        if ( sv->vcpu != v )
-            continue;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-        break;
-    }
-
-    spin_unlock(&s->lock);
-}
-
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
-{
-    struct hvm_ioreq_vcpu *sv, *next;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry_safe ( sv,
-                               next,
-                               &s->ioreq_vcpu_list,
-                               list_entry )
-    {
-        struct vcpu *v = sv->vcpu;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-    }
-
-    spin_unlock(&s->lock);
-}
-
-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
-{
-    int rc;
-
-    rc = hvm_map_ioreq_gfn(s, false);
-
-    if ( !rc && HANDLE_BUFIOREQ(s) )
-        rc = hvm_map_ioreq_gfn(s, true);
-
-    if ( rc )
-        hvm_unmap_ioreq_gfn(s, false);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
-{
-    hvm_unmap_ioreq_gfn(s, true);
-    hvm_unmap_ioreq_gfn(s, false);
-}
-
-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
-{
-    int rc;
-
-    rc = hvm_alloc_ioreq_mfn(s, false);
-
-    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
-
-    if ( rc )
-        hvm_free_ioreq_mfn(s, false);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
-{
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
-}
-
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
-{
-    unsigned int i;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-        rangeset_destroy(s->range[i]);
-}
-
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
-                                            ioservid_t id)
-{
-    unsigned int i;
-    int rc;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-    {
-        char *name;
-
-        rc = asprintf(&name, "ioreq_server %d %s", id,
-                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
-                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
-                      (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" :
-                      "");
-        if ( rc )
-            goto fail;
-
-        s->range[i] = rangeset_new(s->target, name,
-                                   RANGESETF_prettyprint_hex);
-
-        xfree(name);
-
-        rc = -ENOMEM;
-        if ( !s->range[i] )
-            goto fail;
-
-        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
-    }
-
-    return 0;
-
- fail:
-    hvm_ioreq_server_free_rangesets(s);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
-{
-    struct hvm_ioreq_vcpu *sv;
-
-    spin_lock(&s->lock);
-
-    if ( s->enabled )
-        goto done;
-
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
-
-    s->enabled = true;
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
-
-  done:
-    spin_unlock(&s->lock);
-}
-
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
-{
-    spin_lock(&s->lock);
-
-    if ( !s->enabled )
-        goto done;
-
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
-
-    s->enabled = false;
-
-  done:
-    spin_unlock(&s->lock);
-}
-
-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
-                                 struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
-{
-    struct domain *currd = current->domain;
-    struct vcpu *v;
-    int rc;
-
-    s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
-    spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
-
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
-    if ( rc )
-        return rc;
-
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
-    }
-
-    return 0;
-
- fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-    return rc;
-}
-
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
-{
-    ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-}
-
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int i;
-    int rc;
-
-    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
-        return -EINVAL;
-
-    s = xzalloc(struct hvm_ioreq_server);
-    if ( !s )
-        return -ENOMEM;
-
-    domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
-    {
-        if ( !GET_IOREQ_SERVER(d, i) )
-            break;
-    }
-
-    rc = -ENOSPC;
-    if ( i >= MAX_NR_IOREQ_SERVERS )
-        goto fail;
-
-    /*
-     * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
-     */
-    set_ioreq_server(d, i, s);
-
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
-    if ( rc )
-    {
-        set_ioreq_server(d, i, NULL);
-        goto fail;
-    }
-
-    if ( id )
-        *id = i;
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    return 0;
-
- fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    xfree(s);
-    return rc;
-}
-
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    domain_pause(d);
-
-    arch_hvm_destroy_ioreq_server(s);
-
-    hvm_ioreq_server_disable(s);
-
-    /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
-     * set_ioreq_server() since the target domain is paused.
-     */
-    hvm_ioreq_server_deinit(s);
-    set_ioreq_server(d, id, NULL);
-
-    domain_unpause(d);
-
-    xfree(s);
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    if ( ioreq_gfn || bufioreq_gfn )
-    {
-        rc = hvm_ioreq_server_map_pages(s);
-        if ( rc )
-            goto out;
-    }
-
-    if ( ioreq_gfn )
-        *ioreq_gfn = gfn_x(s->ioreq.gfn);
-
-    if ( HANDLE_BUFIOREQ(s) )
-    {
-        if ( bufioreq_gfn )
-            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
-
-        if ( bufioreq_port )
-            *bufioreq_port = s->bufioreq_evtchn;
-    }
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    ASSERT(is_hvm_domain(d));
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    rc = hvm_ioreq_server_alloc_pages(s);
-    if ( rc )
-        goto out;
-
-    switch ( idx )
-    {
-    case XENMEM_resource_ioreq_server_frame_bufioreq:
-        rc = -ENOENT;
-        if ( !HANDLE_BUFIOREQ(s) )
-            goto out;
-
-        *mfn = page_to_mfn(s->bufioreq.page);
-        rc = 0;
-        break;
-
-    case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = page_to_mfn(s->ioreq.page);
-        rc = 0;
-        break;
-
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end)
-{
-    struct hvm_ioreq_server *s;
-    struct rangeset *r;
-    int rc;
-
-    if ( start > end )
-        return -EINVAL;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    switch ( type )
-    {
-    case XEN_DMOP_IO_RANGE_PORT:
-    case XEN_DMOP_IO_RANGE_MEMORY:
-    case XEN_DMOP_IO_RANGE_PCI:
-        r = s->range[type];
-        break;
-
-    default:
-        r = NULL;
-        break;
-    }
-
-    rc = -EINVAL;
-    if ( !r )
-        goto out;
-
-    rc = -EEXIST;
-    if ( rangeset_overlaps_range(r, start, end) )
-        goto out;
-
-    rc = rangeset_add_range(r, start, end);
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end)
-{
-    struct hvm_ioreq_server *s;
-    struct rangeset *r;
-    int rc;
-
-    if ( start > end )
-        return -EINVAL;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    switch ( type )
-    {
-    case XEN_DMOP_IO_RANGE_PORT:
-    case XEN_DMOP_IO_RANGE_MEMORY:
-    case XEN_DMOP_IO_RANGE_PCI:
-        r = s->range[type];
-        break;
-
-    default:
-        r = NULL;
-        break;
-    }
-
-    rc = -EINVAL;
-    if ( !r )
-        goto out;
-
-    rc = -ENOENT;
-    if ( !rangeset_contains_range(r, start, end) )
-        goto out;
-
-    rc = rangeset_remove_range(r, start, end);
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    domain_pause(d);
-
-    if ( enabled )
-        hvm_ioreq_server_enable(s);
-    else
-        hvm_ioreq_server_disable(s);
-
-    domain_unpause(d);
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    return rc;
-}
-
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail;
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return 0;
-
- fail:
-    while ( ++id != MAX_NR_IOREQ_SERVERS )
-    {
-        s = GET_IOREQ_SERVER(d, id);
-
-        if ( !s )
-            continue;
-
-        hvm_ioreq_server_remove_vcpu(s, v);
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-        hvm_ioreq_server_remove_vcpu(s, v);
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-}
-
-void hvm_destroy_all_ioreq_servers(struct domain *d)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    if ( !arch_hvm_ioreq_destroy(d) )
-        return;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    /* No need to domain_pause() as the domain is being torn down */
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        hvm_ioreq_server_disable(s);
-
-        /*
-         * It is safe to call hvm_ioreq_server_deinit() prior to
-         * set_ioreq_server() since the target domain is being destroyed.
-         */
-        hvm_ioreq_server_deinit(s);
-        set_ioreq_server(d, id, NULL);
-
-        xfree(s);
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-}
-
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
-{
-    struct hvm_ioreq_server *s;
-    uint8_t type;
-    uint64_t addr;
-    unsigned int id;
-
-    if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) )
-        return NULL;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct rangeset *r;
-
-        if ( !s->enabled )
-            continue;
-
-        r = s->range[type];
-
-        switch ( type )
-        {
-            unsigned long start, end;
-
-        case XEN_DMOP_IO_RANGE_PORT:
-            start = addr;
-            end = start + p->size - 1;
-            if ( rangeset_contains_range(r, start, end) )
-                return s;
-
-            break;
-
-        case XEN_DMOP_IO_RANGE_MEMORY:
-            start = hvm_mmio_first_byte(p);
-            end = hvm_mmio_last_byte(p);
-
-            if ( rangeset_contains_range(r, start, end) )
-                return s;
-
-            break;
-
-        case XEN_DMOP_IO_RANGE_PCI:
-            if ( rangeset_contains_singleton(r, addr >> 32) )
-            {
-                p->type = IOREQ_TYPE_PCI_CONFIG;
-                p->addr = addr;
-                return s;
-            }
-
-            break;
-        }
-    }
-
-    return NULL;
-}
-
-static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
-{
-    struct domain *d = current->domain;
-    struct hvm_ioreq_page *iorp;
-    buffered_iopage_t *pg;
-    buf_ioreq_t bp = { .data = p->data,
-                       .addr = p->addr,
-                       .type = p->type,
-                       .dir = p->dir };
-    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
-    int qw = 0;
-
-    /* Ensure buffered_iopage fits in a page */
-    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
-
-    iorp = &s->bufioreq;
-    pg = iorp->va;
-
-    if ( !pg )
-        return IOREQ_STATUS_UNHANDLED;
-
-    /*
-     * Return 0 for the cases we can't deal with:
-     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
-     *  - we cannot buffer accesses to guest memory buffers, as the guest
-     *    may expect the memory buffer to be synchronously accessed
-     *  - the count field is usually used with data_is_ptr and since we don't
-     *    support data_is_ptr we do not waste space for the count field either
-     */
-    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
-        return 0;
-
-    switch ( p->size )
-    {
-    case 1:
-        bp.size = 0;
-        break;
-    case 2:
-        bp.size = 1;
-        break;
-    case 4:
-        bp.size = 2;
-        break;
-    case 8:
-        bp.size = 3;
-        qw = 1;
-        break;
-    default:
-        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return IOREQ_STATUS_UNHANDLED;
-    }
-
-    spin_lock(&s->bufioreq_lock);
-
-    if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=
-         (IOREQ_BUFFER_SLOT_NUM - qw) )
-    {
-        /* The queue is full: send the iopacket through the normal path. */
-        spin_unlock(&s->bufioreq_lock);
-        return IOREQ_STATUS_UNHANDLED;
-    }
-
-    pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
-
-    if ( qw )
-    {
-        bp.data = p->data >> 32;
-        pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
-    }
-
-    /* Make the ioreq_t visible /before/ write_pointer. */
-    smp_wmb();
-    pg->ptrs.write_pointer += qw ? 2 : 1;
-
-    /* Canonicalize read/write pointers to prevent their overflow. */
*/ - while ( (s->bufioreq_handling =3D=3D HVM_IOREQSRV_BUFIOREQ_ATOMIC) && - qw++ < IOREQ_BUFFER_SLOT_NUM && - pg->ptrs.read_pointer >=3D IOREQ_BUFFER_SLOT_NUM ) - { - union bufioreq_pointers old =3D pg->ptrs, new; - unsigned int n =3D old.read_pointer / IOREQ_BUFFER_SLOT_NUM; - - new.read_pointer =3D old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; - new.write_pointer =3D old.write_pointer - n * IOREQ_BUFFER_SLOT_NU= M; - cmpxchg(&pg->ptrs.full, old.full, new.full); - } - - notify_via_xen_event_channel(d, s->bufioreq_evtchn); - spin_unlock(&s->bufioreq_lock); - - return IOREQ_STATUS_HANDLED; -} - -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered) -{ - struct vcpu *curr =3D current; - struct domain *d =3D curr->domain; - struct hvm_ioreq_vcpu *sv; - - ASSERT(s); - - if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); - - if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) - return IOREQ_STATUS_RETRY; - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu =3D=3D curr ) - { - evtchn_port_t port =3D sv->ioreq_evtchn; - ioreq_t *p =3D get_ioreq(s, curr); - - if ( unlikely(p->state !=3D STATE_IOREQ_NONE) ) - { - gprintk(XENLOG_ERR, "device model set bad IO state %d\n", - p->state); - break; - } - - if ( unlikely(p->vp_eport !=3D port) ) - { - gprintk(XENLOG_ERR, "device model set bad event channel %d= \n", - p->vp_eport); - break; - } - - proto_p->state =3D STATE_IOREQ_NONE; - proto_p->vp_eport =3D port; - *p =3D *proto_p; - - prepare_wait_on_xen_event_channel(port); - - /* - * Following happens /after/ blocking and setting up ioreq - * contents. prepare_wait_on_xen_event_channel() is an implicit - * barrier. - */ - p->state =3D STATE_IOREQ_READY; - notify_via_xen_event_channel(d, port); - - sv->pending =3D true; - return IOREQ_STATUS_RETRY; - } - } - - return IOREQ_STATUS_UNHANDLED; -} - -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) -{ - struct domain *d =3D current->domain; - struct hvm_ioreq_server *s; - unsigned int id, failed =3D 0; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - if ( !s->enabled ) - continue; - - if ( hvm_send_ioreq(s, p, buffered) =3D=3D IOREQ_STATUS_UNHANDLED ) - failed++; - } - - return failed; -} - -void hvm_ioreq_init(struct domain *d) -{ - spin_lock_init(&d->arch.hvm.ioreq_server.lock); - - arch_hvm_ioreq_init(d); -} - -/* - * Local variables: - * mode: C - * c-file-style: "BSD" - * c-basic-offset: 4 - * tab-width: 4 - * indent-tabs-mode: nil - * End: - */ diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 8c8f054..b5865ae 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -100,6 +100,7 @@ */ =20 #include +#include #include #include #include @@ -141,7 +142,6 @@ #include #include #include -#include =20 #include #include diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/commo= n.c index 6182313..3e6c14d 100644 --- a/xen/arch/x86/mm/shadow/common.c +++ b/xen/arch/x86/mm/shadow/common.c @@ -20,6 +20,7 @@ * along with this program; If not, see . */ =20 +#include #include #include #include @@ -34,7 +35,6 @@ #include #include #include -#include #include #include "private.h" =20 diff --git a/xen/common/Kconfig b/xen/common/Kconfig index 3e2cf25..c971ded 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -139,6 +139,9 @@ config HYPFS_CONFIG Disable this option in case you want to spare some memory or you want to hide the .config contents from dom0. 
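The next hunk introduces CONFIG_IOREQ_SERVER deliberately without a prompt: the symbol can only be turned on by an architecture that selects it (and so provides the arch_hvm_ioreq_*() hooks), never toggled by users. A sketch of the compile-out idiom this enables on the consumer side; such stubs are illustrative only and not part of this patch:

    #ifdef CONFIG_IOREQ_SERVER
    bool handle_hvm_io_completion(struct vcpu *v);
    #else
    static inline bool handle_hvm_io_completion(struct vcpu *v)
    {
        return true; /* no IOREQ servers compiled in, nothing can be pending */
    }
    #endif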
=20 +config IOREQ_SERVER + bool + config KEXEC bool "kexec support" default y diff --git a/xen/common/Makefile b/xen/common/Makefile index b3b60a1..cdb99fb 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -15,6 +15,7 @@ obj-$(CONFIG_GRANT_TABLE) +=3D grant_table.o obj-y +=3D guestcopy.o obj-bin-y +=3D gunzip.init.o obj-$(CONFIG_HYPFS) +=3D hypfs.o +obj-$(CONFIG_IOREQ_SERVER) +=3D ioreq.o obj-y +=3D irq.o obj-y +=3D kernel.o obj-y +=3D keyhandler.o diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c new file mode 100644 index 0000000..d3433d7 --- /dev/null +++ b/xen/common/ioreq.c @@ -0,0 +1,1422 @@ +/* + * ioreq.c: hardware virtual machine I/O emulation + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +static void set_ioreq_server(struct domain *d, unsigned int id, + struct hvm_ioreq_server *s) +{ + ASSERT(id < MAX_NR_IOREQ_SERVERS); + ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); + + d->arch.hvm.ioreq_server.server[id] =3D s; +} + +#define GET_IOREQ_SERVER(d, id) \ + (d)->arch.hvm.ioreq_server.server[id] + +struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, + unsigned int id) +{ + if ( id >=3D MAX_NR_IOREQ_SERVERS ) + return NULL; + + return GET_IOREQ_SERVER(d, id); +} + +/* + * Iterate over all possible ioreq servers. + * + * NOTE: The iteration is backwards such that more recently created + * ioreq servers are favoured in hvm_select_ioreq_server(). + * This is a semantic that previously existed when ioreq servers + * were held in a linked list. + */ +#define FOR_EACH_IOREQ_SERVER(d, id, s) \ + for ( (id) =3D MAX_NR_IOREQ_SERVERS; (id) !=3D 0; ) \ + if ( !(s =3D GET_IOREQ_SERVER(d, --(id))) ) \ + continue; \ + else + +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) +{ + shared_iopage_t *p =3D s->ioreq.va; + + ASSERT((v =3D=3D current) || !vcpu_runnable(v)); + ASSERT(p !=3D NULL); + + return &p->vcpu_ioreq[v->vcpu_id]; +} + +static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, + struct hvm_ioreq_server **s= rvp) +{ + struct domain *d =3D v->domain; + struct hvm_ioreq_server *s; + unsigned int id; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct hvm_ioreq_vcpu *sv; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu =3D=3D v && sv->pending ) + { + if ( srvp ) + *srvp =3D s; + return sv; + } + } + } + + return NULL; +} + +bool hvm_io_pending(struct vcpu *v) +{ + return get_pending_vcpu(v, NULL); +} + +static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) +{ + unsigned int prev_state =3D STATE_IOREQ_NONE; + unsigned int state =3D p->state; + uint64_t data =3D ~0; + + smp_rmb(); + + /* + * The only reason we should see this condition be false is when an + * emulator dying races with I/O being requested. 
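For context, the shared-page protocol that the loop below polices looks as follows from the device model side. A minimal sketch, assuming the emulator has already mapped the shared page and opened an event channel handle; shared_iopage, xce and handle_one_ioreq() are assumed context or illustrative names, not a real DM API:

    ioreq_t *req = &shared_iopage->vcpu_ioreq[vcpu_id];

    if ( req->state == STATE_IOREQ_READY )   /* woken via req->vp_eport */
    {
        req->state = STATE_IOREQ_INPROCESS;
        xen_rmb();                 /* read contents only after seeing READY */
        handle_one_ioreq(req);     /* emulate the access, fill req->data */
        xen_wmb();                 /* response visible before the state flip */
        req->state = STATE_IORESP_READY;
        xenevtchn_notify(xce, req->vp_eport);
    }

Within a single transaction the numeric state only grows, STATE_IOREQ_NONE (0) -> STATE_IOREQ_READY (1) -> STATE_IOREQ_INPROCESS (2) -> STATE_IORESP_READY (3), before being reset to NONE; that monotonicity is exactly what the 'state < prev_state' check below relies on to spot a misbehaving emulator.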
+ */ + while ( likely(state !=3D STATE_IOREQ_NONE) ) + { + if ( unlikely(state < prev_state) ) + { + gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %= u\n", + prev_state, state); + sv->pending =3D false; + domain_crash(sv->vcpu->domain); + return false; /* bail */ + } + + switch ( prev_state =3D state ) + { + case STATE_IORESP_READY: /* IORESP_READY -> NONE */ + p->state =3D STATE_IOREQ_NONE; + data =3D p->data; + break; + + case STATE_IOREQ_READY: /* IOREQ_{READY,INPROCESS} -> IORESP_READ= Y */ + case STATE_IOREQ_INPROCESS: + wait_on_xen_event_channel(sv->ioreq_evtchn, + ({ state =3D p->state; + smp_rmb(); + state !=3D prev_state; })); + continue; + + default: + gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state); + sv->pending =3D false; + domain_crash(sv->vcpu->domain); + return false; /* bail */ + } + + break; + } + + p =3D &sv->vcpu->arch.hvm.hvm_io.io_req; + if ( hvm_ioreq_needs_completion(p) ) + p->data =3D data; + + sv->pending =3D false; + + return true; +} + +bool handle_hvm_io_completion(struct vcpu *v) +{ + struct domain *d =3D v->domain; + struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; + struct hvm_ioreq_server *s; + struct hvm_ioreq_vcpu *sv; + enum hvm_io_completion io_completion; + + if ( has_vpci(d) && vpci_process_pending(v) ) + { + raise_softirq(SCHEDULE_SOFTIRQ); + return false; + } + + sv =3D get_pending_vcpu(v, &s); + if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) + return false; + + vio->io_req.state =3D hvm_ioreq_needs_completion(&vio->io_req) ? + STATE_IORESP_READY : STATE_IOREQ_NONE; + + msix_write_completion(v); + vcpu_end_shutdown_deferral(v); + + io_completion =3D vio->io_completion; + vio->io_completion =3D HVMIO_no_completion; + + switch ( io_completion ) + { + case HVMIO_no_completion: + break; + + case HVMIO_mmio_completion: + return handle_mmio(); + + case HVMIO_pio_completion: + return handle_pio(vio->io_req.addr, vio->io_req.size, + vio->io_req.dir); + + default: + return arch_hvm_io_completion(io_completion); + } + + return true; +} + +static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s) +{ + struct domain *d =3D s->target; + unsigned int i; + + BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN !=3D HVM_PARAM_IOREQ_PFN + 1); + + for ( i =3D HVM_PARAM_IOREQ_PFN; i <=3D HVM_PARAM_BUFIOREQ_PFN; i++ ) + { + if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) ) + return _gfn(d->arch.hvm.params[i]); + } + + return INVALID_GFN; +} + +static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s) +{ + struct domain *d =3D s->target; + unsigned int i; + + for ( i =3D 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ ) + { + if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) ) + return _gfn(d->arch.hvm.ioreq_gfn.base + i); + } + + /* + * If we are out of 'normal' GFNs then we may still have a 'legacy' + * GFN available. 
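The 'legacy' GFNs referred to here are the magic pages whose addresses a toolstack seeds into the HVM params at domain-build time. A sketch of that side, assuming libxenctrl's xc_hvm_param_set(); the PFN values are purely illustrative:

    xc_hvm_param_set(xch, domid, HVM_PARAM_IOREQ_PFN,    0xfeff0);
    xc_hvm_param_set(xch, domid, HVM_PARAM_BUFIOREQ_PFN, 0xfeff1);

Note the BUILD_BUG_ON() above only requires the two param *indices* to be adjacent, so the loop can walk them; it places no constraint on the PFN values themselves.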
+ */ + return hvm_alloc_legacy_ioreq_gfn(s); +} + +static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s, + gfn_t gfn) +{ + struct domain *d =3D s->target; + unsigned int i; + + for ( i =3D HVM_PARAM_IOREQ_PFN; i <=3D HVM_PARAM_BUFIOREQ_PFN; i++ ) + { + if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) ) + break; + } + if ( i > HVM_PARAM_BUFIOREQ_PFN ) + return false; + + set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask); + return true; +} + +static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn) +{ + struct domain *d =3D s->target; + unsigned int i =3D gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base; + + ASSERT(!gfn_eq(gfn, INVALID_GFN)); + + if ( !hvm_free_legacy_ioreq_gfn(s, gfn) ) + { + ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8); + set_bit(i, &d->arch.hvm.ioreq_gfn.mask); + } +} + +static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return; + + destroy_ring_for_helper(&iorp->va, iorp->page); + iorp->page =3D NULL; + + hvm_free_ioreq_gfn(s, iorp->gfn); + iorp->gfn =3D INVALID_GFN; +} + +static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +{ + struct domain *d =3D s->target; + struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + int rc; + + if ( iorp->page ) + { + /* + * If a page has already been allocated (which will happen on + * demand if hvm_get_ioreq_server_frame() is called), then + * mapping a guest frame is not permitted. + */ + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return -EPERM; + + return 0; + } + + if ( d->is_dying ) + return -EINVAL; + + iorp->gfn =3D hvm_alloc_ioreq_gfn(s); + + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return -ENOMEM; + + rc =3D prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page, + &iorp->va); + + if ( rc ) + hvm_unmap_ioreq_gfn(s, buf); + + return rc; +} + +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct page_info *page; + + if ( iorp->page ) + { + /* + * If a guest frame has already been mapped (which may happen + * on demand if hvm_get_ioreq_server_info() is called), then + * allocating a page is not permitted. + */ + if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) + return -EPERM; + + return 0; + } + + page =3D alloc_domheap_page(s->target, MEMF_no_refcount); + + if ( !page ) + return -ENOMEM; + + if ( !get_page_and_type(page, s->target, PGT_writable_page) ) + { + /* + * The domain can't possibly know about this page yet, so failure + * here is a clear indication of something fishy going on. + */ + domain_crash(s->emulator); + return -ENODATA; + } + + iorp->va =3D __map_domain_page_global(page); + if ( !iorp->va ) + goto fail; + + iorp->page =3D page; + clear_page(iorp->va); + return 0; + + fail: + put_page_alloc_ref(page); + put_page_and_type(page); + + return -ENOMEM; +} + +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp =3D buf ? 
&s->bufioreq : &s->ioreq; + struct page_info *page =3D iorp->page; + + if ( !page ) + return; + + iorp->page =3D NULL; + + unmap_domain_page_global(iorp->va); + iorp->va =3D NULL; + + put_page_alloc_ref(page); + put_page_and_type(page); +} + +bool is_ioreq_server_page(struct domain *d, const struct page_info *page) +{ + const struct hvm_ioreq_server *s; + unsigned int id; + bool found =3D false; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( (s->ioreq.page =3D=3D page) || (s->bufioreq.page =3D=3D page)= ) + { + found =3D true; + break; + } + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return found; +} + +static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) + +{ + struct domain *d =3D s->target; + struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return; + + if ( guest_physmap_remove_page(d, iorp->gfn, + page_to_mfn(iorp->page), 0) ) + domain_crash(d); + clear_page(iorp->va); +} + +static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +{ + struct domain *d =3D s->target; + struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + int rc; + + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return 0; + + clear_page(iorp->va); + + rc =3D guest_physmap_add_page(d, iorp->gfn, + page_to_mfn(iorp->page), 0); + if ( rc =3D=3D 0 ) + paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn))); + + return rc; +} + +static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, + struct hvm_ioreq_vcpu *sv) +{ + ASSERT(spin_is_locked(&s->lock)); + + if ( s->ioreq.va !=3D NULL ) + { + ioreq_t *p =3D get_ioreq(s, sv->vcpu); + + p->vp_eport =3D sv->ioreq_evtchn; + } +} + +#define HANDLE_BUFIOREQ(s) \ + ((s)->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) + +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + int rc; + + sv =3D xzalloc(struct hvm_ioreq_vcpu); + + rc =3D -ENOMEM; + if ( !sv ) + goto fail1; + + spin_lock(&s->lock); + + rc =3D alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail2; + + sv->ioreq_evtchn =3D rc; + + if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + { + rc =3D alloc_unbound_xen_event_channel(v->domain, 0, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail3; + + s->bufioreq_evtchn =3D rc; + } + + sv->vcpu =3D v; + + list_add(&sv->list_entry, &s->ioreq_vcpu_list); + + if ( s->enabled ) + hvm_update_ioreq_evtchn(s, sv); + + spin_unlock(&s->lock); + return 0; + + fail3: + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + fail2: + spin_unlock(&s->lock); + xfree(sv); + + fail1: + return rc; +} + +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu !=3D v ) + continue; + + list_del(&sv->list_entry); + + if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + break; + } + + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv, *next; + + spin_lock(&s->lock); + + list_for_each_entry_safe ( sv, + next, + &s->ioreq_vcpu_list, + list_entry ) + { + struct vcpu *v =3D sv->vcpu; + + list_del(&sv->list_entry); + + if ( 
v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + } + + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s) +{ + int rc; + + rc =3D hvm_map_ioreq_gfn(s, false); + + if ( !rc && HANDLE_BUFIOREQ(s) ) + rc =3D hvm_map_ioreq_gfn(s, true); + + if ( rc ) + hvm_unmap_ioreq_gfn(s, false); + + return rc; +} + +static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) +{ + hvm_unmap_ioreq_gfn(s, true); + hvm_unmap_ioreq_gfn(s, false); +} + +static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +{ + int rc; + + rc =3D hvm_alloc_ioreq_mfn(s, false); + + if ( !rc && (s->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) ) + rc =3D hvm_alloc_ioreq_mfn(s, true); + + if ( rc ) + hvm_free_ioreq_mfn(s, false); + + return rc; +} + +static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +{ + hvm_free_ioreq_mfn(s, true); + hvm_free_ioreq_mfn(s, false); +} + +static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +{ + unsigned int i; + + for ( i =3D 0; i < NR_IO_RANGE_TYPES; i++ ) + rangeset_destroy(s->range[i]); +} + +static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, + ioservid_t id) +{ + unsigned int i; + int rc; + + for ( i =3D 0; i < NR_IO_RANGE_TYPES; i++ ) + { + char *name; + + rc =3D asprintf(&name, "ioreq_server %d %s", id, + (i =3D=3D XEN_DMOP_IO_RANGE_PORT) ? "port" : + (i =3D=3D XEN_DMOP_IO_RANGE_MEMORY) ? "memory" : + (i =3D=3D XEN_DMOP_IO_RANGE_PCI) ? "pci" : + ""); + if ( rc ) + goto fail; + + s->range[i] =3D rangeset_new(s->target, name, + RANGESETF_prettyprint_hex); + + xfree(name); + + rc =3D -ENOMEM; + if ( !s->range[i] ) + goto fail; + + rangeset_limit(s->range[i], MAX_NR_IO_RANGES); + } + + return 0; + + fail: + hvm_ioreq_server_free_rangesets(s); + + return rc; +} + +static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + if ( s->enabled ) + goto done; + + hvm_remove_ioreq_gfn(s, false); + hvm_remove_ioreq_gfn(s, true); + + s->enabled =3D true; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + hvm_update_ioreq_evtchn(s, sv); + + done: + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) +{ + spin_lock(&s->lock); + + if ( !s->enabled ) + goto done; + + hvm_add_ioreq_gfn(s, true); + hvm_add_ioreq_gfn(s, false); + + s->enabled =3D false; + + done: + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, + struct domain *d, int bufioreq_handling, + ioservid_t id) +{ + struct domain *currd =3D current->domain; + struct vcpu *v; + int rc; + + s->target =3D d; + + get_knownalive_domain(currd); + s->emulator =3D currd; + + spin_lock_init(&s->lock); + INIT_LIST_HEAD(&s->ioreq_vcpu_list); + spin_lock_init(&s->bufioreq_lock); + + s->ioreq.gfn =3D INVALID_GFN; + s->bufioreq.gfn =3D INVALID_GFN; + + rc =3D hvm_ioreq_server_alloc_rangesets(s, id); + if ( rc ) + return rc; + + s->bufioreq_handling =3D bufioreq_handling; + + for_each_vcpu ( d, v ) + { + rc =3D hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail_add; + } + + return 0; + + fail_add: + hvm_ioreq_server_remove_all_vcpus(s); + hvm_ioreq_server_unmap_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); + return rc; +} + +static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) +{ + ASSERT(!s->enabled); + 
hvm_ioreq_server_remove_all_vcpus(s); + + /* + * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and + * hvm_ioreq_server_free_pages() in that order. + * This is because the former will do nothing if the pages + * are not mapped, leaving the page to be freed by the latter. + * However if the pages are mapped then the former will set + * the page_info pointer to NULL, meaning the latter will do + * nothing. + */ + hvm_ioreq_server_unmap_pages(s); + hvm_ioreq_server_free_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); +} + +int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id) +{ + struct hvm_ioreq_server *s; + unsigned int i; + int rc; + + if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC ) + return -EINVAL; + + s =3D xzalloc(struct hvm_ioreq_server); + if ( !s ) + return -ENOMEM; + + domain_pause(d); + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + for ( i =3D 0; i < MAX_NR_IOREQ_SERVERS; i++ ) + { + if ( !GET_IOREQ_SERVER(d, i) ) + break; + } + + rc =3D -ENOSPC; + if ( i >=3D MAX_NR_IOREQ_SERVERS ) + goto fail; + + /* + * It is safe to call set_ioreq_server() prior to + * hvm_ioreq_server_init() since the target domain is paused. + */ + set_ioreq_server(d, i, s); + + rc =3D hvm_ioreq_server_init(s, d, bufioreq_handling, i); + if ( rc ) + { + set_ioreq_server(d, i, NULL); + goto fail; + } + + if ( id ) + *id =3D i; + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + return 0; + + fail: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + xfree(s); + return rc; +} + +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + domain_pause(d); + + arch_hvm_destroy_ioreq_server(s); + + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is paused. 
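Both entry points above are reached via DMOPs. From a device model the lifecycle is roughly the following, assuming libxendevicemodel; note that the s->emulator check means only the domain that created a server may later destroy it:

    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
    ioservid_t id;
    int rc;

    rc = xendevicemodel_create_ioreq_server(dmod, domid,
                                            HVM_IOREQSRV_BUFIOREQ_ATOMIC,
                                            &id);
    /* ... map the shared pages, bind event channels, serve I/O ... */
    rc = xendevicemodel_destroy_ioreq_server(dmod, domid, id);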
+ */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + domain_unpause(d); + + xfree(s); + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + if ( ioreq_gfn || bufioreq_gfn ) + { + rc =3D hvm_ioreq_server_map_pages(s); + if ( rc ) + goto out; + } + + if ( ioreq_gfn ) + *ioreq_gfn =3D gfn_x(s->ioreq.gfn); + + if ( HANDLE_BUFIOREQ(s) ) + { + if ( bufioreq_gfn ) + *bufioreq_gfn =3D gfn_x(s->bufioreq.gfn); + + if ( bufioreq_port ) + *bufioreq_port =3D s->bufioreq_evtchn; + } + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn) +{ + struct hvm_ioreq_server *s; + int rc; + + ASSERT(is_hvm_domain(d)); + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + rc =3D hvm_ioreq_server_alloc_pages(s); + if ( rc ) + goto out; + + switch ( idx ) + { + case XENMEM_resource_ioreq_server_frame_bufioreq: + rc =3D -ENOENT; + if ( !HANDLE_BUFIOREQ(s) ) + goto out; + + *mfn =3D page_to_mfn(s->bufioreq.page); + rc =3D 0; + break; + + case XENMEM_resource_ioreq_server_frame_ioreq(0): + *mfn =3D page_to_mfn(s->ioreq.page); + rc =3D 0; + break; + + default: + rc =3D -EINVAL; + break; + } + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r =3D s->range[type]; + break; + + default: + r =3D NULL; + break; + } + + rc =3D -EINVAL; + if ( !r ) + goto out; + + rc =3D -EEXIST; + if ( rangeset_overlaps_range(r, start, end) ) + goto out; + + rc =3D rangeset_add_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r =3D s->range[type]; + break; + + default: + r =3D NULL; + break; + } + + rc =3D -EINVAL; + if ( !r ) + goto out; + + rc =3D -ENOENT; + if ( !rangeset_contains_range(r, 
start, end) ) + goto out; + + rc =3D rangeset_remove_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + domain_pause(d); + + if ( enabled ) + hvm_ioreq_server_enable(s); + else + hvm_ioreq_server_disable(s); + + domain_unpause(d); + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + return rc; +} + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + rc =3D hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail; + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return 0; + + fail: + while ( ++id !=3D MAX_NR_IOREQ_SERVERS ) + { + s =3D GET_IOREQ_SERVER(d, id); + + if ( !s ) + continue; + + hvm_ioreq_server_remove_vcpu(s, v); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + hvm_ioreq_server_remove_vcpu(s, v); + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +void hvm_destroy_all_ioreq_servers(struct domain *d) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + if ( !arch_hvm_ioreq_destroy(d) ) + return; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + /* No need to domain_pause() as the domain is being torn down */ + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is being destroyed. 
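From the device model side the range and state operations above pair up naturally: claim the ranges to be emulated, then flip the server live. A sketch assuming libxendevicemodel, with an illustrative MMIO window:

    rc = xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, id,
                                                     1 /* MMIO, not port */,
                                                     0xfe001000, 0xfe001fff);
    if ( !rc )
        rc = xendevicemodel_set_ioreq_server_state(dmod, domid, id, 1);

The -EEXIST path above makes overlap within one server's rangeset a hard error; overlap between different servers is legal and is only disambiguated at selection time, where more recently created servers are favoured (see the FOR_EACH_IOREQ_SERVER() comment).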
+ */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + xfree(s); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p) +{ + struct hvm_ioreq_server *s; + uint8_t type; + uint64_t addr; + unsigned int id; + + if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) ) + return NULL; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct rangeset *r; + + if ( !s->enabled ) + continue; + + r =3D s->range[type]; + + switch ( type ) + { + unsigned long start, end; + + case XEN_DMOP_IO_RANGE_PORT: + start =3D addr; + end =3D start + p->size - 1; + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_MEMORY: + start =3D hvm_mmio_first_byte(p); + end =3D hvm_mmio_last_byte(p); + + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_PCI: + if ( rangeset_contains_singleton(r, addr >> 32) ) + { + p->type =3D IOREQ_TYPE_PCI_CONFIG; + p->addr =3D addr; + return s; + } + + break; + } + } + + return NULL; +} + +static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) +{ + struct domain *d =3D current->domain; + struct hvm_ioreq_page *iorp; + buffered_iopage_t *pg; + buf_ioreq_t bp =3D { .data =3D p->data, + .addr =3D p->addr, + .type =3D p->type, + .dir =3D p->dir }; + /* Timeoffset sends 64b data, but no address. Use two consecutive slot= s. */ + int qw =3D 0; + + /* Ensure buffered_iopage fits in a page */ + BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE); + + iorp =3D &s->bufioreq; + pg =3D iorp->va; + + if ( !pg ) + return IOREQ_STATUS_UNHANDLED; + + /* + * Return 0 for the cases we can't deal with: + * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB + * - we cannot buffer accesses to guest memory buffers, as the guest + * may expect the memory buffer to be synchronously accessed + * - the count field is usually used with data_is_ptr and since we do= n't + * support data_is_ptr we do not waste space for the count field ei= ther + */ + if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count !=3D 1) ) + return 0; + + switch ( p->size ) + { + case 1: + bp.size =3D 0; + break; + case 2: + bp.size =3D 1; + break; + case 4: + bp.size =3D 2; + break; + case 8: + bp.size =3D 3; + qw =3D 1; + break; + default: + gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); + return IOREQ_STATUS_UNHANDLED; + } + + spin_lock(&s->bufioreq_lock); + + if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=3D + (IOREQ_BUFFER_SLOT_NUM - qw) ) + { + /* The queue is full: send the iopacket through the normal path. */ + spin_unlock(&s->bufioreq_lock); + return IOREQ_STATUS_UNHANDLED; + } + + pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] =3D bp; + + if ( qw ) + { + bp.data =3D p->data >> 32; + pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = =3D bp; + } + + /* Make the ioreq_t visible /before/ write_pointer. */ + smp_wmb(); + pg->ptrs.write_pointer +=3D qw ? 2 : 1; + + /* Canonicalize read/write pointers to prevent their overflow. 
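For reference, the consuming side drains this ring as follows. A sketch modelled on QEMU's handling, with dm_handle_buffered_io() as an illustrative name and xen_rmb()/xen_mb() being xenctrl.h's barrier helpers:

    buffered_iopage_t *pg = bufioreq_va;   /* assumed: mapped bufioreq page */

    while ( pg->ptrs.read_pointer != pg->ptrs.write_pointer )
    {
        unsigned int slot = pg->ptrs.read_pointer % IOREQ_BUFFER_SLOT_NUM;
        buf_ioreq_t bp;
        uint64_t data;
        bool qw;

        xen_rmb();                         /* pairs with the smp_wmb() above */
        bp = pg->buf_ioreq[slot];
        qw = bp.size == 3;                 /* 8-byte ops occupy two slots */
        data = bp.data;
        if ( qw )
            data |= (uint64_t)pg->buf_ioreq[(slot + 1) %
                                            IOREQ_BUFFER_SLOT_NUM].data << 32;

        dm_handle_buffered_io(bp.type, bp.addr, 1u << bp.size, bp.dir, data);

        xen_mb();                  /* finish with the slot(s) before reuse */
        pg->ptrs.read_pointer += qw ? 2 : 1;
    }

An ATOMIC-mode emulator must advance read_pointer with a single atomic add rather than the plain '+=' shown here, which is what the cmpxchg()-based canonicalization below is defending against.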
*/ + while ( (s->bufioreq_handling =3D=3D HVM_IOREQSRV_BUFIOREQ_ATOMIC) && + qw++ < IOREQ_BUFFER_SLOT_NUM && + pg->ptrs.read_pointer >=3D IOREQ_BUFFER_SLOT_NUM ) + { + union bufioreq_pointers old =3D pg->ptrs, new; + unsigned int n =3D old.read_pointer / IOREQ_BUFFER_SLOT_NUM; + + new.read_pointer =3D old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; + new.write_pointer =3D old.write_pointer - n * IOREQ_BUFFER_SLOT_NU= M; + cmpxchg(&pg->ptrs.full, old.full, new.full); + } + + notify_via_xen_event_channel(d, s->bufioreq_evtchn); + spin_unlock(&s->bufioreq_lock); + + return IOREQ_STATUS_HANDLED; +} + +int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, + bool buffered) +{ + struct vcpu *curr =3D current; + struct domain *d =3D curr->domain; + struct hvm_ioreq_vcpu *sv; + + ASSERT(s); + + if ( buffered ) + return hvm_send_buffered_ioreq(s, proto_p); + + if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) + return IOREQ_STATUS_RETRY; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu =3D=3D curr ) + { + evtchn_port_t port =3D sv->ioreq_evtchn; + ioreq_t *p =3D get_ioreq(s, curr); + + if ( unlikely(p->state !=3D STATE_IOREQ_NONE) ) + { + gprintk(XENLOG_ERR, "device model set bad IO state %d\n", + p->state); + break; + } + + if ( unlikely(p->vp_eport !=3D port) ) + { + gprintk(XENLOG_ERR, "device model set bad event channel %d= \n", + p->vp_eport); + break; + } + + proto_p->state =3D STATE_IOREQ_NONE; + proto_p->vp_eport =3D port; + *p =3D *proto_p; + + prepare_wait_on_xen_event_channel(port); + + /* + * Following happens /after/ blocking and setting up ioreq + * contents. prepare_wait_on_xen_event_channel() is an implicit + * barrier. + */ + p->state =3D STATE_IOREQ_READY; + notify_via_xen_event_channel(d, port); + + sv->pending =3D true; + return IOREQ_STATUS_RETRY; + } + } + + return IOREQ_STATUS_UNHANDLED; +} + +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +{ + struct domain *d =3D current->domain; + struct hvm_ioreq_server *s; + unsigned int id, failed =3D 0; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( !s->enabled ) + continue; + + if ( hvm_send_ioreq(s, p, buffered) =3D=3D IOREQ_STATUS_UNHANDLED ) + failed++; + } + + return failed; +} + +void hvm_ioreq_init(struct domain *d) +{ + spin_lock_init(&d->arch.hvm.ioreq_server.lock); + + arch_hvm_ioreq_init(d); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/iore= q.h index 376e2ef..a3d8faa 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -19,14 +19,13 @@ #ifndef __ASM_X86_HVM_IOREQ_H__ #define __ASM_X86_HVM_IOREQ_H__ =20 +#include + #include #include =20 #include =20 -struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, - unsigned int id); - static inline bool arch_hvm_io_completion(enum hvm_io_completion io_comple= tion) { switch ( io_completion ) @@ -178,40 +177,6 @@ static inline bool arch_hvm_ioreq_destroy(struct domai= n *d) return true; } =20 -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); -bool is_ioreq_server_page(struct domain *d, const struct page_info *page); - -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long 
*bufioreq_gfn, - evtchn_port_t *bufioreq_port); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); -void hvm_destroy_all_ioreq_servers(struct domain *d); - -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered); -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); - -void hvm_ioreq_init(struct domain *d); - #define IOREQ_STATUS_HANDLED X86EMUL_OKAY #define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE #define IOREQ_STATUS_RETRY X86EMUL_RETRY diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h new file mode 100644 index 0000000..6db4392 --- /dev/null +++ b/xen/include/xen/ioreq.h @@ -0,0 +1,71 @@ +/* + * ioreq.h: Hardware virtual machine assist interface definitions. + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . 
+ */ + +#ifndef __XEN_IOREQ_H__ +#define __XEN_IOREQ_H__ + +#include + +struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, + unsigned int id); + +bool hvm_io_pending(struct vcpu *v); +bool handle_hvm_io_completion(struct vcpu *v); +bool is_ioreq_server_page(struct domain *d, const struct page_info *page); + +int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id); +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port); +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn); +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled); + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); +void hvm_destroy_all_ioreq_servers(struct domain *d); + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p); +int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, + bool buffered); +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); + +void hvm_ioreq_init(struct domain *d); + +#endif /* __XEN_IOREQ_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.7.4 From nobody Wed May 8 17:59:30 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1602780324; cv=none; d=zohomail.com; s=zohoarc; b=IgC+Vuf1PR+DCZYbU/hlkOLt+oth7UXv/XhWc5R8VD/xeKyqQZ2ZFyypzEcWAAsm7WCbAflwI6/5IbhBtQ94lwyETpXkyvl/Gy03wrQpnitKNcFqTFkcxZZrRChmg96pjP+PBeZWgiN3XGgajqKoUoneuB9wGB91qiJma5YtCWU= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1602780324; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=mmqWLOWImzeNp56Irm3zs3dKXoDrkJx4SfyF9oEDLUs=; b=FgV0u68hxyMYIxCJHE2AtQVtF3y8+2aBdYQ3u23pwrWVDyQUcmnF9aYKPFALD2PZfPAHOCTi9aqDXlsjMT5J9ofl40pMXUJgcgKBGLqOfBTI4oaIbBRiYi6T3Gufh9086+a4q4MXO1tP1tTwUIMEmSMdNPA7CELBhO027hMZqCk= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1602780324147145.86935919582208; Thu, 15 Oct 2020 09:45:24 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.7572.19925 (Exim 4.92) (envelope-from ) 
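To recap how the now-common interface is consumed: the per-arch emulation paths select a server for a request and forward it, e.g. x86's hvmemul_do_io() boils down to the following (simplified; the no-server fallback name is illustrative, on x86 it is a null handler that returns all-ones on reads and discards writes):

    struct hvm_ioreq_server *s = hvm_select_ioreq_server(currd, &p);

    if ( s )
        rc = hvm_send_ioreq(s, &p, 0);
    else
        rc = process_null_io(&p);   /* illustrative name for the fallback */

Arm's MMIO handling is expected to gain an equivalent call site later in this series.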
X-Google-Smtp-Source: ABdhPJy6EuL1uoUXZSQ5V/dK7lmolV1NcGlO5kNY6cD6uQs/KAZrdgDN68W8RZBxYvyVnopQ63KKng== X-Received: by 2002:ac2:5699:: with SMTP id 25mr1486368lfr.396.1602780295173; Thu, 15 Oct 2020 09:44:55 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Wei Liu , Julien Grall , Stefano Stabellini , Julien Grall Subject: [PATCH V2 03/23] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common Date: Thu, 15 Oct 2020 19:44:14 +0300 Message-Id: <1602780274-29141-4-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> X-ZohoMail-DKIM: pass (identity @gmail.com) Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Oleksandr Tyshchenko The IOREQ is a common feature now and this helper will be used on Arm as is. Move it to xen/ioreq.h and remove "hvm" prefix. Although PIO handling on Arm is not introduced with the current series (it will be implemented when we add support for vPCI), technically the PIOs exist on Arm (however they are accessed the same way as MMIO) and it would be better not to diverge now. Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall Reviewed-by: Paul Durrant --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - new patch, was split from: "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common" Changes V1 -> V2: - remove "hvm" prefix --- xen/arch/x86/hvm/emulate.c | 4 ++-- xen/arch/x86/hvm/io.c | 2 +- xen/common/ioreq.c | 4 ++-- xen/include/asm-x86/hvm/vcpu.h | 7 ------- xen/include/xen/ioreq.h | 7 +++++++ 5 files changed, 12 insertions(+), 12 deletions(-) diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 24cf85f..5700274 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -336,7 +336,7 @@ static int hvmemul_do_io( rc =3D hvm_send_ioreq(s, &p, 0); if ( rc !=3D X86EMUL_RETRY || currd->is_shutting_down ) vio->io_req.state =3D STATE_IOREQ_NONE; - else if ( !hvm_ioreq_needs_completion(&vio->io_req) ) + else if ( !ioreq_needs_completion(&vio->io_req) ) rc =3D X86EMUL_OKAY; } break; @@ -2649,7 +2649,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *= hvmemul_ctxt, if ( rc =3D=3D X86EMUL_OKAY && vio->mmio_retry ) rc =3D X86EMUL_RETRY; =20 - if ( !hvm_ioreq_needs_completion(&vio->io_req) ) + if ( !ioreq_needs_completion(&vio->io_req) ) completion =3D HVMIO_no_completion; else if ( completion =3D=3D HVMIO_no_completion ) completion =3D (vio->io_req.type !=3D IOREQ_TYPE_PIO || diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index 3e09d9b..b220d6b 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -135,7 +135,7 @@ bool handle_pio(uint16_t port, unsigned int size, int d= ir) =20 rc =3D hvmemul_do_pio_buffer(port, size, dir, &data); =20 - if ( hvm_ioreq_needs_completion(&vio->io_req) ) + if ( ioreq_needs_completion(&vio->io_req) ) vio->io_completion =3D HVMIO_pio_completion; =20 switch ( rc ) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index d3433d7..c89df7a 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -159,7 +159,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, = ioreq_t *p) } =20 p =3D &sv->vcpu->arch.hvm.hvm_io.io_req; - if ( 
hvm_ioreq_needs_completion(p) ) + if ( ioreq_needs_completion(p) ) p->data =3D data; =20 sv->pending =3D false; @@ -185,7 +185,7 @@ bool handle_hvm_io_completion(struct vcpu *v) if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) return false; =20 - vio->io_req.state =3D hvm_ioreq_needs_completion(&vio->io_req) ? + vio->io_req.state =3D ioreq_needs_completion(&vio->io_req) ? STATE_IORESP_READY : STATE_IOREQ_NONE; =20 msix_write_completion(v); diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h index 5ccd075..6c1feda 100644 --- a/xen/include/asm-x86/hvm/vcpu.h +++ b/xen/include/asm-x86/hvm/vcpu.h @@ -91,13 +91,6 @@ struct hvm_vcpu_io { const struct g2m_ioport *g2m_ioport; }; =20 -static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq) -{ - return ioreq->state =3D=3D STATE_IOREQ_READY && - !ioreq->data_is_ptr && - (ioreq->type !=3D IOREQ_TYPE_PIO || ioreq->dir !=3D IOREQ_WRITE= ); -} - struct nestedvcpu { bool_t nv_guestmode; /* vcpu in guestmode? */ void *nv_vvmcx; /* l1 guest virtual VMCB/VMCS */ diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 6db4392..8e1603c 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -24,6 +24,13 @@ struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, unsigned int id); =20 +static inline bool ioreq_needs_completion(const ioreq_t *ioreq) +{ + return ioreq->state =3D=3D STATE_IOREQ_READY && + !ioreq->data_is_ptr && + (ioreq->type !=3D IOREQ_TYPE_PIO || ioreq->dir !=3D IOREQ_WRITE= ); +} + bool hvm_io_pending(struct vcpu *v); bool handle_hvm_io_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page); --=20 2.7.4 From nobody Wed May 8 17:59:30 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1602780324; cv=none; d=zohomail.com; s=zohoarc; b=Ra4yvX6GU7fXfF/G+mnh0ZhoF4i+Rss0lDmjJttxu2lqIiPHCw6QSVtsfmfOqwepTHya485p2ecTkdl2vDIXM5XXJg1trAp0JZuyqrV8JkY+fk8sbXxx9eHmGcB5HCp8Q4lsECCLvvVEhliYpVNM4BgPsCOP0PvC9g6cN8VxIPw= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1602780324; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=ttyepk04DroNuuyrE7kUju7fxJWgSwghlPrbVPtWZwk=; b=LHNIThhyI474GrT4bikg1cmDwgV3N9C9xu9J4w0yjCZzebibsb9Put0yGHfBSm5Qnwxi1coTJQotNFnf7sQsAHyzCj1FonGfpPkNGn1Vk5U/bMSy0iMlv4ie+DwX0rPbfg8917Unnmpu3er4Sc6MV9orB3FXKP2Xn2XxTBzL+Cc= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1602780324172408.96260082702804; Thu, 15 Oct 2020 09:45:24 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.7573.19936 (Exim 4.92) (envelope-from ) id 1kT6NK-00058e-NI; Thu, 
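A quick illustration of the predicate that patch 03/23 just made common, using the in-tree constants; only a port write needs no completion, since no data flows back into guest state afterwards:

    ioreq_t pio_in  = { .state = STATE_IOREQ_READY,
                        .type = IOREQ_TYPE_PIO,  .dir = IOREQ_READ  };
    ioreq_t pio_out = { .state = STATE_IOREQ_READY,
                        .type = IOREQ_TYPE_PIO,  .dir = IOREQ_WRITE };
    ioreq_t mmio_wr = { .state = STATE_IOREQ_READY,
                        .type = IOREQ_TYPE_COPY, .dir = IOREQ_WRITE };

    ASSERT(ioreq_needs_completion(&pio_in));   /* IN: result must be latched */
    ASSERT(!ioreq_needs_completion(&pio_out)); /* OUT: fire and forget */
    ASSERT(ioreq_needs_completion(&mmio_wr));  /* MMIO: emulation is re-run */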
ABdhPJz/slHPdEYDiEnJBk3d7otX848eBJ8VApE6o12yUQc60uWrA2Q3qdoeaPfm/qaplDbFmlu88w== X-Received: by 2002:a19:c857:: with SMTP id y84mr1338613lff.432.1602780296167; Thu, 15 Oct 2020 09:44:56 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Wei Liu , Julien Grall , Stefano Stabellini , Julien Grall Subject: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio() Date: Thu, 15 Oct 2020 19:44:15 +0300 Message-Id: <1602780274-29141-5-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> X-ZohoMail-DKIM: pass (identity @gmail.com) Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Oleksandr Tyshchenko The IOREQ is a common feature now and Arm will have its own implementation. But the name of the function is pretty generic and can be confusing on Arm (we already have a try_handle_mmio()). In order not to rename the function (which is used for a varying set of purposes on x86) globally and get non-confusing variant on Arm provide an alias ioreq_complete_mmio() to be used on common and Arm code. Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall Acked-by: Jan Beulich --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - new patch Changes V1 -> V2: - remove "handle" - add Jan's A-b --- xen/common/ioreq.c | 2 +- xen/include/asm-x86/hvm/ioreq.h | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index c89df7a..29ad48e 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -200,7 +200,7 @@ bool handle_hvm_io_completion(struct vcpu *v) break; =20 case HVMIO_mmio_completion: - return handle_mmio(); + return ioreq_complete_mmio(); =20 case HVMIO_pio_completion: return handle_pio(vio->io_req.addr, vio->io_req.size, diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/iore= q.h index a3d8faa..a147856 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain= *d) #define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE #define IOREQ_STATUS_RETRY X86EMUL_RETRY =20 +#define ioreq_complete_mmio handle_mmio + #endif /* __ASM_X86_HVM_IOREQ_H__ */ =20 /* --=20 2.7.4 From nobody Wed May 8 17:59:30 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1602780334; cv=none; d=zohomail.com; s=zohoarc; b=NwzOxQtjWd2FkaPgFfITpvrSs1BaFUcohDv3qz2OYb6rGENQSowUpXvXUwAuAU4yYpd0e9gjG3VFkxqXlpVBKPWBhFYIii5BBCKzlnIFYy7soPFY+jc4uphfacmK6N9Yii24PTPtbQKbr8biMnqW43RcBhs9N5RlesixnFZytBY= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1602780334; 
h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=KgNrI0jjP6nQ+Uo9tT9QhBTyECmVbDeACqoYsGAUBPM=; b=eSxq8EGRo0gyNfmRBZoMrsphEKOI8xHgWJXKE+p47fpkxResdyLyIRzqBtuJVFuxSONUlRFh1/aBI9v2e5RVnaEk+TZUnADLoftd7N74RXAlZudM/WqQgsvN04LHDd3deBBpValGjQjzbF29zrUE7098WSImsq7okhP4h+w9gY4= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1602780334801244.03927674109457; Thu, 15 Oct 2020 09:45:34 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.7574.19949 (Exim 4.92) (envelope-from ) id 1kT6NQ-0005Fc-8w; Thu, 15 Oct 2020 16:45:16 +0000 Received: by outflank-mailman (output) from mailman id 7574.19949; Thu, 15 Oct 2020 16:45:16 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6NQ-0005FT-5H; Thu, 15 Oct 2020 16:45:16 +0000 Received: by outflank-mailman (input) for mailman id 7574; Thu, 15 Oct 2020 16:45:15 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6NP-0004yr-5R for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:15 +0000 Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id ec0d2ac8-c18c-49dc-a0e7-e58aa7a9280c; Thu, 15 Oct 2020 16:44:58 +0000 (UTC) Received: by mail-lj1-x241.google.com with SMTP id c21so3880022ljj.0 for ; Thu, 15 Oct 2020 09:44:58 -0700 (PDT) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.56 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Thu, 15 Oct 2020 09:44:56 -0700 (PDT) Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6NP-0004yr-5R for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:15 +0000 Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id ec0d2ac8-c18c-49dc-a0e7-e58aa7a9280c; Thu, 15 Oct 2020 16:44:58 +0000 (UTC) Received: by mail-lj1-x241.google.com with SMTP id c21so3880022ljj.0 for ; Thu, 15 Oct 2020 09:44:58 -0700 (PDT) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.56 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Thu, 15 Oct 2020 09:44:56 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: ec0d2ac8-c18c-49dc-a0e7-e58aa7a9280c DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=KgNrI0jjP6nQ+Uo9tT9QhBTyECmVbDeACqoYsGAUBPM=; b=ARUsd/BRZI5s/0J26oyNn8wIKcKZFAYkEa6I4wJXTRpkFPlrQQwHcbEUI3Os0zKtVu 
ypzmn9T/WO7Br8v/RvWuYLid8C8xy+Z8sCQGfL6hf8lXdQmEBCTsOG1ebukCwPPDjfqI Gc5PEaYtZIXMLGKohs2D0J6/9WEEtgIT5TIK/8zQMlIMJucQxE0HDVzyUj7i78Hx5BaJ h4xXR3joTKyku6PHCeWFOtuF06OBTsMwRdOlJZa31schR18SRcGe+y44bxe0xTdeQnlT lvZN6qRGSo6i3wDz95REeelfvOVpBnNhAZGqI1o/tk7Wtdwx/EFyNR0laxKxkkC+pOKM +v9Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=KgNrI0jjP6nQ+Uo9tT9QhBTyECmVbDeACqoYsGAUBPM=; b=BBMVDfEoCQpBODiWq2aOs9PxZ+KwYODtYtPzBQ7ROcJlmXf2+RYozj/FulwIS2La95 uajuuCYBpuhx/1OBgLhyfYoItEyeRhxPyL49VGi755ooykhQnNtn0zcszMg7UIVouVUJ NIN683PPQY09ONjun4X5PBGpH7+1S2ImgMEvp33uEInCzGsNj2VJlBoFPEcTGvgI/v8+ L0uUaKCkOlRopxQkL4eeQ0kgYHmopG9pOwVRZD0Gnjcks9nidFggyxunXQaNOH7+53Fl 8s0y3dnv9LFtip9/zg8ydUwiDqyVWHBO8nO5nGj8XTH/O5nm5pyfhHNgQx+wSJLNmvdz 3BrA== X-Gm-Message-State: AOAM530NuNbZVJ1Fk/PVc1oWkZxQT5Lt/VdeUm3uX1Jl2Jj1nabZf+Jp denRCId9rr2DDEFTB8ficFUkIcdT782ljQ== X-Google-Smtp-Source: ABdhPJwknzxpiS2ZDmlrF2YihbjX8Gao6YgxxSmqJ8ySUZVR2HPG3uujUIxGTa3tZl6W/rBa9AT94g== X-Received: by 2002:a2e:9a9a:: with SMTP id p26mr1523137lji.4.1602780297235; Thu, 15 Oct 2020 09:44:57 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Wei Liu , Julien Grall , Stefano Stabellini , Julien Grall Subject: [PATCH V2 05/23] xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common Date: Thu, 15 Oct 2020 19:44:16 +0300 Message-Id: <1602780274-29141-6-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> X-ZohoMail-DKIM: pass (identity @gmail.com) Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Oleksandr Tyshchenko The IOREQ is a common feature now and these helpers will be used on Arm as is. Move them to xen/ioreq.h and replace "hvm" prefixes with "ioreq". Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall Reviewed-by: Paul Durrant --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - new patch Changes V1 -> V2: - replace "hvm" prefix by "ioreq" --- xen/arch/x86/hvm/intercept.c | 5 +++-- xen/arch/x86/hvm/stdvga.c | 4 ++-- xen/common/ioreq.c | 4 ++-- xen/include/asm-x86/hvm/io.h | 16 ---------------- xen/include/xen/ioreq.h | 16 ++++++++++++++++ 5 files changed, 23 insertions(+), 22 deletions(-) diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c index cd4c4c1..02ca3b0 100644 --- a/xen/arch/x86/hvm/intercept.c +++ b/xen/arch/x86/hvm/intercept.c @@ -17,6 +17,7 @@ * this program; If not, see . */ =20 +#include #include #include #include @@ -34,7 +35,7 @@ static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler, const ioreq_t *p) { - paddr_t first =3D hvm_mmio_first_byte(p), last; + paddr_t first =3D ioreq_mmio_first_byte(p), last; =20 BUG_ON(handler->type !=3D IOREQ_TYPE_COPY); =20 @@ -42,7 +43,7 @@ static bool_t hvm_mmio_accept(const struct hvm_io_handler= *handler, return 0; =20 /* Make sure the handler will accept the whole access. 
*/ - last =3D hvm_mmio_last_byte(p); + last =3D ioreq_mmio_last_byte(p); if ( last !=3D first && !handler->mmio.ops->check(current, last) ) domain_crash(current->domain); diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index e267513..e184664 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -524,8 +524,8 @@ static bool_t stdvga_mem_accept(const struct hvm_io_han= dler *handler, * deadlock when hvm_mmio_internal() is called from * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept(). */ - if ( (hvm_mmio_first_byte(p) < VGA_MEM_BASE) || - (hvm_mmio_last_byte(p) >=3D (VGA_MEM_BASE + VGA_MEM_SIZE)) ) + if ( (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) || + (ioreq_mmio_last_byte(p) >=3D (VGA_MEM_BASE + VGA_MEM_SIZE)) ) return 0; =20 spin_lock(&s->lock); diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 29ad48e..5fa10b6 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -1210,8 +1210,8 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(stru= ct domain *d, break; =20 case XEN_DMOP_IO_RANGE_MEMORY: - start =3D hvm_mmio_first_byte(p); - end =3D hvm_mmio_last_byte(p); + start =3D ioreq_mmio_first_byte(p); + end =3D ioreq_mmio_last_byte(p); =20 if ( rangeset_contains_range(r, start, end) ) return s; diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h index 558426b..fb64294 100644 --- a/xen/include/asm-x86/hvm/io.h +++ b/xen/include/asm-x86/hvm/io.h @@ -40,22 +40,6 @@ struct hvm_mmio_ops { hvm_mmio_write_t write; }; =20 -static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p) -{ - return unlikely(p->df) ? - p->addr - (p->count - 1ul) * p->size : - p->addr; -} - -static inline paddr_t hvm_mmio_last_byte(const ioreq_t *p) -{ - unsigned long size =3D p->size; - - return unlikely(p->df) ? - p->addr + size - 1: - p->addr + (p->count * size) - 1; -} - typedef int (*portio_action_t)( int dir, unsigned int port, unsigned int bytes, uint32_t *val); =20 diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 8e1603c..768ac94 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -24,6 +24,22 @@ struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, unsigned int id); =20 +static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p) +{ + return unlikely(p->df) ? + p->addr - (p->count - 1ul) * p->size : + p->addr; +} + +static inline paddr_t ioreq_mmio_last_byte(const ioreq_t *p) +{ + unsigned long size =3D p->size; + + return unlikely(p->df) ? 
+ p->addr + size - 1: + p->addr + (p->count * size) - 1; +} + static inline bool ioreq_needs_completion(const ioreq_t *ioreq) { return ioreq->state =3D=3D STATE_IOREQ_READY && --=20 2.7.4 From nobody Wed May 8 17:59:30 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1602780344; cv=none; d=zohomail.com; s=zohoarc; b=anDpcFXkqIqPazQtnlucvtNEcHURr7gvoSRQdCEndTO12vC6MgKLAbVcQVYaDJmGiCbuWVQE8AG6ziA01S3cDvWniIOM5qtjcTK8VCS0vFM93U0ztVqhXyup5VFJtBDvzIFECfvvsemiwFBg1HNPUlr2RbOP0lMCaWOjtUPPTnI= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1602780344; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=iOP5c373gWXxnyEQtQ6/39WWsVy1IddOyO5zW1UbPhE=; b=YXrJsdWe7ekdDpPwkLLY5vMY3Kc9/AgmOPgu/F2AMTw0ar92k7xpQOzGmH16GQEQlPgI8AyXaVqGRNFO8uL2ej1kX5C5HYWnlFDbueMisfYG4cVYVWDbUCAJDZ6ifhNjPoBWhxcTmlMX7ixapk+hYy3u43gfddU6QVo4ps+VlDI= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 160278034447711.67540234390458; Thu, 15 Oct 2020 09:45:44 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.7577.19972 (Exim 4.92) (envelope-from ) id 1kT6Na-0005RA-8N; Thu, 15 Oct 2020 16:45:26 +0000 Received: by outflank-mailman (output) from mailman id 7577.19972; Thu, 15 Oct 2020 16:45:26 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6Na-0005R1-4P; Thu, 15 Oct 2020 16:45:26 +0000 Received: by outflank-mailman (input) for mailman id 7577; Thu, 15 Oct 2020 16:45:25 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6NZ-0004yr-62 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:25 +0000 Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 8ed28b36-dc40-41de-a257-d042cfe62eca; Thu, 15 Oct 2020 16:44:59 +0000 (UTC) Received: by mail-lf1-x143.google.com with SMTP id l2so4404061lfk.0 for ; Thu, 15 Oct 2020 09:44:59 -0700 (PDT) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.57 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Thu, 15 Oct 2020 09:44:57 -0700 (PDT) Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6NZ-0004yr-62 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:25 +0000 Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 
8ed28b36-dc40-41de-a257-d042cfe62eca; Thu, 15 Oct 2020 16:44:59 +0000 (UTC) Received: by mail-lf1-x143.google.com with SMTP id l2so4404061lfk.0 for ; Thu, 15 Oct 2020 09:44:59 -0700 (PDT) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.57 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Thu, 15 Oct 2020 09:44:57 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 8ed28b36-dc40-41de-a257-d042cfe62eca DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=iOP5c373gWXxnyEQtQ6/39WWsVy1IddOyO5zW1UbPhE=; b=kFavmZh0ZbnHd5xGoKY+YOWu68vMxGRVt2WafKGrCJngOb6uDjR/OMq8tjaaYgefme okFPYz4qFaKdQXGP1H1ZMrwxVfS6tDCbNnxs4abGaoRRzyo7JJSGlQX9hTZwzp5c8Sqe hqi+5eF8BMvKFDAM7UQ/ccFiwUWkzbOD6RRKWwTJdeZqiki5sOUEcjbLn1yCck9e4Ng9 McW0wAD7DFOMr3ZN1/1cbgdMlH1lY9Vn4TeCZ33ub0RzYWKmVwpzI8gzGFdZ1Ddyut7V Pu0OTUXIemWydghO53ERVwhEJbdQooYP6gWwlfdffuRIEO/HamwQ4Wh1j7iTnx0CqUyI t/eA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=iOP5c373gWXxnyEQtQ6/39WWsVy1IddOyO5zW1UbPhE=; b=oglCZ/u/TmxiZbg8cjRj4MTvLeMp7ChN5A+vQOZGZsclcGPs4igeeFAlWSFPiYkXl/ 155XTF5ov8hHkOt6vaur7VYD47WyWjE5kKRUh7g732po41xx7I7Qj04NhkaFnhqSR6Ka S9eZ45YXlENuKTONL1BMBHKFBOxAbdc5JsxlGQvzyOs+VO7n60XVCEL3KKIfwVVBDTl7 9QZBIceP/Huai7CV8adVsaxJMguWObpt99AwRFXP3NERto5oCXNLCaLjKrR6IEWKcgjS WvuXu3N82RY7Xy3fR+mO0UwxoiO7EBCcFEDDvBUfeO0dSULezYMo/8wveVJodGPl5sNt fmNQ== X-Gm-Message-State: AOAM531R6cRg+UK8OdoW8tYRx0jF9STivDPxWC0htabVz3QW00WwrALx zha9qjqpDOJ8UnPkKICAkvKYf5GhXaBUbQ== X-Google-Smtp-Source: ABdhPJwz+YDHJwfme7KDVmcq0ojADYmPGfgSySABhBOm8lRi65o2wRiQCuvgaOPCZMbIOk8CfIv52A== X-Received: by 2002:a19:ee12:: with SMTP id g18mr1526973lfb.515.1602780298323; Thu, 15 Oct 2020 09:44:58 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Wei Liu , George Dunlap , Julien Grall , Stefano Stabellini , Julien Grall Subject: [PATCH V2 06/23] xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common Date: Thu, 15 Oct 2020 19:44:17 +0300 Message-Id: <1602780274-29141-7-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> X-ZohoMail-DKIM: pass (identity @gmail.com) Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Oleksandr Tyshchenko The IOREQ is a common feature now and these structs will be used on Arm as is. Move them to xen/ioreq.h and remove "hvm" prefixes. 
Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - new patch Changes V1 -> V2: - remove "hvm" prefix --- xen/arch/x86/hvm/emulate.c | 2 +- xen/arch/x86/hvm/stdvga.c | 2 +- xen/arch/x86/mm/p2m.c | 8 +-- xen/common/ioreq.c | 134 +++++++++++++++++++----------------= ---- xen/include/asm-x86/hvm/domain.h | 36 +---------- xen/include/asm-x86/hvm/ioreq.h | 4 +- xen/include/asm-x86/p2m.h | 8 +-- xen/include/xen/ioreq.h | 44 +++++++++++-- 8 files changed, 119 insertions(+), 119 deletions(-) diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 5700274..4746d5a 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -287,7 +287,7 @@ static int hvmemul_do_io( * However, there's no cheap approach to avoid above situations in= xen, * so the device model side needs to check the incoming ioreq even= t. */ - struct hvm_ioreq_server *s =3D NULL; + struct ioreq_server *s =3D NULL; p2m_type_t p2mt =3D p2m_invalid; =20 if ( is_mmio ) diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index e184664..bafb3f6 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler= *handler, .dir =3D IOREQ_WRITE, .data =3D data, }; - struct hvm_ioreq_server *srv; + struct ioreq_server *srv; =20 if ( !stdvga_cache_is_enabled(s) || !s->stdvga ) goto done; diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index 928344b..6102771 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -366,7 +366,7 @@ void p2m_memory_type_changed(struct domain *d) =20 int p2m_set_ioreq_server(struct domain *d, unsigned int flags, - struct hvm_ioreq_server *s) + struct ioreq_server *s) { struct p2m_domain *p2m =3D p2m_get_hostp2m(d); int rc; @@ -414,11 +414,11 @@ int p2m_set_ioreq_server(struct domain *d, return rc; } =20 -struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d, - unsigned int *flags) +struct ioreq_server *p2m_get_ioreq_server(struct domain *d, + unsigned int *flags) { struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - struct hvm_ioreq_server *s; + struct ioreq_server *s; =20 spin_lock(&p2m->ioreq.lock); =20 diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 5fa10b6..1d62d13 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -34,7 +34,7 @@ #include =20 static void set_ioreq_server(struct domain *d, unsigned int id, - struct hvm_ioreq_server *s) + struct ioreq_server *s) { ASSERT(id < MAX_NR_IOREQ_SERVERS); ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); @@ -45,8 +45,8 @@ static void set_ioreq_server(struct domain *d, unsigned i= nt id, #define GET_IOREQ_SERVER(d, id) \ (d)->arch.hvm.ioreq_server.server[id] =20 -struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, - unsigned int id) +struct ioreq_server *get_ioreq_server(const struct domain *d, + unsigned int id) { if ( id >=3D MAX_NR_IOREQ_SERVERS ) return NULL; @@ -68,7 +68,7 @@ struct hvm_ioreq_server *get_ioreq_server(const struct do= main *d, continue; \ else =20 -static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) +static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v) { shared_iopage_t *p =3D s->ioreq.va; =20 @@ -78,16 +78,16 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, s= truct vcpu *v) return &p->vcpu_ioreq[v->vcpu_id]; } =20 -static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct 
vcpu *v, - struct hvm_ioreq_server **s= rvp) +static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, + struct ioreq_server **srvp) { struct domain *d =3D v->domain; - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id; =20 FOR_EACH_IOREQ_SERVER(d, id, s) { - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; =20 list_for_each_entry ( sv, &s->ioreq_vcpu_list, @@ -110,7 +110,7 @@ bool hvm_io_pending(struct vcpu *v) return get_pending_vcpu(v, NULL); } =20 -static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) +static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) { unsigned int prev_state =3D STATE_IOREQ_NONE; unsigned int state =3D p->state; @@ -171,8 +171,8 @@ bool handle_hvm_io_completion(struct vcpu *v) { struct domain *d =3D v->domain; struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; - struct hvm_ioreq_server *s; - struct hvm_ioreq_vcpu *sv; + struct ioreq_server *s; + struct ioreq_vcpu *sv; enum hvm_io_completion io_completion; =20 if ( has_vpci(d) && vpci_process_pending(v) ) @@ -213,7 +213,7 @@ bool handle_hvm_io_completion(struct vcpu *v) return true; } =20 -static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s) +static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s) { struct domain *d =3D s->target; unsigned int i; @@ -229,7 +229,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_iore= q_server *s) return INVALID_GFN; } =20 -static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s) +static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s) { struct domain *d =3D s->target; unsigned int i; @@ -247,7 +247,7 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_serve= r *s) return hvm_alloc_legacy_ioreq_gfn(s); } =20 -static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s, +static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s, gfn_t gfn) { struct domain *d =3D s->target; @@ -265,7 +265,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_= server *s, return true; } =20 -static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn) +static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn) { struct domain *d =3D s->target; unsigned int i =3D gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base; @@ -279,9 +279,9 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server = *s, gfn_t gfn) } } =20 -static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf) { - struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; =20 if ( gfn_eq(iorp->gfn, INVALID_GFN) ) return; @@ -293,10 +293,10 @@ static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_serv= er *s, bool buf) iorp->gfn =3D INVALID_GFN; } =20 -static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf) { struct domain *d =3D s->target; - struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; int rc; =20 if ( iorp->page ) @@ -329,9 +329,9 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s= , bool buf) return rc; } =20 -static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) { - struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp =3D buf ? 
&s->bufioreq : &s->ioreq; struct page_info *page; =20 if ( iorp->page ) @@ -377,9 +377,9 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server = *s, bool buf) return -ENOMEM; } =20 -static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf) { - struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; struct page_info *page =3D iorp->page; =20 if ( !page ) @@ -396,7 +396,7 @@ static void hvm_free_ioreq_mfn(struct hvm_ioreq_server = *s, bool buf) =20 bool is_ioreq_server_page(struct domain *d, const struct page_info *page) { - const struct hvm_ioreq_server *s; + const struct ioreq_server *s; unsigned int id; bool found =3D false; =20 @@ -416,11 +416,11 @@ bool is_ioreq_server_page(struct domain *d, const str= uct page_info *page) return found; } =20 -static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf) =20 { struct domain *d =3D s->target; - struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; =20 if ( gfn_eq(iorp->gfn, INVALID_GFN) ) return; @@ -431,10 +431,10 @@ static void hvm_remove_ioreq_gfn(struct hvm_ioreq_ser= ver *s, bool buf) clear_page(iorp->va); } =20 -static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf) { struct domain *d =3D s->target; - struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; int rc; =20 if ( gfn_eq(iorp->gfn, INVALID_GFN) ) @@ -450,8 +450,8 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s= , bool buf) return rc; } =20 -static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, - struct hvm_ioreq_vcpu *sv) +static void hvm_update_ioreq_evtchn(struct ioreq_server *s, + struct ioreq_vcpu *sv) { ASSERT(spin_is_locked(&s->lock)); =20 @@ -466,13 +466,13 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_= server *s, #define HANDLE_BUFIOREQ(s) \ ((s)->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) =20 -static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, +static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, struct vcpu *v) { - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; int rc; =20 - sv =3D xzalloc(struct hvm_ioreq_vcpu); + sv =3D xzalloc(struct ioreq_vcpu); =20 rc =3D -ENOMEM; if ( !sv ) @@ -518,10 +518,10 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq= _server *s, return rc; } =20 -static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, +static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s, struct vcpu *v) { - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; =20 spin_lock(&s->lock); =20 @@ -546,9 +546,9 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ior= eq_server *s, spin_unlock(&s->lock); } =20 -static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s) { - struct hvm_ioreq_vcpu *sv, *next; + struct ioreq_vcpu *sv, *next; =20 spin_lock(&s->lock); =20 @@ -572,7 +572,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hv= m_ioreq_server *s) spin_unlock(&s->lock); } =20 -static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s) +static int hvm_ioreq_server_map_pages(struct ioreq_server *s) { int rc; =20 @@ -587,13 +587,13 @@ 
static int hvm_ioreq_server_map_pages(struct hvm_iore= q_server *s) return rc; } =20 -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_unmap_pages(struct ioreq_server *s) { hvm_unmap_ioreq_gfn(s, true); hvm_unmap_ioreq_gfn(s, false); } =20 -static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s) { int rc; =20 @@ -608,13 +608,13 @@ static int hvm_ioreq_server_alloc_pages(struct hvm_io= req_server *s) return rc; } =20 -static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_free_pages(struct ioreq_server *s) { hvm_free_ioreq_mfn(s, true); hvm_free_ioreq_mfn(s, false); } =20 -static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s) { unsigned int i; =20 @@ -622,7 +622,7 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_= ioreq_server *s) rangeset_destroy(s->range[i]); } =20 -static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, +static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s, ioservid_t id) { unsigned int i; @@ -660,9 +660,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_= ioreq_server *s, return rc; } =20 -static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_enable(struct ioreq_server *s) { - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; =20 spin_lock(&s->lock); =20 @@ -683,7 +683,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_se= rver *s) spin_unlock(&s->lock); } =20 -static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_disable(struct ioreq_server *s) { spin_lock(&s->lock); =20 @@ -699,7 +699,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_s= erver *s) spin_unlock(&s->lock); } =20 -static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, +static int hvm_ioreq_server_init(struct ioreq_server *s, struct domain *d, int bufioreq_handling, ioservid_t id) { @@ -744,7 +744,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_serve= r *s, return rc; } =20 -static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_deinit(struct ioreq_server *s) { ASSERT(!s->enabled); hvm_ioreq_server_remove_all_vcpus(s); @@ -769,14 +769,14 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_= server *s) int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, ioservid_t *id) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int i; int rc; =20 if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC ) return -EINVAL; =20 - s =3D xzalloc(struct hvm_ioreq_server); + s =3D xzalloc(struct ioreq_server); if ( !s ) return -ENOMEM; =20 @@ -824,7 +824,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufio= req_handling, =20 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; =20 spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -869,7 +869,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservi= d_t id, unsigned long *bufioreq_gfn, evtchn_port_t *bufioreq_port) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; =20 spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -914,7 +914,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservi= d_t id, int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, unsigned long idx, 
mfn_t *mfn) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; =20 ASSERT(is_hvm_domain(d)); @@ -966,7 +966,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, = ioservid_t id, uint32_t type, uint64_t start, uint64_t end) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; struct rangeset *r; int rc; =20 @@ -1018,7 +1018,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domai= n *d, ioservid_t id, uint32_t type, uint64_t start, uint64_t end) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; struct rangeset *r; int rc; =20 @@ -1069,7 +1069,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domai= n *d, ioservid_t id, int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, bool enabled) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; =20 spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -1102,7 +1102,7 @@ int hvm_set_ioreq_server_state(struct domain *d, iose= rvid_t id, =20 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id; int rc; =20 @@ -1137,7 +1137,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, = struct vcpu *v) =20 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id; =20 spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -1150,7 +1150,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain = *d, struct vcpu *v) =20 void hvm_destroy_all_ioreq_servers(struct domain *d) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id; =20 if ( !arch_hvm_ioreq_destroy(d) ) @@ -1177,10 +1177,10 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); } =20 -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) +struct ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; uint8_t type; uint64_t addr; unsigned int id; @@ -1233,10 +1233,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(st= ruct domain *d, return NULL; } =20 -static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) +static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) { struct domain *d =3D current->domain; - struct hvm_ioreq_page *iorp; + struct ioreq_page *iorp; buffered_iopage_t *pg; buf_ioreq_t bp =3D { .data =3D p->data, .addr =3D p->addr, @@ -1326,12 +1326,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq= _server *s, ioreq_t *p) return IOREQ_STATUS_HANDLED; } =20 -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, +int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, bool buffered) { struct vcpu *curr =3D current; struct domain *d =3D curr->domain; - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; =20 ASSERT(s); =20 @@ -1389,7 +1389,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_= t *proto_p, unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) { struct domain *d =3D current->domain; - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id, failed =3D 0; =20 FOR_EACH_IOREQ_SERVER(d, id, s) diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/dom= ain.h index 9d247ba..3b36c2f 100644 --- a/xen/include/asm-x86/hvm/domain.h +++ b/xen/include/asm-x86/hvm/domain.h @@ -30,40 +30,6 @@ =20 #include =20 -struct hvm_ioreq_page { - gfn_t gfn; - struct page_info *page; - 
void *va; -}; - -struct hvm_ioreq_vcpu { - struct list_head list_entry; - struct vcpu *vcpu; - evtchn_port_t ioreq_evtchn; - bool pending; -}; - -#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1) -#define MAX_NR_IO_RANGES 256 - -struct hvm_ioreq_server { - struct domain *target, *emulator; - - /* Lock to serialize toolstack modifications */ - spinlock_t lock; - - struct hvm_ioreq_page ioreq; - struct list_head ioreq_vcpu_list; - struct hvm_ioreq_page bufioreq; - - /* Lock to serialize access to buffered ioreq ring */ - spinlock_t bufioreq_lock; - evtchn_port_t bufioreq_evtchn; - struct rangeset *range[NR_IO_RANGE_TYPES]; - bool enabled; - uint8_t bufioreq_handling; -}; - #ifdef CONFIG_MEM_SHARING struct mem_sharing_domain { @@ -110,7 +76,7 @@ struct hvm_domain { /* Lock protects all other values in the sub-struct and the default */ struct { spinlock_t lock; - struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS]; + struct ioreq_server *server[MAX_NR_IOREQ_SERVERS]; } ioreq_server; =20 /* Cached CF8 for guest PCI config cycles */ diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/iore= q.h index a147856..d2d64a8 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -50,7 +50,7 @@ static inline bool arch_hvm_io_completion(enum hvm_io_com= pletion io_completion) } =20 /* Called when target domain is paused */ -static inline void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *= s) +static inline void arch_hvm_destroy_ioreq_server(struct ioreq_server *s) { p2m_set_ioreq_server(s->target, 0, s); } @@ -68,7 +68,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct= domain *d, uint32_t type, uint32_t flags) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; =20 if ( type !=3D HVMMEM_ioreq_server ) diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index 8abae34..5f7ba31 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -350,7 +350,7 @@ struct p2m_domain { * ioreq server who's responsible for the emulation of * gfns with specific p2m type(for now, p2m_ioreq_server). */ - struct hvm_ioreq_server *server; + struct ioreq_server *server; /* * flags specifies whether read, write or both operations * are to be emulated by an ioreq server. 
@@ -933,9 +933,9 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type= _t p2mt, mfn_t mfn) } =20 int p2m_set_ioreq_server(struct domain *d, unsigned int flags, - struct hvm_ioreq_server *s); -struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d, - unsigned int *flags); + struct ioreq_server *s); +struct ioreq_server *p2m_get_ioreq_server(struct domain *d, + unsigned int *flags); =20 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt, p2m_type_t ot, mfn_t nfn, mfn_t ofn, diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 768ac94..8451866 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -21,8 +21,42 @@ =20 #include =20 -struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, - unsigned int id); +struct ioreq_page { + gfn_t gfn; + struct page_info *page; + void *va; +}; + +struct ioreq_vcpu { + struct list_head list_entry; + struct vcpu *vcpu; + evtchn_port_t ioreq_evtchn; + bool pending; +}; + +#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1) +#define MAX_NR_IO_RANGES 256 + +struct ioreq_server { + struct domain *target, *emulator; + + /* Lock to serialize toolstack modifications */ + spinlock_t lock; + + struct ioreq_page ioreq; + struct list_head ioreq_vcpu_list; + struct ioreq_page bufioreq; + + /* Lock to serialize access to buffered ioreq ring */ + spinlock_t bufioreq_lock; + evtchn_port_t bufioreq_evtchn; + struct rangeset *range[NR_IO_RANGE_TYPES]; + bool enabled; + uint8_t bufioreq_handling; +}; + +struct ioreq_server *get_ioreq_server(const struct domain *d, + unsigned int id); =20 static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p) { @@ -73,9 +107,9 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, str= uct vcpu *v); void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); void hvm_destroy_all_ioreq_servers(struct domain *d); =20 -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, +struct ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p); +int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, bool buffered); unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); =20 --=20 2.7.4 From nobody Wed May 8 17:59:30 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1602780350; cv=none; d=zohomail.com; s=zohoarc; b=lbzTxpyN8pDH6L+jJ6bau5ykmBO0sEqJ+TtpVAeHLUYDhPgJ2fF43UKwZQ77iq1aVTDn8nt5fmH7L7aQ7hvlM7BKzVGFZ8QZOUSG1SiBjsCMuyBknx4/Oom2RvtaEPRPbYWqRlYXvYBCAQaf0LdLMWhjDHuhomEe7cZlN090AN8= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1602780350; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=/4N5432elpoCFCTQOFacUfSDjBxB67QedZM20URUDaY=; b=B4eqH6lvLjXaorkLZFA0V8Keyq/+ommqMN6miMBRj+DiIoQUHigRf/4NMrXUyioZIbmOlE/9Zh4m9w5Il4Rderm34xLxFIkHUfffYZcW4VTn03V1HU+GxgbLTyGysC7Az1+pvTJWzJmCI5VayVpLixMBHs6yRbkZCKR66DPSy44= 
ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1602780350420181.51871788897847; Thu, 15 Oct 2020 09:45:50 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.7579.19985 (Exim 4.92) (envelope-from ) id 1kT6Nf-0005XS-K6; Thu, 15 Oct 2020 16:45:31 +0000 Received: by outflank-mailman (output) from mailman id 7579.19985; Thu, 15 Oct 2020 16:45:31 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6Nf-0005XK-Fh; Thu, 15 Oct 2020 16:45:31 +0000 Received: by outflank-mailman (input) for mailman id 7579; Thu, 15 Oct 2020 16:45:30 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6Ne-0004yr-64 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:30 +0000 Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 7c3f3660-c6ec-4475-97e7-0649efa3f0a1; Thu, 15 Oct 2020 16:45:00 +0000 (UTC) Received: by mail-lj1-x241.google.com with SMTP id f29so3870725ljo.3 for ; Thu, 15 Oct 2020 09:45:00 -0700 (PDT) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.58 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Thu, 15 Oct 2020 09:44:58 -0700 (PDT) Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kT6Ne-0004yr-64 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:30 +0000 Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 7c3f3660-c6ec-4475-97e7-0649efa3f0a1; Thu, 15 Oct 2020 16:45:00 +0000 (UTC) Received: by mail-lj1-x241.google.com with SMTP id f29so3870725ljo.3 for ; Thu, 15 Oct 2020 09:45:00 -0700 (PDT) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.58 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Thu, 15 Oct 2020 09:44:58 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 7c3f3660-c6ec-4475-97e7-0649efa3f0a1 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=/4N5432elpoCFCTQOFacUfSDjBxB67QedZM20URUDaY=; b=eXOMmNq1BoDmsLtsgbPKkshMqC5uNq7eP/4RsyTKNtjGB2u1Onj9Rvy9osZjcYZ6lG X8V6fniQ0m9KsZbJ/ohQFnFz55Ot8p7JUpBkpFlC9P+lFK+ndFFktGZorKuy1wW5kcvD Z836JZ3kv2CRw2dWI7IMc7hHmXu35RAJUMsUsejuARTeCgJbhrXwbfJDIocfGj5oBuDk rcvMeLf65qxUwC7pGZxFfyu9C1bOpTBRvjDng4YQPPViDIOB6i3Jnoixdu11en9Lv1SQ 1TpqrJde0BzbzuzvtPVbIIvL2iDGFI+NpWWOaMZxcNVefkqqIZPkpk7ZkMEBNinQx3Iq SrIQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=/4N5432elpoCFCTQOFacUfSDjBxB67QedZM20URUDaY=; b=OY3VyiqYSYdPIGs7UVg8fb+USTLowCstxJrNRtjf6aduqyw3ue6b28dDZ5Ys4xkUOd SfUrcyO8fXAylVFsI9pLqi8umlRO5RH4YrqSxzV081MyJHZMJ2oZEG4k231lVg6cmrB2 LdAKa4gieQ6v3xagCeyfWNQPEarCsb7H1/P2sIv6c4uHp0U37lbEVHOXrqZcb3QB/acW mwkDLFmPMNb2O+V533u3C+wetDAJfps7lq68XDtZcDycqHQcgdYRf+3vNTCvpM19THbA cHAgRoi60qt/fYDP+u5LzugNyS/L8yfQ7qoYx83Ddw7L9c2m0IJIGZkgzk2IGjt26kaQ rWDg== X-Gm-Message-State: AOAM530qlLalFOnfPDdH5FX8jp7mxJowCLG/mO3pHswCEIWYIE74UQk2 2a24sd1n25kkQaPxCFt3kC73QYIcy6+AAQ== X-Google-Smtp-Source: ABdhPJyGbG38G+DinLf0m/SSKtu4fphOzSZb8YAqDDXWMGNemxvpmbSJY6E4O5YzMID/CuL1sbHY2A== X-Received: by 2002:a2e:b557:: with SMTP id a23mr1746654ljn.5.1602780299477; Thu, 15 Oct 2020 09:44:59 -0700 (PDT) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Jan Beulich , Andrew Cooper , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Wei Liu , George Dunlap , Ian Jackson , Julien Grall , Stefano Stabellini , Paul Durrant , Julien Grall Subject: [PATCH V2 07/23] xen/ioreq: Move x86's ioreq_gfn(server) to struct domain Date: Thu, 15 Oct 2020 19:44:18 +0300 Message-Id: <1602780274-29141-8-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> X-ZohoMail-DKIM: pass (identity @gmail.com) Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Oleksandr Tyshchenko The IOREQ is a common feature now and these structs will be used on Arm as is. Move them to common struct domain. This also significantly reduces the layering violation in the common code (*arch.hvm* usage). 
Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes V1 -> V2: - new patch --- xen/arch/x86/hvm/hvm.c | 12 +++---- xen/common/ioreq.c | 72 ++++++++++++++++++++----------------= ---- xen/include/asm-x86/hvm/domain.h | 15 --------- xen/include/asm-x86/hvm/ioreq.h | 4 +-- xen/include/xen/sched.h | 17 ++++++++++ 5 files changed, 61 insertions(+), 59 deletions(-) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 54e32e4..20376ce 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -4218,20 +4218,20 @@ static int hvm_set_param(struct domain *d, uint32_t= index, uint64_t value) rc =3D -EINVAL; break; case HVM_PARAM_IOREQ_SERVER_PFN: - d->arch.hvm.ioreq_gfn.base =3D value; + d->ioreq_gfn.base =3D value; break; case HVM_PARAM_NR_IOREQ_SERVER_PAGES: { unsigned int i; =20 if ( value =3D=3D 0 || - value > sizeof(d->arch.hvm.ioreq_gfn.mask) * 8 ) + value > sizeof(d->ioreq_gfn.mask) * 8 ) { rc =3D -EINVAL; break; } for ( i =3D 0; i < value; i++ ) - set_bit(i, &d->arch.hvm.ioreq_gfn.mask); + set_bit(i, &d->ioreq_gfn.mask); =20 break; } @@ -4239,11 +4239,11 @@ static int hvm_set_param(struct domain *d, uint32_t= index, uint64_t value) case HVM_PARAM_IOREQ_PFN: case HVM_PARAM_BUFIOREQ_PFN: BUILD_BUG_ON(HVM_PARAM_IOREQ_PFN > - sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8); + sizeof(d->ioreq_gfn.legacy_mask) * 8); BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN > - sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8); + sizeof(d->ioreq_gfn.legacy_mask) * 8); if ( value ) - set_bit(index, &d->arch.hvm.ioreq_gfn.legacy_mask); + set_bit(index, &d->ioreq_gfn.legacy_mask); break; =20 case HVM_PARAM_X87_FIP_WIDTH: diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 1d62d13..7f91bc2 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -37,13 +37,13 @@ static void set_ioreq_server(struct domain *d, unsigned= int id, struct ioreq_server *s) { ASSERT(id < MAX_NR_IOREQ_SERVERS); - ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); + ASSERT(!s || !d->ioreq_server.server[id]); =20 - d->arch.hvm.ioreq_server.server[id] =3D s; + d->ioreq_server.server[id] =3D s; } =20 #define GET_IOREQ_SERVER(d, id) \ - (d)->arch.hvm.ioreq_server.server[id] + (d)->ioreq_server.server[id] =20 struct ioreq_server *get_ioreq_server(const struct domain *d, unsigned int id) @@ -222,7 +222,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_se= rver *s) =20 for ( i =3D HVM_PARAM_IOREQ_PFN; i <=3D HVM_PARAM_BUFIOREQ_PFN; i++ ) { - if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) ) + if ( !test_and_clear_bit(i, &d->ioreq_gfn.legacy_mask) ) return _gfn(d->arch.hvm.params[i]); } =20 @@ -234,10 +234,10 @@ static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server = *s) struct domain *d =3D s->target; unsigned int i; =20 - for ( i =3D 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ ) + for ( i =3D 0; i < sizeof(d->ioreq_gfn.mask) * 8; i++ ) { - if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) ) - return _gfn(d->arch.hvm.ioreq_gfn.base + i); + if ( test_and_clear_bit(i, &d->ioreq_gfn.mask) ) + return _gfn(d->ioreq_gfn.base + i); } =20 /* @@ -261,21 +261,21 @@ static bool hvm_free_legacy_ioreq_gfn(struct ioreq_se= rver *s, if ( i > HVM_PARAM_BUFIOREQ_PFN ) return false; =20 - set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask); + set_bit(i, &d->ioreq_gfn.legacy_mask); return true; } =20 static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn) { struct domain *d 
=3D s->target; - unsigned int i =3D gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base; + unsigned int i =3D gfn_x(gfn) - d->ioreq_gfn.base; =20 ASSERT(!gfn_eq(gfn, INVALID_GFN)); =20 if ( !hvm_free_legacy_ioreq_gfn(s, gfn) ) { - ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8); - set_bit(i, &d->arch.hvm.ioreq_gfn.mask); + ASSERT(i < sizeof(d->ioreq_gfn.mask) * 8); + set_bit(i, &d->ioreq_gfn.mask); } } =20 @@ -400,7 +400,7 @@ bool is_ioreq_server_page(struct domain *d, const struc= t page_info *page) unsigned int id; bool found =3D false; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 FOR_EACH_IOREQ_SERVER(d, id, s) { @@ -411,7 +411,7 @@ bool is_ioreq_server_page(struct domain *d, const struc= t page_info *page) } } =20 - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); =20 return found; } @@ -781,7 +781,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufio= req_handling, return -ENOMEM; =20 domain_pause(d); - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 for ( i =3D 0; i < MAX_NR_IOREQ_SERVERS; i++ ) { @@ -809,13 +809,13 @@ int hvm_create_ioreq_server(struct domain *d, int buf= ioreq_handling, if ( id ) *id =3D i; =20 - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); domain_unpause(d); =20 return 0; =20 fail: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); domain_unpause(d); =20 xfree(s); @@ -827,7 +827,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid= _t id) struct ioreq_server *s; int rc; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 s =3D get_ioreq_server(d, id); =20 @@ -859,7 +859,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid= _t id) rc =3D 0; =20 out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); =20 return rc; } @@ -872,7 +872,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservi= d_t id, struct ioreq_server *s; int rc; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 s =3D get_ioreq_server(d, id); =20 @@ -906,7 +906,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservi= d_t id, rc =3D 0; =20 out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); =20 return rc; } @@ -919,7 +919,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioserv= id_t id, =20 ASSERT(is_hvm_domain(d)); =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 s =3D get_ioreq_server(d, id); =20 @@ -957,7 +957,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioserv= id_t id, } =20 out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); =20 return rc; } @@ -973,7 +973,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, = ioservid_t id, if ( start > end ) return -EINVAL; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 s =3D get_ioreq_server(d, id); =20 @@ -1009,7 +1009,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d= , ioservid_t id, rc =3D rangeset_add_range(r, start, end); =20 out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + 
spin_unlock_recursive(&d->ioreq_server.lock); =20 return rc; } @@ -1025,7 +1025,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domai= n *d, ioservid_t id, if ( start > end ) return -EINVAL; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 s =3D get_ioreq_server(d, id); =20 @@ -1061,7 +1061,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domai= n *d, ioservid_t id, rc =3D rangeset_remove_range(r, start, end); =20 out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); =20 return rc; } @@ -1072,7 +1072,7 @@ int hvm_set_ioreq_server_state(struct domain *d, iose= rvid_t id, struct ioreq_server *s; int rc; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 s =3D get_ioreq_server(d, id); =20 @@ -1096,7 +1096,7 @@ int hvm_set_ioreq_server_state(struct domain *d, iose= rvid_t id, rc =3D 0; =20 out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } =20 @@ -1106,7 +1106,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, = struct vcpu *v) unsigned int id; int rc; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 FOR_EACH_IOREQ_SERVER(d, id, s) { @@ -1115,7 +1115,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, = struct vcpu *v) goto fail; } =20 - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); =20 return 0; =20 @@ -1130,7 +1130,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, = struct vcpu *v) hvm_ioreq_server_remove_vcpu(s, v); } =20 - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); =20 return rc; } @@ -1140,12 +1140,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domai= n *d, struct vcpu *v) struct ioreq_server *s; unsigned int id; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 FOR_EACH_IOREQ_SERVER(d, id, s) hvm_ioreq_server_remove_vcpu(s, v); =20 - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); } =20 void hvm_destroy_all_ioreq_servers(struct domain *d) @@ -1156,7 +1156,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) if ( !arch_hvm_ioreq_destroy(d) ) return; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 /* No need to domain_pause() as the domain is being torn down */ =20 @@ -1174,7 +1174,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) xfree(s); } =20 - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); } =20 struct ioreq_server *hvm_select_ioreq_server(struct domain *d, @@ -1406,7 +1406,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buf= fered) =20 void hvm_ioreq_init(struct domain *d) { - spin_lock_init(&d->arch.hvm.ioreq_server.lock); + spin_lock_init(&d->ioreq_server.lock); =20 arch_hvm_ioreq_init(d); } diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/dom= ain.h index 3b36c2f..5d60737 100644 --- a/xen/include/asm-x86/hvm/domain.h +++ b/xen/include/asm-x86/hvm/domain.h @@ -63,22 +63,7 @@ struct hvm_pi_ops { void (*vcpu_block)(struct vcpu *); }; =20 -#define MAX_NR_IOREQ_SERVERS 8 - struct hvm_domain { - /* Guest page range used for non-default ioreq servers */ - struct { - 
unsigned long base; - unsigned long mask; /* indexed by GFN minus base */ - unsigned long legacy_mask; /* indexed by HVM param number */ - } ioreq_gfn; - - /* Lock protects all other values in the sub-struct and the default */ - struct { - spinlock_t lock; - struct ioreq_server *server[MAX_NR_IOREQ_SERVERS]; - } ioreq_server; - /* Cached CF8 for guest PCI config cycles */ uint32_t pci_cf8; =20 diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/iore= q.h index d2d64a8..0fccac5 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -77,7 +77,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct= domain *d, if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) return -EINVAL; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 s =3D get_ioreq_server(d, id); =20 @@ -92,7 +92,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct= domain *d, rc =3D p2m_set_ioreq_server(d, flags, s); =20 out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); =20 if ( rc =3D=3D 0 && flags =3D=3D 0 ) { diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index d8ed83f..78761cd 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -314,6 +314,8 @@ struct sched_unit { =20 struct evtchn_port_ops; =20 +#define MAX_NR_IOREQ_SERVERS 8 + struct domain { domid_t domain_id; @@ -521,6 +523,21 @@ struct domain /* Argo interdomain communication support */ struct argo_domain *argo; #endif + +#ifdef CONFIG_IOREQ_SERVER + /* Guest page range used for non-default ioreq servers */ + struct { + unsigned long base; + unsigned long mask; + unsigned long legacy_mask; /* indexed by HVM param number */ + } ioreq_gfn; + + /* Lock protects all other values in the sub-struct and the default */ + struct { + spinlock_t lock; + struct ioreq_server *server[MAX_NR_IOREQ_SERVERS]; + } ioreq_server; +#endif }; =20 static inline struct page_list_head *page_to_list( --=20 2.7.4 From nobody Wed May 8 17:59:30 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1602780352; cv=none; d=zohomail.com; s=zohoarc; b=CPANffjNNVLPe5Irn0WMxy66GVWUVxEPpzVJePcw28eeAfT9B/EsEqws4gyAUKjwkftj7f6xv6+6w/i5YeijUgYcNUR+0PokzBcI+YNVP5qzyWrw4IunJknxXLstH94V1q1CZfIxHsnOtZEEsbISGnXwSLLG0/mhtcgADUhiS6A= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1602780352; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=GUyEC+XSJJjDXTlfW49njL99iLeUHfZbzIxxr16Dn/Y=; b=KfHq5w/noKLywvBwE7miDZDb2JugpVCwrNj3wv1EyvbiFw8bhBUd+26F66XU+dGE1m0dInecBKT7/0Y6WdzIVWMGijwapyptBIhZOl6CrzaJyGxovEGkG3V50IRB6b6TcGWhthUYPh/56tghqTgk8jxz6lNjQGG6xU0yyARa1Pw= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= 
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V2 08/23] xen/ioreq: Introduce ioreq_params to abstract accesses to arch.hvm.params
Date: Thu, 15 Oct 2020 19:44:19 +0300
Message-Id: <1602780274-29141-9-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

We don't want to move the HVM params field out of *arch.hvm* in this
particular case: although it stores a few IOREQ params, it is not
(completely) IOREQ state and is specific to the architecture. Instead,
abstract accesses via the proposed macro.

This is a follow-up action to reduce layering violation in the common code.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch
---
 xen/common/ioreq.c               | 4 ++--
 xen/include/asm-x86/hvm/domain.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 7f91bc2..a07f1d7 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -223,7 +223,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
     for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
     {
         if ( !test_and_clear_bit(i, &d->ioreq_gfn.legacy_mask) )
-            return _gfn(d->arch.hvm.params[i]);
+            return _gfn(ioreq_params(d, i));
     }

     return INVALID_GFN;
@@ -255,7 +255,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,

     for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
     {
-        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+        if ( gfn_eq(gfn, _gfn(ioreq_params(d, i))) )
             break;
     }
     if ( i > HVM_PARAM_BUFIOREQ_PFN )
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 5d60737..c3af339 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -63,6 +63,8 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };

+#define ioreq_params(d, i) ((d)->arch.hvm.params[i])
+
 struct hvm_domain {
     /* Cached CF8 for guest PCI config cycles */
     uint32_t                pci_cf8;
--
2.7.4
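The point of the indirection is that common IOREQ code compiles against a single name while each architecture decides where the parameters actually live. A hedged sketch of how another port could satisfy the same interface (the Arm-side field below is purely illustrative, not from this series):

    /* x86, as introduced by this patch: params stay under arch.hvm. */
    #define ioreq_params(d, i) ((d)->arch.hvm.params[i])

    /*
     * A hypothetical Arm counterpart could back the same macro with its
     * own per-arch storage, leaving common callers such as
     * hvm_alloc_legacy_ioreq_gfn() untouched, e.g.:
     *
     *   #define ioreq_params(d, i) ((d)->arch.ioreq_params[i])
     */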
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Daniel De Graaf, Oleksandr Tyshchenko
Subject: [PATCH V2 09/23] xen/dm: Make x86's DM feature common
Date: Thu, 15 Oct 2020 19:44:20 +0300
Message-Id: <1602780274-29141-10-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

As a lot of x86 code can be re-used on Arm later on, this patch
splits devicemodel support into common and arch-specific parts.

The common DM feature is supposed to be built with the IOREQ_SERVER
option enabled (as well as the IOREQ feature), which is selected
for x86's config HVM for now.

Also update the XSM code a bit to let the DM op be used on Arm.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.
Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - update XSM, related changes were pulled from:
     [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - introduce xen/dm.h and move definitions here
---
 xen/arch/x86/hvm/dm.c   | 291 ++++-------------------------------------------
 xen/common/Makefile     |   1 +
 xen/common/dm.c         | 291 ++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/dm.h    |  44 ++++++++
 xen/include/xsm/dummy.h |   4 +-
 xen/include/xsm/xsm.h   |   6 +-
 xen/xsm/dummy.c         |   2 +-
 xen/xsm/flask/hooks.c   |   5 +-
 8 files changed, 364 insertions(+), 280 deletions(-)
 create mode 100644 xen/common/dm.c
 create mode 100644 xen/include/xen/dm.h

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 71f5ca4..35f860a 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -16,6 +16,7 @@

 #include
 #include
+#include <xen/dm.h>
 #include
 #include
 #include
@@ -29,13 +30,6 @@

 #include

-struct dmop_args {
-    domid_t domid;
-    unsigned int nr_bufs;
-    /* Reserve enough buf elements for all current hypercalls. */
-    struct xen_dm_op_buf buf[2];
-};
-
 static bool _raw_copy_from_guest_buf_offset(void *dst,
                                             const struct dmop_args *args,
                                             unsigned int buf_idx,
@@ -338,148 +332,20 @@ static int inject_event(struct domain *d,
     return 0;
 }

-static int dm_op(const struct dmop_args *op_args)
+int arch_dm_op(struct xen_dm_op *op, struct domain *d,
+               const struct dmop_args *op_args, bool *const_op)
 {
-    struct domain *d;
-    struct xen_dm_op op;
-    bool const_op = true;
     long rc;
-    size_t offset;
-
-    static const uint8_t op_size[] = {
-        [XEN_DMOP_create_ioreq_server]              = sizeof(struct xen_dm_op_create_ioreq_server),
-        [XEN_DMOP_get_ioreq_server_info]            = sizeof(struct xen_dm_op_get_ioreq_server_info),
-        [XEN_DMOP_map_io_range_to_ioreq_server]     = sizeof(struct xen_dm_op_ioreq_server_range),
-        [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
-        [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
-        [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
-        [XEN_DMOP_track_dirty_vram]                 = sizeof(struct xen_dm_op_track_dirty_vram),
-        [XEN_DMOP_set_pci_intx_level]               = sizeof(struct xen_dm_op_set_pci_intx_level),
-        [XEN_DMOP_set_isa_irq_level]                = sizeof(struct xen_dm_op_set_isa_irq_level),
-        [XEN_DMOP_set_pci_link_route]               = sizeof(struct xen_dm_op_set_pci_link_route),
-        [XEN_DMOP_modified_memory]                  = sizeof(struct xen_dm_op_modified_memory),
-        [XEN_DMOP_set_mem_type]                     = sizeof(struct xen_dm_op_set_mem_type),
-        [XEN_DMOP_inject_event]                     = sizeof(struct xen_dm_op_inject_event),
-        [XEN_DMOP_inject_msi]                       = sizeof(struct xen_dm_op_inject_msi),
-        [XEN_DMOP_map_mem_type_to_ioreq_server]     = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server),
-        [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
-        [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
-        [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
-    };
-
-    rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
-    if ( rc )
-        return rc;
-
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_dm_op(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    offset = offsetof(struct xen_dm_op, u);
-
-    rc = -EFAULT;
-    if ( op_args->buf[0].size < offset )
-        goto out;
-
-    if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) )
-        goto out;
-
-    if ( op.op >= ARRAY_SIZE(op_size) )
-    {
-        rc = -EOPNOTSUPP;
-        goto out;
-    }
-
-    op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size));
-
-    if ( op_args->buf[0].size < offset + op_size[op.op] )
-        goto out;
-
-    if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset,
-                                op_size[op.op]) )
-        goto out;
-
-    rc = -EINVAL;
-    if ( op.pad )
-        goto out;
-
-    switch ( op.op )
-    {
-    case XEN_DMOP_create_ioreq_server:
-    {
-        struct xen_dm_op_create_ioreq_server *data =
-            &op.u.create_ioreq_server;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->pad[0] || data->pad[1] || data->pad[2] )
-            break;
-
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
-        break;
-    }

-    case XEN_DMOP_get_ioreq_server_info:
+    switch ( op->op )
     {
-        struct xen_dm_op_get_ioreq_server_info *data =
-            &op.u.get_ioreq_server_info;
-        const uint16_t valid_flags = XEN_DMOP_no_gfns;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->flags & ~valid_flags )
-            break;
-
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->bufioreq_gfn,
-                                       &data->bufioreq_port);
-        break;
-    }
-
-    case XEN_DMOP_map_io_range_to_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.map_io_range_to_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
-        break;
-    }
-
-    case XEN_DMOP_unmap_io_range_from_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.unmap_io_range_from_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
-        break;
-    }
-
     case XEN_DMOP_map_mem_type_to_ioreq_server:
     {
         struct xen_dm_op_map_mem_type_to_ioreq_server *data =
-            &op.u.map_mem_type_to_ioreq_server;
+            &op->u.map_mem_type_to_ioreq_server;
         unsigned long first_gfn = data->opaque;

-        const_op = false;
+        *const_op = false;

         rc = -EOPNOTSUPP;
         if ( !hap_enabled(d) )
@@ -523,36 +389,10 @@ static int dm_op(const struct dmop_args *op_args)
         break;
     }

-    case XEN_DMOP_set_ioreq_server_state:
-    {
-        const struct xen_dm_op_set_ioreq_server_state *data =
-            &op.u.set_ioreq_server_state;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
-        break;
-    }
-
-    case XEN_DMOP_destroy_ioreq_server:
-    {
-        const struct xen_dm_op_destroy_ioreq_server *data =
-            &op.u.destroy_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_destroy_ioreq_server(d, data->id);
-        break;
-    }
-
     case XEN_DMOP_track_dirty_vram:
     {
         const struct xen_dm_op_track_dirty_vram *data =
-            &op.u.track_dirty_vram;
+            &op->u.track_dirty_vram;

         rc = -EINVAL;
         if ( data->pad )
@@ -568,7 +408,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_pci_intx_level:
     {
         const struct xen_dm_op_set_pci_intx_level *data =
-            &op.u.set_pci_intx_level;
+            &op->u.set_pci_intx_level;

         rc = set_pci_intx_level(d, data->domain, data->bus,
                                 data->device, data->intx,
@@ -579,7 +419,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_isa_irq_level:
     {
         const struct xen_dm_op_set_isa_irq_level *data =
-            &op.u.set_isa_irq_level;
+            &op->u.set_isa_irq_level;

         rc = set_isa_irq_level(d, data->isa_irq, data->level);
         break;
@@ -588,7 +428,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_pci_link_route:
     {
         const struct xen_dm_op_set_pci_link_route *data =
-            &op.u.set_pci_link_route;
+            &op->u.set_pci_link_route;

         rc = hvm_set_pci_link_route(d, data->link, data->isa_irq);
         break;
@@ -597,19 +437,19 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_modified_memory:
     {
         struct xen_dm_op_modified_memory *data =
-            &op.u.modified_memory;
+            &op->u.modified_memory;

         rc = modified_memory(d, op_args, data);
-        const_op = !rc;
+        *const_op = !rc;
         break;
     }

     case XEN_DMOP_set_mem_type:
     {
         struct xen_dm_op_set_mem_type *data =
-            &op.u.set_mem_type;
+            &op->u.set_mem_type;

-        const_op = false;
+        *const_op = false;

         rc = -EINVAL;
         if ( data->pad )
@@ -622,7 +462,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_inject_event:
     {
         const struct xen_dm_op_inject_event *data =
-            &op.u.inject_event;
+            &op->u.inject_event;

         rc = -EINVAL;
         if ( data->pad0 || data->pad1 )
@@ -635,7 +475,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_inject_msi:
     {
         const struct xen_dm_op_inject_msi *data =
-            &op.u.inject_msi;
+            &op->u.inject_msi;

         rc = -EINVAL;
         if ( data->pad )
@@ -648,7 +488,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_remote_shutdown:
     {
         const struct xen_dm_op_remote_shutdown *data =
-            &op.u.remote_shutdown;
+            &op->u.remote_shutdown;

         domain_shutdown(d, data->reason);
         rc = 0;
@@ -657,7 +497,7 @@ static int dm_op(const struct dmop_args *op_args)

     case XEN_DMOP_relocate_memory:
     {
-        struct xen_dm_op_relocate_memory *data = &op.u.relocate_memory;
+        struct xen_dm_op_relocate_memory *data = &op->u.relocate_memory;
         struct xen_add_to_physmap xatp = {
             .domid = op_args->domid,
             .size = data->size,
@@ -680,7 +520,7 @@ static int dm_op(const struct dmop_args *op_args)
             data->size -= rc;
             data->src_gfn += rc;
             data->dst_gfn += rc;
-            const_op = false;
+            *const_op = false;
             rc = -ERESTART;
         }
         break;
@@ -689,7 +529,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_pin_memory_cacheattr:
     {
         const struct xen_dm_op_pin_memory_cacheattr *data =
-            &op.u.pin_memory_cacheattr;
+            &op->u.pin_memory_cacheattr;

         if ( data->pad )
         {
@@ -707,97 +547,6 @@ static int dm_op(const struct dmop_args *op_args)
         break;
     }

-    if ( (!rc || rc == -ERESTART) &&
-         !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
-                                           (void *)&op.u, op_size[op.op]) )
-        rc = -EFAULT;
-
- out:
-    rcu_unlock_domain(d);
-
-    return rc;
-}
-
-#include
-
-CHECK_dm_op_create_ioreq_server;
-CHECK_dm_op_get_ioreq_server_info;
-CHECK_dm_op_ioreq_server_range;
-CHECK_dm_op_set_ioreq_server_state;
-CHECK_dm_op_destroy_ioreq_server;
-CHECK_dm_op_track_dirty_vram;
-CHECK_dm_op_set_pci_intx_level;
-CHECK_dm_op_set_isa_irq_level;
-CHECK_dm_op_set_pci_link_route;
-CHECK_dm_op_modified_memory;
-CHECK_dm_op_set_mem_type;
-CHECK_dm_op_inject_event;
-CHECK_dm_op_inject_msi;
-CHECK_dm_op_map_mem_type_to_ioreq_server;
-CHECK_dm_op_remote_shutdown;
-CHECK_dm_op_relocate_memory;
-CHECK_dm_op_pin_memory_cacheattr;
-
-int compat_dm_op(domid_t domid,
-                 unsigned int nr_bufs,
-                 XEN_GUEST_HANDLE_PARAM(void) bufs)
-{
-    struct dmop_args args;
-    unsigned int i;
-    int rc;
-
-    if ( nr_bufs > ARRAY_SIZE(args.buf) )
-        return -E2BIG;
-
-    args.domid = domid;
-    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
-
-    for ( i = 0; i < args.nr_bufs; i++ )
-    {
-        struct compat_dm_op_buf cmp;
-
-        if ( copy_from_guest_offset(&cmp, bufs, i, 1) )
-            return -EFAULT;
-
-#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
-        guest_from_compat_handle((_d_)->h, (_s_)->h)
-
-        XLAT_dm_op_buf(&args.buf[i], &cmp);
-
-#undef XLAT_dm_op_buf_HNDL_h
-    }
-
-    rc = dm_op(&args);
-
-    if ( rc == -ERESTART )
-        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
-                                           domid, nr_bufs, bufs);
-
-    return rc;
-}
-
-long do_dm_op(domid_t domid,
-              unsigned int nr_bufs,
-              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
-{
-    struct dmop_args args;
-    int rc;
-
-    if ( nr_bufs > ARRAY_SIZE(args.buf) )
-        return -E2BIG;
-
-    args.domid = domid;
-    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
-
-    if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) )
-        return -EFAULT;
-
-    rc = dm_op(&args);
-
-    if ( rc == -ERESTART )
-        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
-                                           domid, nr_bufs, bufs);
-
     return rc;
 }

diff --git a/xen/common/Makefile b/xen/common/Makefile
index cdb99fb..8c872d3 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -6,6 +6,7 @@ obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
+obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += event_2l.o
 obj-y += event_channel.o
diff --git a/xen/common/dm.c b/xen/common/dm.c
new file mode 100644
index 0000000..36e01a2
--- /dev/null
+++ b/xen/common/dm.c
@@ -0,0 +1,291 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+static int dm_op(const struct dmop_args *op_args)
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    long rc;
+    bool const_op = true;
+    const size_t offset = offsetof(struct xen_dm_op, u);
+
+    static const uint8_t op_size[] = {
+        [XEN_DMOP_create_ioreq_server]              = sizeof(struct xen_dm_op_create_ioreq_server),
+        [XEN_DMOP_get_ioreq_server_info]            = sizeof(struct xen_dm_op_get_ioreq_server_info),
+        [XEN_DMOP_map_io_range_to_ioreq_server]     = sizeof(struct xen_dm_op_ioreq_server_range),
+        [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
+        [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
+        [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
+        [XEN_DMOP_track_dirty_vram]                 = sizeof(struct xen_dm_op_track_dirty_vram),
+        [XEN_DMOP_set_pci_intx_level]               = sizeof(struct xen_dm_op_set_pci_intx_level),
+        [XEN_DMOP_set_isa_irq_level]                = sizeof(struct xen_dm_op_set_isa_irq_level),
+        [XEN_DMOP_set_pci_link_route]               = sizeof(struct xen_dm_op_set_pci_link_route),
+        [XEN_DMOP_modified_memory]                  = sizeof(struct xen_dm_op_modified_memory),
+        [XEN_DMOP_set_mem_type]                     = sizeof(struct xen_dm_op_set_mem_type),
+        [XEN_DMOP_inject_event]                     = sizeof(struct xen_dm_op_inject_event),
+        [XEN_DMOP_inject_msi]                       = sizeof(struct xen_dm_op_inject_msi),
+        [XEN_DMOP_map_mem_type_to_ioreq_server]     = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server),
+        [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
+        [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
+        [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+    };
+
+    rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
+    if ( rc )
+        return rc;
+
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    rc = -EFAULT;
+    if ( op_args->buf[0].size < offset )
+        goto out;
+
+    if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) )
+        goto out;
+
+    if ( op.op >= ARRAY_SIZE(op_size) )
+    {
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size));
+
+    if ( op_args->buf[0].size < offset + op_size[op.op] )
+        goto out;
+
+    if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset,
+                                op_size[op.op]) )
+        goto out;
+
+    rc = -EINVAL;
+    if ( op.pad )
+        goto out;
+
+    switch ( op.op )
+    {
+    case XEN_DMOP_create_ioreq_server:
+    {
+        struct xen_dm_op_create_ioreq_server *data =
+            &op.u.create_ioreq_server;
+
+        const_op = false;
+
+        rc = -EINVAL;
+        if ( data->pad[0] || data->pad[1] || data->pad[2] )
+            break;
+
+        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
+                                     &data->id);
+        break;
+    }
+
+    case XEN_DMOP_get_ioreq_server_info:
+    {
+        struct xen_dm_op_get_ioreq_server_info *data =
+            &op.u.get_ioreq_server_info;
+        const uint16_t valid_flags = XEN_DMOP_no_gfns;
+
+        const_op = false;
+
+        rc = -EINVAL;
+        if ( data->flags & ~valid_flags )
+            break;
+
+        rc = hvm_get_ioreq_server_info(d, data->id,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : (unsigned long *)&data->ioreq_gfn,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : (unsigned long *)&data->bufioreq_gfn,
+                                       &data->bufioreq_port);
+        break;
+    }
+
+    case XEN_DMOP_map_io_range_to_ioreq_server:
+    {
+        const struct xen_dm_op_ioreq_server_range *data =
+            &op.u.map_io_range_to_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
+                                              data->start, data->end);
+        break;
+    }
+
+    case XEN_DMOP_unmap_io_range_from_ioreq_server:
+    {
+        const struct xen_dm_op_ioreq_server_range *data =
+            &op.u.unmap_io_range_from_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
+                                                  data->start, data->end);
+        break;
+    }
+
+    case XEN_DMOP_set_ioreq_server_state:
+    {
+        const struct xen_dm_op_set_ioreq_server_state *data =
+            &op.u.set_ioreq_server_state;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        break;
+    }
+
+    case XEN_DMOP_destroy_ioreq_server:
+    {
+        const struct xen_dm_op_destroy_ioreq_server *data =
+            &op.u.destroy_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_destroy_ioreq_server(d, data->id);
+        break;
+    }
+
+    default:
+        rc = arch_dm_op(&op, d, op_args, &const_op);
+    }
+
+    if ( (!rc || rc == -ERESTART) &&
+         !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
+                                           (void *)&op.u, op_size[op.op]) )
+        rc = -EFAULT;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+#ifdef CONFIG_COMPAT
+#include
+
+CHECK_dm_op_create_ioreq_server;
+CHECK_dm_op_get_ioreq_server_info;
+CHECK_dm_op_ioreq_server_range;
+CHECK_dm_op_set_ioreq_server_state;
+CHECK_dm_op_destroy_ioreq_server;
+CHECK_dm_op_track_dirty_vram;
+CHECK_dm_op_set_pci_intx_level;
+CHECK_dm_op_set_isa_irq_level;
+CHECK_dm_op_set_pci_link_route;
+CHECK_dm_op_modified_memory;
+CHECK_dm_op_set_mem_type;
+CHECK_dm_op_inject_event;
+CHECK_dm_op_inject_msi;
+CHECK_dm_op_map_mem_type_to_ioreq_server;
+CHECK_dm_op_remote_shutdown;
+CHECK_dm_op_relocate_memory;
+CHECK_dm_op_pin_memory_cacheattr;
+
+int compat_dm_op(domid_t domid,
+                 unsigned int nr_bufs,
+                 XEN_GUEST_HANDLE_PARAM(void) bufs)
+{
+    struct dmop_args args;
+    unsigned int i;
+    int rc;
+
+    if ( nr_bufs > ARRAY_SIZE(args.buf) )
+        return -E2BIG;
+
+    args.domid = domid;
+    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
+
+    for ( i = 0; i < args.nr_bufs; i++ )
+    {
+        struct compat_dm_op_buf cmp;
+
+        if ( copy_from_guest_offset(&cmp, bufs, i, 1) )
+            return -EFAULT;
+
+#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
+        guest_from_compat_handle((_d_)->h, (_s_)->h)
+
+        XLAT_dm_op_buf(&args.buf[i], &cmp);
+
+#undef XLAT_dm_op_buf_HNDL_h
+    }
+
+    rc = dm_op(&args);
+
+    if ( rc == -ERESTART )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
+#endif
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct dmop_args args;
+    int rc;
+
+    if ( nr_bufs > ARRAY_SIZE(args.buf) )
+        return -E2BIG;
+
+    args.domid = domid;
+    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
+
+    if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) )
+        return -EFAULT;
+
+    rc = dm_op(&args);
+
+    if ( rc == -ERESTART )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/dm.h b/xen/include/xen/dm.h
new file mode 100644
index 0000000..ef15edf
--- /dev/null
+++ b/xen/include/xen/dm.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __XEN_DM_H__
+#define __XEN_DM_H__
+
+#include
+
+struct dmop_args {
+    domid_t domid;
+    unsigned int nr_bufs;
+    /* Reserve enough buf elements for all current hypercalls. */
+    struct xen_dm_op_buf buf[2];
+};
+
+int arch_dm_op(struct xen_dm_op *op,
+               struct domain *d,
+               const struct dmop_args *op_args,
+               bool *const_op);
+
+#endif /* __XEN_DM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 7ae3c40..5c61d8e 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -707,14 +707,14 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
     }
 }

+#endif /* CONFIG_X86 */
+
 static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }

-#endif /* CONFIG_X86 */
-
 #ifdef CONFIG_ARGO
 static XSM_INLINE int xsm_argo_enable(const struct domain *d)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 358ec13..517f78a 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -177,8 +177,8 @@ struct xsm_operations {
     int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*pmu_op) (struct domain *d, unsigned int op);
-    int (*dm_op) (struct domain *d);
 #endif
+    int (*dm_op) (struct domain *d);
     int (*xen_version) (uint32_t cmd);
     int (*domain_resource_map) (struct domain *d);
 #ifdef CONFIG_ARGO
@@ -683,13 +683,13 @@ static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int
     return xsm_ops->pmu_op(d, op);
 }

+#endif /* CONFIG_X86 */
+
 static inline int xsm_dm_op(xsm_default_t def, struct domain *d)
 {
     return xsm_ops->dm_op(d);
 }

-#endif /* CONFIG_X86 */
-
 static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
 {
     return xsm_ops->xen_version(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 9e09512..8bdffe7 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -147,8 +147,8 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, ioport_permission);
     set_to_dummy_if_null(ops, ioport_mapping);
     set_to_dummy_if_null(ops, pmu_op);
-    set_to_dummy_if_null(ops, dm_op);
 #endif
+    set_to_dummy_if_null(ops, dm_op);
     set_to_dummy_if_null(ops, xen_version);
     set_to_dummy_if_null(ops, domain_resource_map);
 #ifdef CONFIG_ARGO
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index de050cc..8f3f182 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1666,14 +1666,13 @@ static int flask_pmu_op (struct domain *d, unsigned int op)
         return -EPERM;
     }
 }
+#endif /* CONFIG_X86 */

 static int flask_dm_op(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_HVM, HVM__DM);
 }

-#endif /* CONFIG_X86 */
-
 static int flask_xen_version (uint32_t op)
 {
     u32 dsid = domain_sid(current->domain);
@@ -1875,8 +1874,8 @@ static struct xsm_operations flask_ops = {
     .ioport_permission = flask_ioport_permission,
     .ioport_mapping = flask_ioport_mapping,
     .pmu_op = flask_pmu_op,
-    .dm_op = flask_dm_op,
 #endif
+    .dm_op = flask_dm_op,
     .xen_version = flask_xen_version,
     .domain_resource_map = flask_domain_resource_map,
 #ifdef CONFIG_ARGO
--
2.7.4
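Taken together, the split is a two-stage dispatch: xen/common/dm.c owns the hypercall entry points, the guest-buffer handling and the IOREQ-server ops, and forwards anything it does not recognise through arch_dm_op(). For illustration only, a condensed sketch of that control flow (bodies elided; only names from the patch are used):

    /* Condensed view of the common dm_op() after this split. */
    static int dm_op(const struct dmop_args *op_args)
    {
        struct domain *d;
        struct xen_dm_op op;
        bool const_op = true;
        long rc;

        /* ... lock the target domain, XSM check, copy the op header in ... */

        switch ( op.op )
        {
        case XEN_DMOP_create_ioreq_server:
            /* ... IOREQ-server ops are handled here, in common code ... */
            break;

        default:
            /* Arch-only ops (inject_event, track_dirty_vram, ...). */
            rc = arch_dm_op(&op, d, op_args, &const_op);
            break;
        }

        /* ... copy op.u back to the guest unless const_op stayed true ... */
        return rc;
    }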
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V2 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
Date: Thu, 15 Oct 2020 19:44:21 +0300
Message-Id: <1602780274-29141-11-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

As the x86 implementation of XENMEM_resource_ioreq_server can be
re-used on Arm later on, this patch makes it common and removes
arch_acquire_resource() as unneeded.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - update the author of a patch
---
 xen/arch/x86/mm.c        | 44 --------------------------------------------
 xen/common/memory.c      | 45 +++++++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/mm.h |  8 --------
 xen/include/asm-x86/mm.h |  4 ----
 4 files changed, 43 insertions(+), 58 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index b5865ae..df7619d 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4591,50 +4591,6 @@ int xenmem_add_to_physmap_one(
     return rc;
 }

-int arch_acquire_resource(struct domain *d, unsigned int type,
-                          unsigned int id, unsigned long frame,
-                          unsigned int nr_frames, xen_pfn_t mfn_list[])
-{
-    int rc;
-
-    switch ( type )
-    {
-#ifdef CONFIG_HVM
-    case XENMEM_resource_ioreq_server:
-    {
-        ioservid_t ioservid = id;
-        unsigned int i;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            break;
-
-        if ( id != (unsigned int)ioservid )
-            break;
-
-        rc = 0;
-        for ( i = 0; i < nr_frames; i++ )
-        {
-            mfn_t mfn;
-
-            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
-            if ( rc )
-                break;
-
-            mfn_list[i] = mfn_x(mfn);
-        }
-        break;
-    }
-#endif
-
-    default:
-        rc = -EOPNOTSUPP;
-        break;
-    }
-
-    return rc;
-}
-
 long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 1bab0e8..83d800f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -30,6 +30,10 @@
 #include
 #include

+#ifdef CONFIG_IOREQ_SERVER
+#include
+#endif
+
 #ifdef CONFIG_X86
 #include
 #endif
@@ -1045,6 +1049,38 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
     return 0;
 }

+#ifdef CONFIG_IOREQ_SERVER
+static int acquire_ioreq_server(struct domain *d,
+                                unsigned int id,
+                                unsigned long frame,
+                                unsigned int nr_frames,
+                                xen_pfn_t mfn_list[])
+{
+    ioservid_t ioservid = id;
+    unsigned int i;
+    int rc;
+
+    if ( !is_hvm_domain(d) )
+        return -EINVAL;
+
+    if ( id != (unsigned int)ioservid )
+        return -EINVAL;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        mfn_t mfn;
+
+        rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+        if ( rc )
+            return rc;
+
+        mfn_list[i] = mfn_x(mfn);
+    }
+
+    return 0;
+}
+#endif
+
 static int acquire_resource(
     XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
 {
@@ -1103,9 +1139,14 @@ static int acquire_resource(
                                  mfn_list);
         break;

+#ifdef CONFIG_IOREQ_SERVER
+    case XENMEM_resource_ioreq_server:
+        rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames,
+                                  mfn_list);
+        break;
+#endif
     default:
-        rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame,
-                                   xmar.nr_frames, mfn_list);
+        rc = -EOPNOTSUPP;
         break;
     }

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index f8ba49b..0b7de31 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -358,14 +358,6 @@ static inline void put_page_and_type(struct page_info *page)

 void clear_and_clean_page(struct page_info *page);

-static inline
-int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
-                          unsigned long frame, unsigned int nr_frames,
-                          xen_pfn_t mfn_list[])
-{
-    return -EOPNOTSUPP;
-}
-
 unsigned int arch_get_dma_bitsize(void);

 #endif /*  __ARCH_ARM_MM__ */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index deeba75..859214e 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -639,8 +639,4 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }

-int arch_acquire_resource(struct domain *d, unsigned int type,
-                          unsigned int id, unsigned long frame,
-                          unsigned int nr_frames, xen_pfn_t mfn_list[]);
-
 #endif /* __ASM_X86_MM_H__ */
--
2.7.4
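Nothing changes for a device emulator in userspace: it still maps the ioreq-server pages through the generic resource-mapping interface, and with this patch the request is routed through the common acquire_ioreq_server() on every architecture. For illustration only, a hedged sketch of the caller side, assuming libxenforeignmemory's map-resource API (not code from this series):

    #include <sys/mman.h>
    #include <xenforeignmemory.h>
    #include <xen/memory.h>          /* XENMEM_resource_ioreq_server */

    /*
     * Sketch: map one page of an ioreq server's shared state from an
     * emulator process. Error handling is trimmed for brevity.
     */
    static void *map_ioreq_page(xenforeignmemory_handle *fmem, domid_t domid,
                                unsigned int ioservid,
                                xenforeignmemory_resource_handle **fres)
    {
        void *addr = NULL;

        /* Ends up in the common acquire_ioreq_server() added above. */
        *fres = xenforeignmemory_map_resource(fmem, domid,
                                              XENMEM_resource_ioreq_server,
                                              ioservid, 0 /* frame */,
                                              1 /* nr_frames */, &addr,
                                              PROT_READ | PROT_WRITE, 0);

        return *fres ? addr : NULL;
    }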
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Jun Nakajima, Kevin Tian, Julien Grall
Subject: [PATCH V2 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
Date: Thu, 15 Oct 2020 19:44:22 +0300
Message-Id: <1602780274-29141-12-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is a common feature now and these fields will be used on Arm
as is. Move them to the common struct vcpu as part of the new
struct vcpu_io. Also move enum hvm_io_completion to xen/sched.h and
remove the "hvm" prefixes.

This patch completely removes the layering violation in the common code.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
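The shape of the new common container, inferred from the usages in the hunks quoted further below (the declaration itself lands in xen/include/xen/sched.h, beyond the quoted portion of the diff):

    /* Inferred layout of the relocated per-vCPU I/O state. */
    enum io_completion {
        IO_no_completion,
        IO_mmio_completion,
        IO_pio_completion,
        IO_realmode_completion,   /* x86/VMX real-mode emulation only */
    };

    struct vcpu_io {
        /* Completion action for the I/O request currently in flight. */
        enum io_completion io_completion;
        /* The request itself, shared with the ioreq server. */
        ioreq_t            io_req;
    };

    /* Accessed as v->io.io_req / v->io.io_completion in the hunks below. */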
will be used on Arm as is. Move them to common struct vcpu as a part of new struct vcpu_io. Also move enum hvm_io_completion to xen/sched.h and remove "hvm" prefixes. This patch completely removes layering violation in the common code. Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" *** I was thinking that it may be better to place these two fields into struct vcpu directly (without intermediate "io" struct). I think, this way the code which operates with these fields would become cleaner. Another possible option would be either to rename "io" struct (I failed to think of a better name) or to drop(replace?) duplicating "io" prefixes from these fields. *** Changes V1 -> V2: - new patch --- xen/arch/x86/hvm/emulate.c | 50 +++++++++++++++++++----------------= ---- xen/arch/x86/hvm/hvm.c | 2 +- xen/arch/x86/hvm/io.c | 6 ++--- xen/arch/x86/hvm/svm/nestedsvm.c | 2 +- xen/arch/x86/hvm/vmx/realmode.c | 6 ++--- xen/common/ioreq.c | 14 +++++------ xen/include/asm-x86/hvm/emulate.h | 2 +- xen/include/asm-x86/hvm/ioreq.h | 4 ++-- xen/include/asm-x86/hvm/vcpu.h | 11 --------- xen/include/xen/sched.h | 17 +++++++++++++ 10 files changed, 60 insertions(+), 54 deletions(-) diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 4746d5a..f6a4eef 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -142,8 +142,8 @@ void hvmemul_cancel(struct vcpu *v) { struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; =20 - vio->io_req.state =3D STATE_IOREQ_NONE; - vio->io_completion =3D HVMIO_no_completion; + v->io.io_req.state =3D STATE_IOREQ_NONE; + v->io.io_completion =3D IO_no_completion; vio->mmio_cache_count =3D 0; vio->mmio_insn_bytes =3D 0; vio->mmio_access =3D (struct npfec){}; @@ -159,7 +159,7 @@ static int hvmemul_do_io( { struct vcpu *curr =3D current; struct domain *currd =3D curr->domain; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct vcpu_io *vio =3D &curr->io; ioreq_t p =3D { .type =3D is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO, .addr =3D addr, @@ -1854,7 +1854,7 @@ static int hvmemul_rep_movs( * cheaper than multiple round trips through the device model. Yet * when processing a response we can always re-use the translatio= n. */ - (vio->io_req.state =3D=3D STATE_IORESP_READY || + (curr->io.io_req.state =3D=3D STATE_IORESP_READY || ((!df || *reps =3D=3D 1) && PAGE_SIZE - (saddr & ~PAGE_MASK) >=3D *reps * bytes_per_rep)) ) sgpa =3D pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK); @@ -1870,7 +1870,7 @@ static int hvmemul_rep_movs( if ( vio->mmio_access.write_access && (vio->mmio_gla =3D=3D (daddr & PAGE_MASK)) && /* See comment above. */ - (vio->io_req.state =3D=3D STATE_IORESP_READY || + (curr->io.io_req.state =3D=3D STATE_IORESP_READY || ((!df || *reps =3D=3D 1) && PAGE_SIZE - (daddr & ~PAGE_MASK) >=3D *reps * bytes_per_rep)) ) dgpa =3D pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK); @@ -2007,7 +2007,7 @@ static int hvmemul_rep_stos( if ( vio->mmio_access.write_access && (vio->mmio_gla =3D=3D (addr & PAGE_MASK)) && /* See respective comment in MOVS processing. 
-         (vio->io_req.state == STATE_IORESP_READY ||
+         (curr->io.io_req.state == STATE_IORESP_READY ||
          ((!df || *reps == 1) &&
           PAGE_SIZE - (addr & ~PAGE_MASK) >= *reps * bytes_per_rep)) )
         gpa = pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK);
@@ -2613,13 +2613,13 @@ static const struct x86_emulate_ops hvm_emulate_ops_no_write = {
 };

 /*
- * Note that passing HVMIO_no_completion into this function serves as kind
+ * Note that passing IO_no_completion into this function serves as kind
  * of (but not fully) an "auto select completion" indicator. When there's
  * no completion needed, the passed in value will be ignored in any case.
  */
 static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
                             const struct x86_emulate_ops *ops,
-                            enum hvm_io_completion completion)
+                            enum io_completion completion)
 {
     const struct cpu_user_regs *regs = hvmemul_ctxt->ctxt.regs;
     struct vcpu *curr = current;
@@ -2634,11 +2634,11 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
      */
     if ( vio->cache->num_ents > vio->cache->max_ents )
     {
-        ASSERT(vio->io_req.state == STATE_IOREQ_NONE);
+        ASSERT(curr->io.io_req.state == STATE_IOREQ_NONE);
         vio->cache->num_ents = 0;
     }
     else
-        ASSERT(vio->io_req.state == STATE_IORESP_READY);
+        ASSERT(curr->io.io_req.state == STATE_IORESP_READY);

     hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn,
                               vio->mmio_insn_bytes);
@@ -2649,25 +2649,25 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     if ( rc == X86EMUL_OKAY && vio->mmio_retry )
         rc = X86EMUL_RETRY;

-    if ( !ioreq_needs_completion(&vio->io_req) )
-        completion = HVMIO_no_completion;
-    else if ( completion == HVMIO_no_completion )
-        completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
-                      hvmemul_ctxt->is_mem_access) ? HVMIO_mmio_completion
-                                                   : HVMIO_pio_completion;
+    if ( !ioreq_needs_completion(&curr->io.io_req) )
+        completion = IO_no_completion;
+    else if ( completion == IO_no_completion )
+        completion = (curr->io.io_req.type != IOREQ_TYPE_PIO ||
+                      hvmemul_ctxt->is_mem_access) ?
IO_mmio_completion
+                                                   : IO_pio_completion;

-    switch ( vio->io_completion = completion )
+    switch ( curr->io.io_completion = completion )
     {
-    case HVMIO_no_completion:
-    case HVMIO_pio_completion:
+    case IO_no_completion:
+    case IO_pio_completion:
         vio->mmio_cache_count = 0;
         vio->mmio_insn_bytes = 0;
         vio->mmio_access = (struct npfec){};
         hvmemul_cache_disable(curr);
         break;

-    case HVMIO_mmio_completion:
-    case HVMIO_realmode_completion:
+    case IO_mmio_completion:
+    case IO_realmode_completion:
         BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf));
         vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes;
         memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes);
@@ -2716,7 +2716,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,

 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
-    enum hvm_io_completion completion)
+    enum io_completion completion)
 {
     return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion);
 }
@@ -2754,7 +2754,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
                           guest_cpu_user_regs());
     ctxt.ctxt.data = &mmio_ro_ctxt;

-    switch ( rc = _hvm_emulate_one(&ctxt, ops, HVMIO_no_completion) )
+    switch ( rc = _hvm_emulate_one(&ctxt, ops, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
     case X86EMUL_UNIMPLEMENTED:
@@ -2782,7 +2782,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     {
     case EMUL_KIND_NOWRITE:
         rc = _hvm_emulate_one(&ctx, &hvm_emulate_ops_no_write,
-                              HVMIO_no_completion);
+                              IO_no_completion);
         break;
     case EMUL_KIND_SET_CONTEXT_INSN: {
         struct vcpu *curr = current;
@@ -2803,7 +2803,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
         /* Fall-through */
     default:
         ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
-        rc = hvm_emulate_one(&ctx, HVMIO_no_completion);
+        rc = hvm_emulate_one(&ctx, IO_no_completion);
     }

     switch ( rc )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 20376ce..341093b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3800,7 +3800,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
         return;
     }

-    switch ( hvm_emulate_one(&ctxt, HVMIO_no_completion) )
+    switch ( hvm_emulate_one(&ctxt, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
     case X86EMUL_UNIMPLEMENTED:
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index b220d6b..36584de 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -85,7 +85,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)

     hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs());

-    switch ( rc = hvm_emulate_one(&ctxt, HVMIO_no_completion) )
+    switch ( rc = hvm_emulate_one(&ctxt, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
         hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt, rc);
@@ -122,7 +122,7 @@ bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
 bool handle_pio(uint16_t port, unsigned int size, int dir)
 {
     struct vcpu *curr = current;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &curr->io;
     unsigned int data;
     int rc;

@@ -136,7 +136,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
     rc = hvmemul_do_pio_buffer(port, size, dir, &data);

     if ( ioreq_needs_completion(&vio->io_req) )
-        vio->io_completion = HVMIO_pio_completion;
+        vio->io_completion = IO_pio_completion;

     switch ( rc )
     {
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index fcfccf7..787d4a0 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1266,7 +1266,7 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
          * Delay the injection because this would result in delivering
          * an interrupt *within* the execution of an instruction.
          */
-        if ( v->arch.hvm.hvm_io.io_req.state != STATE_IOREQ_NONE )
+        if ( v->io.io_req.state != STATE_IOREQ_NONE )
             return hvm_intblk_shadow;

         if ( !nv->nv_vmexit_pending && n2vmcb->exit_int_info.v )
diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index 768f01e..f5832a0 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -101,7 +101,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt)

     perfc_incr(realmode_emulations);

-    rc = hvm_emulate_one(hvmemul_ctxt, HVMIO_realmode_completion);
+    rc = hvm_emulate_one(hvmemul_ctxt, IO_realmode_completion);

     if ( rc == X86EMUL_UNHANDLEABLE )
     {
@@ -188,7 +188,7 @@ void vmx_realmode(struct cpu_user_regs *regs)

         vmx_realmode_emulate_one(&hvmemul_ctxt);

-        if ( vio->io_req.state != STATE_IOREQ_NONE || vio->mmio_retry )
+        if ( curr->io.io_req.state != STATE_IOREQ_NONE || vio->mmio_retry )
             break;

         /* Stop emulating unless our segment state is not safe */
@@ -202,7 +202,7 @@
     }

     /* Need to emulate next time if we've started an IO operation */
-    if ( vio->io_req.state != STATE_IOREQ_NONE )
+    if ( curr->io.io_req.state != STATE_IOREQ_NONE )
         curr->arch.hvm.vmx.vmx_emulate = 1;

     if ( !curr->arch.hvm.vmx.vmx_emulate && !curr->arch.hvm.vmx.vmx_realmode )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index a07f1d7..57ddaaa 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -158,7 +158,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
         break;
     }

-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    p = &sv->vcpu->io.io_req;
     if ( ioreq_needs_completion(p) )
         p->data = data;

@@ -170,10 +170,10 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &v->io;
     struct ioreq_server *s;
     struct ioreq_vcpu *sv;
-    enum hvm_io_completion io_completion;
+    enum io_completion io_completion;

     if ( has_vpci(d) && vpci_process_pending(v) )
     {
@@ -192,17 +192,17 @@ bool handle_hvm_io_completion(struct vcpu *v)
     vcpu_end_shutdown_deferral(v);

     io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
+    vio->io_completion = IO_no_completion;

     switch ( io_completion )
     {
-    case HVMIO_no_completion:
+    case IO_no_completion:
         break;

-    case HVMIO_mmio_completion:
+    case IO_mmio_completion:
         return ioreq_complete_mmio();

-    case HVMIO_pio_completion:
+    case IO_pio_completion:
         return handle_pio(vio->io_req.addr, vio->io_req.size,
                           vio->io_req.dir);

diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 1620cc7..131cdf4 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -65,7 +65,7 @@ bool __nonnull(1, 2) hvm_emulate_one_insn(
     const char *descr);
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
-    enum hvm_io_completion completion);
+    enum io_completion completion);
 void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
                               unsigned int errcode);
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 0fccac5..5ed977e 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -26,11 +26,11 @@

 #include

-static inline bool arch_hvm_io_completion(enum hvm_io_completion io_completion)
+static inline bool arch_hvm_io_completion(enum io_completion io_completion)
 {
     switch ( io_completion )
     {
-    case HVMIO_realmode_completion:
+    case IO_realmode_completion:
     {
         struct hvm_emulate_ctxt ctxt;

diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c1feda..8adf455 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -28,13 +28,6 @@
 #include
 #include

-enum hvm_io_completion {
-    HVMIO_no_completion,
-    HVMIO_mmio_completion,
-    HVMIO_pio_completion,
-    HVMIO_realmode_completion
-};
-
 struct hvm_vcpu_asid {
     uint64_t generation;
     uint32_t asid;
@@ -52,10 +45,6 @@ struct hvm_mmio_cache {
 };

 struct hvm_vcpu_io {
-    /* I/O request in flight to device model. */
-    enum hvm_io_completion io_completion;
-    ioreq_t io_req;
-
     /*
      * HVM emulation:
      *  Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 78761cd..f9ce14c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -143,6 +143,19 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */

 struct waitqueue_vcpu;

+enum io_completion {
+    IO_no_completion,
+    IO_mmio_completion,
+    IO_pio_completion,
+    IO_realmode_completion
+};
+
+struct vcpu_io {
+    /* I/O request in flight to device model. */
+    enum io_completion io_completion;
+    ioreq_t io_req;
+};
+
 struct vcpu
 {
     int vcpu_id;
@@ -254,6 +267,10 @@ struct vcpu
     struct vpci_vcpu vpci;

     struct arch_vcpu arch;
+
+#ifdef CONFIG_IOREQ_SERVER
+    struct vcpu_io io;
+#endif
 };

 struct sched_unit {
-- 
2.7.4
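[Editorial note, not part of the series: a minimal sketch of how common code can reach the in-flight I/O state once it lives in struct vcpu, using only the type, field and enum names introduced by the sched.h hunk above. The helper function itself is hypothetical, and it assumes Xen's usual headers.]

#include <xen/sched.h>

/*
 * Hypothetical helper: true if @v still has an I/O request in flight
 * to the device model, or a completion left to run. It touches only
 * the new common struct vcpu_io (available under CONFIG_IOREQ_SERVER),
 * with no detour through arch.hvm.hvm_io.
 */
static bool vcpu_has_io_in_flight(const struct vcpu *v)
{
    return v->io.io_req.state != STATE_IOREQ_NONE ||
           v->io.io_completion != IO_no_completion;
}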
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Jun Nakajima, Kevin Tian, Julien Grall
Subject: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
Date: Thu, 15 Oct 2020 19:44:23 +0300
Message-Id: <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch removes the "hvm" prefixes and infixes from IOREQ-related
function names in the common code.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

---
 xen/arch/x86/hvm/emulate.c      |   6 +-
 xen/arch/x86/hvm/hvm.c          |  10 +-
 xen/arch/x86/hvm/io.c           |   6 +-
 xen/arch/x86/hvm/stdvga.c       |   4 +-
 xen/arch/x86/hvm/vmx/vvmx.c     |   2 +-
 xen/common/dm.c                 |  28 ++---
 xen/common/ioreq.c              | 240 ++++++++++++++++++++--------------------
 xen/common/memory.c             |   2 +-
 xen/include/asm-x86/hvm/ioreq.h |  16 +--
 xen/include/xen/ioreq.h         |  58 +++++-----
 10 files changed, 186 insertions(+), 186 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index f6a4eef..54cd493 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -261,7 +261,7 @@ static int hvmemul_do_io(
      * an ioreq server that can handle it.
      *
      * Rules:
-     * A> PIO or MMIO accesses run through hvm_select_ioreq_server() to
+     * A> PIO or MMIO accesses run through select_ioreq_server() to
      *    choose the ioreq server by range. If no server is found, the access
      *    is ignored.
* @@ -323,7 +323,7 @@ static int hvmemul_do_io( } =20 if ( !s ) - s =3D hvm_select_ioreq_server(currd, &p); + s =3D select_ioreq_server(currd, &p); =20 /* If there is no suitable backing DM, just ignore accesses */ if ( !s ) @@ -333,7 +333,7 @@ static int hvmemul_do_io( } else { - rc =3D hvm_send_ioreq(s, &p, 0); + rc =3D send_ioreq(s, &p, 0); if ( rc !=3D X86EMUL_RETRY || currd->is_shutting_down ) vio->io_req.state =3D STATE_IOREQ_NONE; else if ( !ioreq_needs_completion(&vio->io_req) ) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 341093b..1e788b5 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -546,7 +546,7 @@ void hvm_do_resume(struct vcpu *v) =20 pt_restore_timer(v); =20 - if ( !handle_hvm_io_completion(v) ) + if ( !handle_io_completion(v) ) return; =20 if ( unlikely(v->arch.vm_event) ) @@ -677,7 +677,7 @@ int hvm_domain_initialise(struct domain *d) register_g2m_portio_handler(d); register_vpci_portio_handler(d); =20 - hvm_ioreq_init(d); + ioreq_init(d); =20 hvm_init_guest_time(d); =20 @@ -739,7 +739,7 @@ void hvm_domain_relinquish_resources(struct domain *d) =20 viridian_domain_deinit(d); =20 - hvm_destroy_all_ioreq_servers(d); + destroy_all_ioreq_servers(d); =20 msixtbl_pt_cleanup(d); =20 @@ -1582,7 +1582,7 @@ int hvm_vcpu_initialise(struct vcpu *v) if ( rc ) goto fail5; =20 - rc =3D hvm_all_ioreq_servers_add_vcpu(d, v); + rc =3D all_ioreq_servers_add_vcpu(d, v); if ( rc !=3D 0 ) goto fail6; =20 @@ -1618,7 +1618,7 @@ void hvm_vcpu_destroy(struct vcpu *v) { viridian_vcpu_deinit(v); =20 - hvm_all_ioreq_servers_remove_vcpu(v->domain, v); + all_ioreq_servers_remove_vcpu(v->domain, v); =20 if ( hvm_altp2m_supported() ) altp2m_vcpu_destroy(v); diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index 36584de..2d03ffe 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -60,7 +60,7 @@ void send_timeoffset_req(unsigned long timeoff) if ( timeoff =3D=3D 0 ) return; =20 - if ( hvm_broadcast_ioreq(&p, true) !=3D 0 ) + if ( broadcast_ioreq(&p, true) !=3D 0 ) gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n"); } =20 @@ -74,7 +74,7 @@ void send_invalidate_req(void) .data =3D ~0UL, /* flush all */ }; =20 - if ( hvm_broadcast_ioreq(&p, false) !=3D 0 ) + if ( broadcast_ioreq(&p, false) !=3D 0 ) gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n"); } =20 @@ -155,7 +155,7 @@ bool handle_pio(uint16_t port, unsigned int size, int d= ir) * We should not advance RIP/EIP if the domain is shutting down or * if X86EMUL_RETRY has been returned by an internal handler. 
*/ - if ( curr->domain->is_shutting_down || !hvm_io_pending(curr) ) + if ( curr->domain->is_shutting_down || !io_pending(curr) ) return false; break; =20 diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index bafb3f6..cb1cc7f 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handl= er *handler, } =20 done: - srv =3D hvm_select_ioreq_server(current->domain, &p); + srv =3D select_ioreq_server(current->domain, &p); if ( !srv ) return X86EMUL_UNHANDLEABLE; =20 - return hvm_send_ioreq(srv, &p, 1); + return send_ioreq(srv, &p, 1); } =20 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler, diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index 3a37e9e..d5a17f12 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -1516,7 +1516,7 @@ void nvmx_switch_guest(void) * don't want to continue as this setup is not implemented nor support= ed * as of right now. */ - if ( hvm_io_pending(v) ) + if ( io_pending(v) ) return; /* * a softirq may interrupt us between a virtual vmentry is diff --git a/xen/common/dm.c b/xen/common/dm.c index 36e01a2..f3a8353 100644 --- a/xen/common/dm.c +++ b/xen/common/dm.c @@ -100,8 +100,8 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad[0] || data->pad[1] || data->pad[2] ) break; =20 - rc =3D hvm_create_ioreq_server(d, data->handle_bufioreq, - &data->id); + rc =3D create_ioreq_server(d, data->handle_bufioreq, + &data->id); break; } =20 @@ -117,12 +117,12 @@ static int dm_op(const struct dmop_args *op_args) if ( data->flags & ~valid_flags ) break; =20 - rc =3D hvm_get_ioreq_server_info(d, data->id, - (data->flags & XEN_DMOP_no_gfns) ? - NULL : (unsigned long *)&data->iore= q_gfn, - (data->flags & XEN_DMOP_no_gfns) ? - NULL : (unsigned long *)&data->bufi= oreq_gfn, - &data->bufioreq_port); + rc =3D get_ioreq_server_info(d, data->id, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->ioreq_gf= n, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->bufioreq= _gfn, + &data->bufioreq_port); break; } =20 @@ -135,8 +135,8 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad ) break; =20 - rc =3D hvm_map_io_range_to_ioreq_server(d, data->id, data->type, - data->start, data->end); + rc =3D map_io_range_to_ioreq_server(d, data->id, data->type, + data->start, data->end); break; } =20 @@ -149,8 +149,8 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad ) break; =20 - rc =3D hvm_unmap_io_range_from_ioreq_server(d, data->id, data->typ= e, - data->start, data->end); + rc =3D unmap_io_range_from_ioreq_server(d, data->id, data->type, + data->start, data->end); break; } =20 @@ -163,7 +163,7 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad ) break; =20 - rc =3D hvm_set_ioreq_server_state(d, data->id, !!data->enabled); + rc =3D set_ioreq_server_state(d, data->id, !!data->enabled); break; } =20 @@ -176,7 +176,7 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad ) break; =20 - rc =3D hvm_destroy_ioreq_server(d, data->id); + rc =3D destroy_ioreq_server(d, data->id); break; } =20 diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 57ddaaa..98fffae 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -58,7 +58,7 @@ struct ioreq_server *get_ioreq_server(const struct domain= *d, * Iterate over all possible ioreq servers. 
* * NOTE: The iteration is backwards such that more recently created - * ioreq servers are favoured in hvm_select_ioreq_server(). + * ioreq servers are favoured in select_ioreq_server(). * This is a semantic that previously existed when ioreq servers * were held in a linked list. */ @@ -105,12 +105,12 @@ static struct ioreq_vcpu *get_pending_vcpu(const stru= ct vcpu *v, return NULL; } =20 -bool hvm_io_pending(struct vcpu *v) +bool io_pending(struct vcpu *v) { return get_pending_vcpu(v, NULL); } =20 -static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) +static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) { unsigned int prev_state =3D STATE_IOREQ_NONE; unsigned int state =3D p->state; @@ -167,7 +167,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, iore= q_t *p) return true; } =20 -bool handle_hvm_io_completion(struct vcpu *v) +bool handle_io_completion(struct vcpu *v) { struct domain *d =3D v->domain; struct vcpu_io *vio =3D &v->io; @@ -182,7 +182,7 @@ bool handle_hvm_io_completion(struct vcpu *v) } =20 sv =3D get_pending_vcpu(v, &s); - if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) + if ( sv && !wait_for_io(sv, get_ioreq(s, v)) ) return false; =20 vio->io_req.state =3D ioreq_needs_completion(&vio->io_req) ? @@ -207,13 +207,13 @@ bool handle_hvm_io_completion(struct vcpu *v) vio->io_req.dir); =20 default: - return arch_hvm_io_completion(io_completion); + return arch_io_completion(io_completion); } =20 return true; } =20 -static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s) +static gfn_t alloc_legacy_ioreq_gfn(struct ioreq_server *s) { struct domain *d =3D s->target; unsigned int i; @@ -229,7 +229,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_se= rver *s) return INVALID_GFN; } =20 -static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s) +static gfn_t alloc_ioreq_gfn(struct ioreq_server *s) { struct domain *d =3D s->target; unsigned int i; @@ -244,11 +244,11 @@ static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server = *s) * If we are out of 'normal' GFNs then we may still have a 'legacy' * GFN available. */ - return hvm_alloc_legacy_ioreq_gfn(s); + return alloc_legacy_ioreq_gfn(s); } =20 -static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s, - gfn_t gfn) +static bool free_legacy_ioreq_gfn(struct ioreq_server *s, + gfn_t gfn) { struct domain *d =3D s->target; unsigned int i; @@ -265,21 +265,21 @@ static bool hvm_free_legacy_ioreq_gfn(struct ioreq_se= rver *s, return true; } =20 -static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn) +static void free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn) { struct domain *d =3D s->target; unsigned int i =3D gfn_x(gfn) - d->ioreq_gfn.base; =20 ASSERT(!gfn_eq(gfn, INVALID_GFN)); =20 - if ( !hvm_free_legacy_ioreq_gfn(s, gfn) ) + if ( !free_legacy_ioreq_gfn(s, gfn) ) { ASSERT(i < sizeof(d->ioreq_gfn.mask) * 8); set_bit(i, &d->ioreq_gfn.mask); } } =20 -static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf) +static void unmap_ioreq_gfn(struct ioreq_server *s, bool buf) { struct ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; =20 @@ -289,11 +289,11 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *= s, bool buf) destroy_ring_for_helper(&iorp->va, iorp->page); iorp->page =3D NULL; =20 - hvm_free_ioreq_gfn(s, iorp->gfn); + free_ioreq_gfn(s, iorp->gfn); iorp->gfn =3D INVALID_GFN; } =20 -static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf) +static int map_ioreq_gfn(struct ioreq_server *s, bool buf) { struct domain *d =3D s->target; struct ioreq_page *iorp =3D buf ? 
&s->bufioreq : &s->ioreq; @@ -303,7 +303,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bo= ol buf) { /* * If a page has already been allocated (which will happen on - * demand if hvm_get_ioreq_server_frame() is called), then + * demand if get_ioreq_server_frame() is called), then * mapping a guest frame is not permitted. */ if ( gfn_eq(iorp->gfn, INVALID_GFN) ) @@ -315,7 +315,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bo= ol buf) if ( d->is_dying ) return -EINVAL; =20 - iorp->gfn =3D hvm_alloc_ioreq_gfn(s); + iorp->gfn =3D alloc_ioreq_gfn(s); =20 if ( gfn_eq(iorp->gfn, INVALID_GFN) ) return -ENOMEM; @@ -324,12 +324,12 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, = bool buf) &iorp->va); =20 if ( rc ) - hvm_unmap_ioreq_gfn(s, buf); + unmap_ioreq_gfn(s, buf); =20 return rc; } =20 -static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) +static int alloc_ioreq_mfn(struct ioreq_server *s, bool buf) { struct ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; struct page_info *page; @@ -338,7 +338,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, = bool buf) { /* * If a guest frame has already been mapped (which may happen - * on demand if hvm_get_ioreq_server_info() is called), then + * on demand if get_ioreq_server_info() is called), then * allocating a page is not permitted. */ if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) @@ -377,7 +377,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, = bool buf) return -ENOMEM; } =20 -static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf) +static void free_ioreq_mfn(struct ioreq_server *s, bool buf) { struct ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; struct page_info *page =3D iorp->page; @@ -416,7 +416,7 @@ bool is_ioreq_server_page(struct domain *d, const struc= t page_info *page) return found; } =20 -static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf) +static void remove_ioreq_gfn(struct ioreq_server *s, bool buf) =20 { struct domain *d =3D s->target; @@ -431,7 +431,7 @@ static void hvm_remove_ioreq_gfn(struct ioreq_server *s= , bool buf) clear_page(iorp->va); } =20 -static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf) +static int add_ioreq_gfn(struct ioreq_server *s, bool buf) { struct domain *d =3D s->target; struct ioreq_page *iorp =3D buf ? 
&s->bufioreq : &s->ioreq; @@ -450,8 +450,8 @@ static int hvm_add_ioreq_gfn(struct ioreq_server *s, bo= ol buf) return rc; } =20 -static void hvm_update_ioreq_evtchn(struct ioreq_server *s, - struct ioreq_vcpu *sv) +static void update_ioreq_evtchn(struct ioreq_server *s, + struct ioreq_vcpu *sv) { ASSERT(spin_is_locked(&s->lock)); =20 @@ -466,8 +466,8 @@ static void hvm_update_ioreq_evtchn(struct ioreq_server= *s, #define HANDLE_BUFIOREQ(s) \ ((s)->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) =20 -static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, - struct vcpu *v) +static int ioreq_server_add_vcpu(struct ioreq_server *s, + struct vcpu *v) { struct ioreq_vcpu *sv; int rc; @@ -502,7 +502,7 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_serve= r *s, list_add(&sv->list_entry, &s->ioreq_vcpu_list); =20 if ( s->enabled ) - hvm_update_ioreq_evtchn(s, sv); + update_ioreq_evtchn(s, sv); =20 spin_unlock(&s->lock); return 0; @@ -518,8 +518,8 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_serve= r *s, return rc; } =20 -static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s, - struct vcpu *v) +static void ioreq_server_remove_vcpu(struct ioreq_server *s, + struct vcpu *v) { struct ioreq_vcpu *sv; =20 @@ -546,7 +546,7 @@ static void hvm_ioreq_server_remove_vcpu(struct ioreq_s= erver *s, spin_unlock(&s->lock); } =20 -static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s) +static void ioreq_server_remove_all_vcpus(struct ioreq_server *s) { struct ioreq_vcpu *sv, *next; =20 @@ -572,49 +572,49 @@ static void hvm_ioreq_server_remove_all_vcpus(struct = ioreq_server *s) spin_unlock(&s->lock); } =20 -static int hvm_ioreq_server_map_pages(struct ioreq_server *s) +static int ioreq_server_map_pages(struct ioreq_server *s) { int rc; =20 - rc =3D hvm_map_ioreq_gfn(s, false); + rc =3D map_ioreq_gfn(s, false); =20 if ( !rc && HANDLE_BUFIOREQ(s) ) - rc =3D hvm_map_ioreq_gfn(s, true); + rc =3D map_ioreq_gfn(s, true); =20 if ( rc ) - hvm_unmap_ioreq_gfn(s, false); + unmap_ioreq_gfn(s, false); =20 return rc; } =20 -static void hvm_ioreq_server_unmap_pages(struct ioreq_server *s) +static void ioreq_server_unmap_pages(struct ioreq_server *s) { - hvm_unmap_ioreq_gfn(s, true); - hvm_unmap_ioreq_gfn(s, false); + unmap_ioreq_gfn(s, true); + unmap_ioreq_gfn(s, false); } =20 -static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s) +static int ioreq_server_alloc_pages(struct ioreq_server *s) { int rc; =20 - rc =3D hvm_alloc_ioreq_mfn(s, false); + rc =3D alloc_ioreq_mfn(s, false); =20 if ( !rc && (s->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) ) - rc =3D hvm_alloc_ioreq_mfn(s, true); + rc =3D alloc_ioreq_mfn(s, true); =20 if ( rc ) - hvm_free_ioreq_mfn(s, false); + free_ioreq_mfn(s, false); =20 return rc; } =20 -static void hvm_ioreq_server_free_pages(struct ioreq_server *s) +static void ioreq_server_free_pages(struct ioreq_server *s) { - hvm_free_ioreq_mfn(s, true); - hvm_free_ioreq_mfn(s, false); + free_ioreq_mfn(s, true); + free_ioreq_mfn(s, false); } =20 -static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s) +static void ioreq_server_free_rangesets(struct ioreq_server *s) { unsigned int i; =20 @@ -622,8 +622,8 @@ static void hvm_ioreq_server_free_rangesets(struct iore= q_server *s) rangeset_destroy(s->range[i]); } =20 -static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s, - ioservid_t id) +static int ioreq_server_alloc_rangesets(struct ioreq_server *s, + ioservid_t id) { unsigned int i; int rc; @@ -655,12 +655,12 @@ static int 
hvm_ioreq_server_alloc_rangesets(struct io= req_server *s, return 0; =20 fail: - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); =20 return rc; } =20 -static void hvm_ioreq_server_enable(struct ioreq_server *s) +static void ioreq_server_enable(struct ioreq_server *s) { struct ioreq_vcpu *sv; =20 @@ -669,29 +669,29 @@ static void hvm_ioreq_server_enable(struct ioreq_serv= er *s) if ( s->enabled ) goto done; =20 - hvm_remove_ioreq_gfn(s, false); - hvm_remove_ioreq_gfn(s, true); + remove_ioreq_gfn(s, false); + remove_ioreq_gfn(s, true); =20 s->enabled =3D true; =20 list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry ) - hvm_update_ioreq_evtchn(s, sv); + update_ioreq_evtchn(s, sv); =20 done: spin_unlock(&s->lock); } =20 -static void hvm_ioreq_server_disable(struct ioreq_server *s) +static void ioreq_server_disable(struct ioreq_server *s) { spin_lock(&s->lock); =20 if ( !s->enabled ) goto done; =20 - hvm_add_ioreq_gfn(s, true); - hvm_add_ioreq_gfn(s, false); + add_ioreq_gfn(s, true); + add_ioreq_gfn(s, false); =20 s->enabled =3D false; =20 @@ -699,9 +699,9 @@ static void hvm_ioreq_server_disable(struct ioreq_serve= r *s) spin_unlock(&s->lock); } =20 -static int hvm_ioreq_server_init(struct ioreq_server *s, - struct domain *d, int bufioreq_handling, - ioservid_t id) +static int ioreq_server_init(struct ioreq_server *s, + struct domain *d, int bufioreq_handling, + ioservid_t id) { struct domain *currd =3D current->domain; struct vcpu *v; @@ -719,7 +719,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s, s->ioreq.gfn =3D INVALID_GFN; s->bufioreq.gfn =3D INVALID_GFN; =20 - rc =3D hvm_ioreq_server_alloc_rangesets(s, id); + rc =3D ioreq_server_alloc_rangesets(s, id); if ( rc ) return rc; =20 @@ -727,7 +727,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s, =20 for_each_vcpu ( d, v ) { - rc =3D hvm_ioreq_server_add_vcpu(s, v); + rc =3D ioreq_server_add_vcpu(s, v); if ( rc ) goto fail_add; } @@ -735,39 +735,39 @@ static int hvm_ioreq_server_init(struct ioreq_server = *s, return 0; =20 fail_add: - hvm_ioreq_server_remove_all_vcpus(s); - hvm_ioreq_server_unmap_pages(s); + ioreq_server_remove_all_vcpus(s); + ioreq_server_unmap_pages(s); =20 - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); =20 put_domain(s->emulator); return rc; } =20 -static void hvm_ioreq_server_deinit(struct ioreq_server *s) +static void ioreq_server_deinit(struct ioreq_server *s) { ASSERT(!s->enabled); - hvm_ioreq_server_remove_all_vcpus(s); + ioreq_server_remove_all_vcpus(s); =20 /* - * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and - * hvm_ioreq_server_free_pages() in that order. + * NOTE: It is safe to call both ioreq_server_unmap_pages() and + * ioreq_server_free_pages() in that order. * This is because the former will do nothing if the pages * are not mapped, leaving the page to be freed by the latter. * However if the pages are mapped then the former will set * the page_info pointer to NULL, meaning the latter will do * nothing. 
*/ - hvm_ioreq_server_unmap_pages(s); - hvm_ioreq_server_free_pages(s); + ioreq_server_unmap_pages(s); + ioreq_server_free_pages(s); =20 - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); =20 put_domain(s->emulator); } =20 -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id) +int create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id) { struct ioreq_server *s; unsigned int i; @@ -795,11 +795,11 @@ int hvm_create_ioreq_server(struct domain *d, int buf= ioreq_handling, =20 /* * It is safe to call set_ioreq_server() prior to - * hvm_ioreq_server_init() since the target domain is paused. + * ioreq_server_init() since the target domain is paused. */ set_ioreq_server(d, i, s); =20 - rc =3D hvm_ioreq_server_init(s, d, bufioreq_handling, i); + rc =3D ioreq_server_init(s, d, bufioreq_handling, i); if ( rc ) { set_ioreq_server(d, i, NULL); @@ -822,7 +822,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufio= req_handling, return rc; } =20 -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +int destroy_ioreq_server(struct domain *d, ioservid_t id) { struct ioreq_server *s; int rc; @@ -841,15 +841,15 @@ int hvm_destroy_ioreq_server(struct domain *d, ioserv= id_t id) =20 domain_pause(d); =20 - arch_hvm_destroy_ioreq_server(s); + arch_destroy_ioreq_server(s); =20 - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); =20 /* - * It is safe to call hvm_ioreq_server_deinit() prior to + * It is safe to call ioreq_server_deinit() prior to * set_ioreq_server() since the target domain is paused. */ - hvm_ioreq_server_deinit(s); + ioreq_server_deinit(s); set_ioreq_server(d, id, NULL); =20 domain_unpause(d); @@ -864,10 +864,10 @@ int hvm_destroy_ioreq_server(struct domain *d, ioserv= id_t id) return rc; } =20 -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port) +int get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) { struct ioreq_server *s; int rc; @@ -886,7 +886,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservi= d_t id, =20 if ( ioreq_gfn || bufioreq_gfn ) { - rc =3D hvm_ioreq_server_map_pages(s); + rc =3D ioreq_server_map_pages(s); if ( rc ) goto out; } @@ -911,8 +911,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservi= d_t id, return rc; } =20 -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn) +int get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn) { struct ioreq_server *s; int rc; @@ -931,7 +931,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioserv= id_t id, if ( s->emulator !=3D current->domain ) goto out; =20 - rc =3D hvm_ioreq_server_alloc_pages(s); + rc =3D ioreq_server_alloc_pages(s); if ( rc ) goto out; =20 @@ -962,9 +962,9 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioserv= id_t id, return rc; } =20 -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) +int map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) { struct ioreq_server *s; struct rangeset *r; @@ -1014,9 +1014,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d= , ioservid_t id, return rc; } =20 -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, 
- uint64_t end) +int unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) { struct ioreq_server *s; struct rangeset *r; @@ -1066,8 +1066,8 @@ int hvm_unmap_io_range_from_ioreq_server(struct domai= n *d, ioservid_t id, return rc; } =20 -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled) +int set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled) { struct ioreq_server *s; int rc; @@ -1087,9 +1087,9 @@ int hvm_set_ioreq_server_state(struct domain *d, iose= rvid_t id, domain_pause(d); =20 if ( enabled ) - hvm_ioreq_server_enable(s); + ioreq_server_enable(s); else - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); =20 domain_unpause(d); =20 @@ -1100,7 +1100,7 @@ int hvm_set_ioreq_server_state(struct domain *d, iose= rvid_t id, return rc; } =20 -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) +int all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) { struct ioreq_server *s; unsigned int id; @@ -1110,7 +1110,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, = struct vcpu *v) =20 FOR_EACH_IOREQ_SERVER(d, id, s) { - rc =3D hvm_ioreq_server_add_vcpu(s, v); + rc =3D ioreq_server_add_vcpu(s, v); if ( rc ) goto fail; } @@ -1127,7 +1127,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, = struct vcpu *v) if ( !s ) continue; =20 - hvm_ioreq_server_remove_vcpu(s, v); + ioreq_server_remove_vcpu(s, v); } =20 spin_unlock_recursive(&d->ioreq_server.lock); @@ -1135,7 +1135,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, = struct vcpu *v) return rc; } =20 -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) +void all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) { struct ioreq_server *s; unsigned int id; @@ -1143,17 +1143,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domai= n *d, struct vcpu *v) spin_lock_recursive(&d->ioreq_server.lock); =20 FOR_EACH_IOREQ_SERVER(d, id, s) - hvm_ioreq_server_remove_vcpu(s, v); + ioreq_server_remove_vcpu(s, v); =20 spin_unlock_recursive(&d->ioreq_server.lock); } =20 -void hvm_destroy_all_ioreq_servers(struct domain *d) +void destroy_all_ioreq_servers(struct domain *d) { struct ioreq_server *s; unsigned int id; =20 - if ( !arch_hvm_ioreq_destroy(d) ) + if ( !arch_ioreq_destroy(d) ) return; =20 spin_lock_recursive(&d->ioreq_server.lock); @@ -1162,13 +1162,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) =20 FOR_EACH_IOREQ_SERVER(d, id, s) { - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); =20 /* - * It is safe to call hvm_ioreq_server_deinit() prior to + * It is safe to call ioreq_server_deinit() prior to * set_ioreq_server() since the target domain is being destroyed. 
*/ - hvm_ioreq_server_deinit(s); + ioreq_server_deinit(s); set_ioreq_server(d, id, NULL); =20 xfree(s); @@ -1177,15 +1177,15 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) spin_unlock_recursive(&d->ioreq_server.lock); } =20 -struct ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) +struct ioreq_server *select_ioreq_server(struct domain *d, + ioreq_t *p) { struct ioreq_server *s; uint8_t type; uint64_t addr; unsigned int id; =20 - if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) ) + if ( ioreq_server_get_type_addr(d, p, &type, &addr) ) return NULL; =20 FOR_EACH_IOREQ_SERVER(d, id, s) @@ -1233,7 +1233,7 @@ struct ioreq_server *hvm_select_ioreq_server(struct d= omain *d, return NULL; } =20 -static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) +static int send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) { struct domain *d =3D current->domain; struct ioreq_page *iorp; @@ -1326,8 +1326,8 @@ static int hvm_send_buffered_ioreq(struct ioreq_serve= r *s, ioreq_t *p) return IOREQ_STATUS_HANDLED; } =20 -int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, - bool buffered) +int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, + bool buffered) { struct vcpu *curr =3D current; struct domain *d =3D curr->domain; @@ -1336,7 +1336,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *p= roto_p, ASSERT(s); =20 if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); + return send_buffered_ioreq(s, proto_p); =20 if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) return IOREQ_STATUS_RETRY; @@ -1386,7 +1386,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *p= roto_p, return IOREQ_STATUS_UNHANDLED; } =20 -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +unsigned int broadcast_ioreq(ioreq_t *p, bool buffered) { struct domain *d =3D current->domain; struct ioreq_server *s; @@ -1397,18 +1397,18 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool b= uffered) if ( !s->enabled ) continue; =20 - if ( hvm_send_ioreq(s, p, buffered) =3D=3D IOREQ_STATUS_UNHANDLED ) + if ( send_ioreq(s, p, buffered) =3D=3D IOREQ_STATUS_UNHANDLED ) failed++; } =20 return failed; } =20 -void hvm_ioreq_init(struct domain *d) +void ioreq_init(struct domain *d) { spin_lock_init(&d->ioreq_server.lock); =20 - arch_hvm_ioreq_init(d); + arch_ioreq_init(d); } =20 /* diff --git a/xen/common/memory.c b/xen/common/memory.c index 83d800f..cf53ca3 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -1070,7 +1070,7 @@ static int acquire_ioreq_server(struct domain *d, { mfn_t mfn; =20 - rc =3D hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); + rc =3D get_ioreq_server_frame(d, id, frame + i, &mfn); if ( rc ) return rc; =20 diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/iore= q.h index 5ed977e..1340441 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -26,7 +26,7 @@ =20 #include =20 -static inline bool arch_hvm_io_completion(enum io_completion io_completion) +static inline bool arch_io_completion(enum io_completion io_completion) { switch ( io_completion ) { @@ -50,7 +50,7 @@ static inline bool arch_hvm_io_completion(enum io_complet= ion io_completion) } =20 /* Called when target domain is paused */ -static inline void arch_hvm_destroy_ioreq_server(struct ioreq_server *s) +static inline void arch_destroy_ioreq_server(struct ioreq_server *s) { p2m_set_ioreq_server(s->target, 0, s); } @@ -105,10 +105,10 @@ static inline int hvm_map_mem_type_to_ioreq_server(st= ruct domain *d, return rc; 
} =20 -static inline int hvm_ioreq_server_get_type_addr(const struct domain *d, - const ioreq_t *p, - uint8_t *type, - uint64_t *addr) +static inline int ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr) { uint32_t cf8 =3D d->arch.hvm.pci_cf8; =20 @@ -164,12 +164,12 @@ static inline int hvm_access_cf8( return X86EMUL_UNHANDLEABLE; } =20 -static inline void arch_hvm_ioreq_init(struct domain *d) +static inline void arch_ioreq_init(struct domain *d) { register_portio_handler(d, 0xcf8, 4, hvm_access_cf8); } =20 -static inline bool arch_hvm_ioreq_destroy(struct domain *d) +static inline bool arch_ioreq_destroy(struct domain *d) { if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) ) return false; diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 8451866..7b03ab5 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -81,39 +81,39 @@ static inline bool ioreq_needs_completion(const ioreq_t= *ioreq) (ioreq->type !=3D IOREQ_TYPE_PIO || ioreq->dir !=3D IOREQ_WRITE= ); } =20 -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); +bool io_pending(struct vcpu *v); +bool handle_io_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page); =20 -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, +int create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id); +int destroy_ioreq_server(struct domain *d, ioservid_t id); +int get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port); +int get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn); +int map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, uint32_t type, uint64_t start, uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); -void hvm_destroy_all_ioreq_servers(struct domain *d); - -struct ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, - bool buffered); -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); - -void hvm_ioreq_init(struct domain *d); +int set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled); + +int all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); +void all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); +void destroy_all_ioreq_servers(struct domain *d); + +struct ioreq_server *select_ioreq_server(struct domain *d, + ioreq_t *p); +int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, + bool buffered); +unsigned int broadcast_ioreq(ioreq_t *p, 
bool buffered);
+
+void ioreq_init(struct domain *d);

 #endif /* __XEN_IOREQ_H__ */

-- 
2.7.4
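[Editorial note, not part of the series: a minimal sketch of the renamed common API in use. select_ioreq_server(), send_ioreq() and the IOREQ_STATUS_* values come from the hunks above; the wrapper function itself is hypothetical and mirrors the flow in hvmemul_do_io().]

#include <xen/ioreq.h>

/*
 * Hypothetical wrapper: pick the ioreq server registered for this
 * request's type/range, then forward the request to it.
 */
static int forward_to_emulator(struct domain *d, ioreq_t *p)
{
    struct ioreq_server *s = select_ioreq_server(d, p);

    /* No suitable backing device model: report the access unhandled. */
    if ( !s )
        return IOREQ_STATUS_UNHANDLED;

    /* Synchronous (unbuffered) request, as for ordinary MMIO/PIO. */
    return send_ioreq(s, p, false);
}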
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Paul Durrant, Julien Grall
Subject: [PATCH V2 13/23] xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
Date: Thu, 15 Oct 2020 19:44:24 +0300
Message-Id: <1602780274-29141-14-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

The cmpxchg() in hvm_send_buffered_ioreq() operates on memory shared
with the emulator domain (and with the target domain if the legacy
interface is used). To be on the safe side we need to switch to
guest_cmpxchg64() to prevent a domain from DoSing Xen on Arm.

As there is no plan to support the legacy interface on Arm, the page
will be mapped in a single domain at a time, so we can safely use
s->emulator in guest_cmpxchg64(). Thankfully the only user of the
legacy interface so far is x86, so there is no concern regarding the
atomic operations there.

Please note that the legacy interface *must* not be used on Arm
without revisiting the code.
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - move earlier to avoid breaking arm32 compilation
   - add an explanation to commit description and hvm_allow_set_param()
   - pass s->emulator
---
 xen/arch/arm/hvm.c | 4 ++++
 xen/common/ioreq.c | 3 ++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8951b34..9694e5a 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -31,6 +31,10 @@
 
 #include
 
+/*
+ * The legacy interface (which involves magic IOREQ pages) *must* not be used
+ * without revisiting the code.
+ */
 static int hvm_allow_set_param(const struct domain *d, unsigned int param)
 {
     switch ( param )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 98fffae..8612159 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -28,6 +28,7 @@
 #include
 #include
 
+#include
 #include
 
 #include
@@ -1317,7 +1318,7 @@ static int send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
 
         new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
         new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        cmpxchg(&pg->ptrs.full, old.full, new.full);
+        guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full);
     }
 
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
-- 
2.7.4
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Paul Durrant, Oleksandr Tyshchenko
Subject: [PATCH V2 14/23] arm/ioreq: Introduce arch specific bits for IOREQ/DM features
Date: Thu, 15 Oct 2020 19:44:25 +0300
Message-Id: <1602780274-29141-15-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch adds basic IOREQ/DM support on Arm. The subsequent patches
will improve functionality and add remaining bits.

The IOREQ/DM features are supposed to be built with the IOREQ_SERVER
option enabled, which is disabled by default on Arm for now.

Please note, the "PIO handling" TODO is expected to be left unaddressed
for the current series. It is not a big issue for now while Xen doesn't
have support for vPCI on Arm. On Arm64 PIO accesses are only used for
PCI IO BARs, and we would probably want to expose them to the emulator
as PIO accesses to make a DM completely arch-agnostic. So "PIO handling"
should be implemented when we add support for vPCI.
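One detail worth calling out in the diff below is the sign extension
handle_ioserv() performs for narrow signed loads. A minimal,
self-contained sketch of that arithmetic (sign_extend and the test
value are hypothetical, not part of the patch):

#include <stdint.h>
#include <stdio.h>

/* Widen an n-bit two's-complement value to 64 bits, mirroring the
 * "r |= ~0UL << size" step in handle_ioserv(). "size" is in bits. */
static uint64_t sign_extend(uint64_t r, unsigned int size)
{
    if ( size < 64 && (r & (UINT64_C(1) << (size - 1))) )
        r |= ~UINT64_C(0) << size;
    return r;
}

int main(void)
{
    /* An 8-bit read returning 0x80 (-128) becomes 0xffffffffffffff80. */
    printf("%#llx\n", (unsigned long long)sign_extend(0x80, 8));
    return 0;
}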
Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was split into:
     - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
     - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
   - update patch description
   - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions:
     - arch_hvm_destroy_ioreq_server()
     - arch_handle_hvm_io_completion()
   - update arch files to include xen/ioreq.h
   - remove HVMOP plumbing
   - rewrite a logic to handle properly case when hvm_send_ioreq() returns IO_RETRY
   - add a logic to handle properly handle_hvm_io_completion() return value
   - rename handle_mmio() to ioreq_handle_complete_mmio()
   - move paging_mark_pfn_dirty() to asm-arm/paging.h
   - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h
   - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER
   - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h
   - use gdprintk in try_fwd_ioserv(), remove unneeded prints
   - update list of #include-s
   - move has_vpci() to asm-arm/domain.h
   - add a comment (TODO) to unimplemented yet handle_pio()
   - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs
     from the arch files, they were already moved to the common code
   - remove set_foreign_p2m_entry() changes, they will be properly implemented
     in the follow-up patch
   - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig
   - remove x86's realmode and other unneeded stubs from xen/ioreq.h
   - clarify ioreq_t p.df usage in try_fwd_ioserv()
   - set ioreq_t p.count to 1 in try_fwd_ioserv()

Changes V1 -> V2:
   - was split into:
     - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
     - xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
   - update the author of a patch
   - update patch description
   - move a loop in leave_hypervisor_to_guest() to a separate patch
   - set IOREQ_SERVER disabled by default
   - remove already clarified /* XXX */
   - replace BUG() by ASSERT_UNREACHABLE() in handle_pio()
   - remove default case for handling the return value of try_handle_mmio()
   - remove struct hvm_domain, enum hvm_io_completion, struct hvm_vcpu_io,
     struct hvm_vcpu from asm-arm/domain.h, these are common materials now
   - update everything according to the recent changes (IOREQ related function
     names don't contain "hvm" prefixes/infixes anymore, IOREQ related fields
     are part of common struct vcpu/domain now, etc)
---
 xen/arch/arm/Makefile           |   2 +
 xen/arch/arm/dm.c               |  34 ++++++++++
 xen/arch/arm/domain.c           |   9 +++
 xen/arch/arm/io.c               |  11 +++-
 xen/arch/arm/ioreq.c            | 141 ++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c            |  13 ++++
 xen/common/ioreq.c              |   1 +
 xen/include/asm-arm/domain.h    |   5 ++
 xen/include/asm-arm/hvm/ioreq.h | 109 +++++++++++++++++++++++++++++
 xen/include/asm-arm/mmio.h      |   1 +
 xen/include/asm-arm/paging.h    |   4 ++
 11 files changed, 329 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/dm.c
 create mode 100644 xen/arch/arm/ioreq.c
 create mode 100644 xen/include/asm-arm/hvm/ioreq.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e6..c3ff454 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -13,6 +13,7 @@ obj-y += cpuerrata.o
 obj-y += cpufeature.o
 obj-y += decode.o
 obj-y += device.o
+obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += domain_build.init.o
 obj-y += domctl.o
@@ -27,6 +28,7 @@ obj-y += guest_atomics.o
 obj-y += guest_walk.o
 obj-y += hvm.o
 obj-y += io.o
+obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
 obj-y += irq.o
 obj-y += kernel.init.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
new file mode 100644
index 0000000..5d3da37
--- /dev/null
+++ b/xen/arch/arm/dm.c
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include
+#include
+
+int arch_dm_op(struct xen_dm_op *op, struct domain *d,
+               const struct dmop_args *op_args, bool *const_op)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 3b37f89..ba9b1fb 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -694,6 +695,10 @@ int arch_domain_create(struct domain *d,
 
     ASSERT(config != NULL);
 
+#ifdef CONFIG_IOREQ_SERVER
+    ioreq_init(d);
+#endif
+
     /* p2m_init relies on some value initialized by the IOMMU subsystem */
     if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 )
         goto fail;
@@ -1012,6 +1017,10 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+#ifdef CONFIG_IOREQ_SERVER
+        destroy_all_ioreq_servers(d);
+#endif
+
     PROGRESS(xen):
         ret = relinquish_memory(d, &d->xenpage_list);
         if ( ret )
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index ae7ef96..f44cfd4 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 
 #include "decode.h"
 
@@ -123,7 +124,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
 
     handler = find_mmio_handler(v->domain, info.gpa);
     if ( !handler )
-        return IO_UNHANDLED;
+    {
+        int rc;
+
+        rc = try_fwd_ioserv(regs, v, &info);
+        if ( rc == IO_HANDLED )
+            return handle_ioserv(regs, v);
+
+        return rc;
+    }
 
     /* All the instructions used on emulated MMIO region should be valid */
     if ( !dabt.valid )
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
new file mode 100644
index 0000000..da5ceac
--- /dev/null
+++ b/xen/arch/arm/ioreq.c
@@ -0,0 +1,141 @@
+/*
+ * arm/ioreq.c: hardware virtual machine I/O emulation
+ *
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include
+#include
+
+#include
+
+#include
+
+enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
+{
+    const union hsr hsr = { .bits = regs->hsr };
+    const struct hsr_dabt dabt = hsr.dabt;
+    /* Code is similar to handle_read */
+    uint8_t size = (1 << dabt.size) * 8;
+    register_t r = v->io.io_req.data;
+
+    /* We are done with the IO */
+    v->io.io_req.state = STATE_IOREQ_NONE;
+
+    if ( dabt.write )
+        return IO_HANDLED;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    set_user_reg(regs, dabt.reg, r);
+
+    return IO_HANDLED;
+}
+
+enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                             struct vcpu *v, mmio_info_t *info)
+{
+    struct vcpu_io *vio = &v->io;
+    ioreq_t p = {
+        .type = IOREQ_TYPE_COPY,
+        .addr = info->gpa,
+        .size = 1 << info->dabt.size,
+        .count = 1,
+        .dir = !info->dabt.write,
+        /*
+         * On x86, df is used by 'rep' instruction to tell the direction
+         * to iterate (forward or backward).
+         * On Arm, all the accesses to MMIO region will do a single
+         * memory access. So for now, we can safely always set to 0.
+         */
+        .df = 0,
+        .data = get_user_reg(regs, info->dabt.reg),
+        .state = STATE_IOREQ_READY,
+    };
+    struct ioreq_server *s = NULL;
+    enum io_state rc;
+
+    switch ( vio->io_req.state )
+    {
+    case STATE_IOREQ_NONE:
+        break;
+
+    case STATE_IORESP_READY:
+        return IO_HANDLED;
+
+    default:
+        gdprintk(XENLOG_ERR, "wrong state %u\n", vio->io_req.state);
+        return IO_ABORT;
+    }
+
+    s = select_ioreq_server(v->domain, &p);
+    if ( !s )
+        return IO_UNHANDLED;
+
+    if ( !info->dabt.valid )
+        return IO_ABORT;
+
+    vio->io_req = p;
+
+    rc = send_ioreq(s, &p, 0);
+    if ( rc != IO_RETRY || v->domain->is_shutting_down )
+        vio->io_req.state = STATE_IOREQ_NONE;
+    else if ( !ioreq_needs_completion(&vio->io_req) )
+        rc = IO_HANDLED;
+    else
+        vio->io_completion = IO_mmio_completion;
+
+    return rc;
+}
+
+bool ioreq_complete_mmio(void)
+{
+    struct vcpu *v = current;
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const union hsr hsr = { .bits = regs->hsr };
+    paddr_t addr = v->io.io_req.addr;
+
+    if ( try_handle_mmio(regs, hsr, addr) == IO_HANDLED )
+    {
+        advance_pc(regs, hsr);
+        return true;
+    }
+
+    return false;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8f40d0e..b154837 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1384,6 +1385,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
 #ifdef CONFIG_HYPFS
     HYPERCALL(hypfs_op, 5),
 #endif
+#ifdef CONFIG_IOREQ_SERVER
+    HYPERCALL(dm_op, 3),
+#endif
 };
 
 #ifndef NDEBUG
@@ -1955,6 +1959,9 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
     case IO_HANDLED:
         advance_pc(regs, hsr);
         return;
+    case IO_RETRY:
+        /* finish later */
+        return;
     case IO_UNHANDLED:
         /* IO unhandled, try another way to handle it. */
         break;
@@ -2253,6 +2260,12 @@ static void check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+#ifdef CONFIG_IOREQ_SERVER
+    local_irq_enable();
+    handle_io_completion(v);
+    local_irq_disable();
+#endif
+
     if ( likely(!v->arch.need_flush_to_ram) )
         return;
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 8612159..bcd4961 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -18,6 +18,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6819a3b..d4c3da5 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 
 struct hvm_domain
@@ -17,6 +18,8 @@ struct hvm_domain
     uint64_t params[HVM_NR_PARAMS];
 };
 
+#define ioreq_params(d, i) ((d)->arch.hvm.params[i])
+
 #ifdef CONFIG_ARM_64
 enum domain_type {
     DOMAIN_32BIT,
@@ -262,6 +265,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {}
 
 #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)
 
+#define has_vpci(d) ({ (void)(d); false; })
+
 #endif /* __ASM_DOMAIN_H__ */
 
 /*
diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/ioreq.h
new file mode 100644
index 0000000..9f59f23
--- /dev/null
+++ b/xen/include/asm-arm/hvm/ioreq.h
@@ -0,0 +1,109 @@
+/*
+ * hvm.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_HVM_IOREQ_H__
+#define __ASM_ARM_HVM_IOREQ_H__
+
+#include
+
+#include
+
+#ifdef CONFIG_IOREQ_SERVER
+enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v);
+enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                             struct vcpu *v, mmio_info_t *info);
+#else
+static inline enum io_state handle_ioserv(struct cpu_user_regs *regs,
+                                          struct vcpu *v)
+{
+    return IO_UNHANDLED;
+}
+
+static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                                           struct vcpu *v, mmio_info_t *info)
+{
+    return IO_UNHANDLED;
+}
+#endif
+
+bool ioreq_complete_mmio(void);
+
+static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
+{
+    /*
+     * TODO: For Arm64, the main user will be PCI. So this should be
+     * implemented when we add support for vPCI.
+     */
+    ASSERT_UNREACHABLE();
+    return true;
+}
+
+static inline void arch_destroy_ioreq_server(struct ioreq_server *s)
+{
+}
+
+static inline void msix_write_completion(struct vcpu *v)
+{
+}
+
+static inline bool arch_io_completion(enum io_completion io_completion)
+{
+    ASSERT_UNREACHABLE();
+    return true;
+}
+
+static inline int ioreq_server_get_type_addr(const struct domain *d,
+                                             const ioreq_t *p,
+                                             uint8_t *type,
+                                             uint64_t *addr)
+{
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return -EINVAL;
+
+    *type = (p->type == IOREQ_TYPE_PIO) ?
+            XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+    *addr = p->addr;
+
+    return 0;
+}
+
+static inline void arch_ioreq_init(struct domain *d)
+{
+}
+
+static inline bool arch_ioreq_destroy(struct domain *d)
+{
+    return true;
+}
+
+#define IOREQ_STATUS_HANDLED     IO_HANDLED
+#define IOREQ_STATUS_UNHANDLED   IO_UNHANDLED
+#define IOREQ_STATUS_RETRY       IO_RETRY
+
+#endif /* __ASM_ARM_HVM_IOREQ_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index 8dbfb27..7ab873c 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -37,6 +37,7 @@ enum io_state
     IO_ABORT,       /* The IO was handled by the helper and led to an abort. */
     IO_HANDLED,     /* The IO was successfully handled by the helper. */
     IO_UNHANDLED,   /* The IO was not handled by the helper. */
+    IO_RETRY,       /* Retry the emulation for some reason */
 };
 
 typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info,
diff --git a/xen/include/asm-arm/paging.h b/xen/include/asm-arm/paging.h
index 6d1a000..0550c55 100644
--- a/xen/include/asm-arm/paging.h
+++ b/xen/include/asm-arm/paging.h
@@ -4,6 +4,10 @@
 #define paging_mode_translate(d) (1)
 #define paging_mode_external(d)  (1)
 
+static inline void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn)
+{
+}
+
 #endif /* XEN_PAGING_H */
 
 /*
-- 
2.7.4
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Julien Grall
Subject: [PATCH V2 15/23] xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
Date: Thu, 15 Oct 2020 19:44:26 +0300
Message-Id: <1602780274-29141-16-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds proper handling of the return value of
handle_io_completion(), which involves using a loop in
leave_hypervisor_to_guest().

The reason to use an unbounded loop here is that a vCPU shouldn't
continue until its I/O has completed. In Xen's case, if an I/O never
completes then it most likely means that something went horribly wrong
with the Device Emulator, and it is most likely not safe to continue.
So letting the vCPU spin forever if the I/O never completes is safer
than letting it continue with the guest in an unclear state, and is
the best we can do for now.

This wouldn't be an issue for Xen, as do_softirq() would be called on
every loop iteration. In case of failure, the guest will crash and the
vCPU will be unscheduled.
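The loop's contract can be modelled outside Xen. A toy sketch with
hypothetical stand-ins for the two helpers (this is not the hypervisor
code; the real change is in the diff below):

#include <stdbool.h>
#include <stdio.h>

static int pending_io = 3;

/* Stand-in for check_for_pcpu_work(): it ends up in do_softirq(), so
 * the pCPU keeps making progress on every iteration even if the
 * emulator never answers. */
static void process_pcpu_work(void) { puts("softirqs processed"); }

/* Stand-in for check_for_vcpu_work(): returns true while the vCPU's
 * I/O completion is still outstanding. */
static bool vcpu_work_pending(void) { return --pending_io > 0; }

int main(void)
{
    do {
        process_pcpu_work();
    } while ( vcpu_work_pending() );

    puts("I/O completed, vCPU may resume");
    return 0;
}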
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, changes were derived from (+ new explanation):
     arm/ioreq: Introduce arch specific bits for IOREQ/DM features
---
 xen/arch/arm/traps.c | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b154837..507c095 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2256,18 +2256,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
     local_irq_enable();
-    handle_io_completion(v);
+    handled = handle_io_completion(v);
     local_irq_disable();
+
+    if ( !handled )
+        return true;
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2278,6 +2283,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }
 
 /*
@@ -2290,8 +2297,22 @@ void leave_hypervisor_to_guest(void)
 {
     local_irq_disable();
 
-    check_for_vcpu_work();
-    check_for_pcpu_work();
+    /*
+     * The reason to use an unbounded loop here is that a vCPU shouldn't
+     * continue until its I/O has completed. In Xen's case, if an I/O never
+     * completes then it most likely means that something went horribly
+     * wrong with the Device Emulator, and it is most likely not safe to
+     * continue. So letting the vCPU spin forever if the I/O never completes
+     * is safer than letting it continue with the guest in an unclear state,
+     * and is the best we can do for now.
+     *
+     * This wouldn't be an issue for Xen, as do_softirq() would be called on
+     * every loop iteration. In case of failure, the guest will crash and
+     * the vCPU will be unscheduled.
+     */
+    do {
+        check_for_pcpu_work();
+    } while ( check_for_vcpu_work() );
 
     vgic_sync_to_lrs();
 
-- 
2.7.4
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Wei Liu, Roger Pau Monné, Julien Grall
Subject: [PATCH V2 16/23] xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
Date: Thu, 15 Oct 2020 19:44:27 +0300
Message-Id: <1602780274-29141-17-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch implements reference counting of foreign entries in
set_foreign_p2m_entry() on Arm. This is a mandatory action if we want
to run an emulator (IOREQ server) in a domain other than dom0, as we
can't trust it to do the right thing if it is not running in dom0. So
we need to grab a reference on the page to prevent it from
disappearing.

It is valid to always pass the "p2m_map_foreign_rw" type to
guest_physmap_add_entry() since the current and foreign domains are
always different; the case where they are equal is rejected by
rcu_lock_remote_domain_by_id().
It was tested with the IOREQ feature to confirm that all the pages
given to this function belong to a domain, so we can use the same
approach as for the XENMAPSPACE_gmfn_foreign handling in
xenmem_add_to_physmap_one().

This involves adding an extra parameter for the foreign domain to
set_foreign_p2m_entry() and a helper to indicate whether the arch
supports reference counting of foreign entries, so that the
restriction to the hardware domain in the common code can be skipped
when it does.
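The "pin, try, unpin on failure" discipline described above can be
sketched generically. A minimal illustration with hypothetical types
(struct page, pin_page, map_foreign are stand-ins, not Xen's
get_page()/put_page()):

#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>

struct page { _Atomic unsigned int refcount; };

/* Take a reference only while the page is still live (count != 0), so
 * it cannot be freed while the foreign mapping exists. */
static bool pin_page(struct page *pg)
{
    unsigned int c = atomic_load(&pg->refcount);

    while ( c != 0 )
        if ( atomic_compare_exchange_weak(&pg->refcount, &c, c + 1) )
            return true;

    return false;
}

static void unpin_page(struct page *pg)
{
    atomic_fetch_sub(&pg->refcount, 1);
}

static int map_foreign(struct page *pg, int (*map)(struct page *))
{
    int rc;

    if ( !pin_page(pg) )
        return -EINVAL;

    rc = map(pg);
    if ( rc )
        unpin_page(pg); /* roll the reference back on failure */

    return rc; /* on success the reference is dropped at unmap time */
}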
" "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n", diff --git a/xen/common/memory.c b/xen/common/memory.c index cf53ca3..fb9ea96 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -1099,7 +1099,8 @@ static int acquire_resource( * reference counted, it is unsafe to allow mapping of * resource pages unless the caller is the hardware domain. */ - if ( paging_mode_translate(currd) && !is_hardware_domain(currd) ) + if ( paging_mode_translate(currd) && !is_hardware_domain(currd) && + !arch_refcounts_p2m() ) return -EACCES; =20 if ( copy_from_guest(&xmar, arg, 1) ) @@ -1168,7 +1169,7 @@ static int acquire_resource( =20 for ( i =3D 0; !rc && i < xmar.nr_frames; i++ ) { - rc =3D set_foreign_p2m_entry(currd, gfn_list[i], + rc =3D set_foreign_p2m_entry(currd, d, gfn_list[i], _mfn(mfn_list[i])); /* rc should be -EIO for any iteration other than the first */ if ( rc && i ) diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index 28ca9a8..d11be80 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -161,6 +161,15 @@ typedef enum { #endif #include =20 +static inline bool arch_refcounts_p2m(void) +{ + /* + * The reference counting of foreign entries in set_foreign_p2m_entry() + * is supported on Arm. + */ + return true; +} + static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) { @@ -392,16 +401,6 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsig= ned int order) return gfn_add(gfn, 1UL << order); } =20 -static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gf= n, - mfn_t mfn) -{ - /* - * NOTE: If this is implemented then proper reference counting of - * foreign entries will need to be implemented. - */ - return -EOPNOTSUPP; -} - /* * A vCPU has cache enabled only when the MMU is enabled and data cache * is enabled. diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index 5f7ba31..6c42022 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -369,6 +369,15 @@ struct p2m_domain { #endif #include =20 +static inline bool arch_refcounts_p2m(void) +{ + /* + * The reference counting of foreign entries in set_foreign_p2m_entry() + * is not supported on x86. + */ + return false; +} + /* * Updates vCPU's n2pm to match its np2m_base in VMCx12 and returns that n= p2m. 
  */
@@ -634,9 +643,6 @@ int p2m_finish_type_change(struct domain *d,
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
-/* Set foreign entry in the p2m table (for priv-mapping) */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
-
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 58031a6..b4bc709 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -3,6 +3,10 @@
 
 #include
 
+/* Set foreign entry in the p2m table */
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn);
+
 /* Remove a page from a domain's p2m table */
 int __must_check guest_physmap_remove_page(struct domain *d, gfn_t gfn,
                                            mfn_t mfn,
-- 
2.7.4
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Wei Liu, Paul Durrant, Julien Grall
Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Thu, 15 Oct 2020 19:44:28 +0300
Message-Id: <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
From: Oleksandr Tyshchenko

This patch introduces a helper whose main purpose is to check whether
a domain is using IOREQ server(s).

On Arm the current benefit is to avoid calling handle_io_completion()
(which implies iterating over all possible IOREQ servers anyway) on
every return in leave_hypervisor_to_guest() if there are no active
servers for the particular domain. This helper will also be used by
one of the subsequent patches on Arm.

This involves adding an extra per-domain variable to store the count
of servers in use.
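The bookkeeping is simple enough to show in miniature. A sketch with
hypothetical simplified types (ioreq_book, set_server, has_server are
illustrative only; the real change is in the diff below):

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_NR_IOREQ_SERVERS 8

struct ioreq_server;

struct ioreq_book {
    struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
    unsigned int nr_servers;
};

static void set_server(struct ioreq_book *b, unsigned int id,
                       struct ioreq_server *s)
{
    assert(id < MAX_NR_IOREQ_SERVERS);
    /* A slot may only flip between NULL and non-NULL, never be
     * silently overwritten, so the derived count below stays exact. */
    assert(b->server[id] ? s == NULL : s != NULL);

    b->server[id] = s;

    if ( s )
        b->nr_servers++;
    else
        b->nr_servers--;
}

/* O(1) check replacing a scan over server[]. */
static bool has_server(const struct ioreq_book *b)
{
    return b->nr_servers != 0;
}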
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - update patch description
   - guard helper with CONFIG_IOREQ_SERVER
   - remove "hvm" prefix
   - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
   - put suitable ASSERT()s
   - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
   - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
---
 xen/arch/arm/traps.c    | 15 +++++++++------
 xen/common/ioreq.c      |  7 ++++++-
 xen/include/xen/ioreq.h | 14 ++++++++++++++
 xen/include/xen/sched.h |  1 +
 4 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 507c095..a8f5fdf 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
-    bool handled;
+    if ( domain_has_ioreq_server(v->domain) )
+    {
+        bool handled;
 
-    local_irq_enable();
-    handled = handle_io_completion(v);
-    local_irq_disable();
+        local_irq_enable();
+        handled = handle_io_completion(v);
+        local_irq_disable();
 
-    if ( !handled )
-        return true;
+        if ( !handled )
+            return true;
+    }
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index bcd4961..a72bc0e 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->ioreq_server.server[id]);
+    ASSERT(d->ioreq_server.server[id] ? !s : !!s);
 
     d->ioreq_server.server[id] = s;
+
+    if ( s )
+        d->ioreq_server.nr_servers++;
+    else
+        d->ioreq_server.nr_servers--;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 7b03ab5..0679fef 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -55,6 +55,20 @@ struct ioreq_server {
     uint8_t                bufioreq_handling;
 };
 
+#ifdef CONFIG_IOREQ_SERVER
+static inline bool domain_has_ioreq_server(const struct domain *d)
+{
+    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
+
+    return d->ioreq_server.nr_servers;
+}
+#else
+static inline bool domain_has_ioreq_server(const struct domain *d)
+{
+    return false;
+}
+#endif
+
 struct ioreq_server *get_ioreq_server(const struct domain *d,
                                       unsigned int id);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index f9ce14c..290cddb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -553,6 +553,7 @@ struct domain
     struct {
         spinlock_t              lock;
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
+        unsigned int            nr_servers;
     } ioreq_server;
 #endif
 };
-- 
2.7.4
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Ian Jackson, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich, Stefano Stabellini, Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V2 18/23] xen/dm: Introduce xendevicemodel_set_irq_level DM op
Date: Thu, 15 Oct 2020 19:44:29 +0300
Message-Id: <1602780274-29141-19-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch adds the ability for the device emulator to notify the other end (some entity running in the guest) using an SPI, and implements the Arm-specific bits for it.

The proposed interface allows the emulator to set the logical level of one of a domain's IRQ lines. We can't reuse the existing DM op (xen_dm_op_set_isa_irq_level) to inject an interrupt, as the "isa_irq" field is only 8 bits and can only cover IRQs 0 - 255, whereas we need a wider range (0 - 1020).

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, I left the interface untouched since there is still an open discussion about what interface to use/what information to pass to the hypervisor: the question is whether we should abstract away the state of the line or not.
***

Changes RFC -> V1:
   - check incoming parameters in arch_dm_op()
   - add explicit padding to struct xen_dm_op_set_irq_level

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - check that padding is always 0
   - mention that interface is Arm only and only SPIs are supported for now
   - allow to set the logical level of a line for non-allocated interrupts only
   - add xen_dm_op_set_irq_level_t
---
 tools/libs/devicemodel/core.c                   | 18 ++++++++
 tools/libs/devicemodel/include/xendevicemodel.h |  4 ++
 tools/libs/devicemodel/libxendevicemodel.map    |  1 +
 xen/arch/arm/dm.c                               | 57 ++++++++++++++++++++++++-
 xen/common/dm.c                                 |  1 +
 xen/include/public/hvm/dm_op.h                  | 16 +++++++
 6 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 4d40639..30bd79f 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -430,6 +430,24 @@ int xendevicemodel_set_isa_irq_level(
     return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }

+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, uint32_t irq,
+    unsigned int level)
+{
+    struct xen_dm_op op;
+    struct xen_dm_op_set_irq_level *data;
+
+    memset(&op, 0, sizeof(op));
+
+    op.op = XEN_DMOP_set_irq_level;
+    data = &op.u.set_irq_level;
+
+    data->irq = irq;
+    data->level = level;
+
+    return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_set_pci_link_route(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t link, uint8_t irq)
 {

diff --git a/tools/libs/devicemodel/include/xendevicemodel.h b/tools/libs/devicemodel/include/xendevicemodel.h
index e877f5c..c06b3c8 100644
--- a/tools/libs/devicemodel/include/xendevicemodel.h
+++ b/tools/libs/devicemodel/include/xendevicemodel.h
@@ -209,6 +209,10 @@ int xendevicemodel_set_isa_irq_level(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t irq,
     unsigned int level);

+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int irq,
+    unsigned int level);
+
 /**
  * This function maps a PCI INTx line to an IRQ line.
 *
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index 561c62d..a0c3012 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -32,6 +32,7 @@ VERS_1.2 {
 	global:
 		xendevicemodel_relocate_memory;
 		xendevicemodel_pin_memory_cacheattr;
+		xendevicemodel_set_irq_level;
 } VERS_1.1;

 VERS_1.3 {

diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index 5d3da37..e4bb233 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -17,10 +17,65 @@
 #include
 #include

+#include
+
 int arch_dm_op(struct xen_dm_op *op, struct domain *d,
                const struct dmop_args *op_args, bool *const_op)
 {
-    return -EOPNOTSUPP;
+    int rc;
+
+    switch ( op->op )
+    {
+    case XEN_DMOP_set_irq_level:
+    {
+        const struct xen_dm_op_set_irq_level *data =
+            &op->u.set_irq_level;
+        unsigned int i;
+
+        /* Only SPIs are supported */
+        if ( (data->irq < NR_LOCAL_IRQS) || (data->irq >= vgic_num_irqs(d)) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        if ( data->level != 0 && data->level != 1 )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        /* Check that padding is always 0 */
+        for ( i = 0; i < sizeof(data->pad); i++ )
+        {
+            if ( data->pad[i] )
+            {
+                rc = -EINVAL;
+                break;
+            }
+        }
+
+        /*
+         * Allow to set the logical level of a line for non-allocated
+         * interrupts only.
+         */
+        if ( test_bit(data->irq, d->arch.vgic.allocated_irqs) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        vgic_inject_irq(d, NULL, data->irq, data->level);
+        rc = 0;
+        break;
+    }
+
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    return rc;
 }

diff --git a/xen/common/dm.c b/xen/common/dm.c
index f3a8353..5f23420 100644
--- a/xen/common/dm.c
+++ b/xen/common/dm.c
@@ -48,6 +48,7 @@ static int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
         [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
         [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+        [XEN_DMOP_set_irq_level]                    = sizeof(struct xen_dm_op_set_irq_level),
     };

     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);

diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 66cae1a..1f70d58 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -434,6 +434,21 @@ struct xen_dm_op_pin_memory_cacheattr {
 };
 typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;

+/*
+ * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
+ *                         IRQ lines (currently Arm only).
+ * Only SPIs are supported.
+ */
+#define XEN_DMOP_set_irq_level 19
+
+struct xen_dm_op_set_irq_level {
+    uint32_t irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+    uint8_t pad[3];
+};
+typedef struct xen_dm_op_set_irq_level xen_dm_op_set_irq_level_t;
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -447,6 +462,7 @@ struct xen_dm_op {
         xen_dm_op_track_dirty_vram_t track_dirty_vram;
         xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
         xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
+        xen_dm_op_set_irq_level_t set_irq_level;
         xen_dm_op_set_pci_link_route_t set_pci_link_route;
         xen_dm_op_modified_memory_t modified_memory;
         xen_dm_op_set_mem_type_t set_mem_type;
--
2.7.4
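A minimal sketch of how an emulator might drive the new call; it assumes a handle already opened with xendevicemodel_open(), and the pulse helper itself is hypothetical:

    /* Assert, then deassert, one of the guest's SPIs (edge-style notification). */
    static int pulse_spi(xendevicemodel_handle *dmod, domid_t domid, uint32_t spi)
    {
        int rc = xendevicemodel_set_irq_level(dmod, domid, spi, 1);

        if ( rc )
            return rc;

        return xendevicemodel_set_irq_level(dmod, domid, spi, 0);
    }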
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk
Subject: [PATCH V2 19/23] xen/arm: io: Abstract sign-extension
Date: Thu, 15 Oct 2020 19:44:30 +0300
Message-Id: <1602780274-29141-20-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

In order to avoid code duplication (both handle_read() and handle_ioserv() contain the same code for the sign-extension), move this code into a common helper to be used by
both.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch
---
 xen/arch/arm/io.c           | 18 ++----------------
 xen/arch/arm/ioreq.c        | 17 +----------------
 xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
 3 files changed, 27 insertions(+), 32 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index f44cfd4..8d6ec6c 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include

 #include "decode.h"
@@ -39,26 +40,11 @@ static enum io_state handle_read(const struct mmio_handler *handler,
      * setting r).
      */
     register_t r = 0;
-    uint8_t size = (1 << dabt.size) * 8;

     if ( !handler->ops->read(v, info, &r, handler->priv) )
         return IO_ABORT;

-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);

     set_user_reg(regs, dabt.reg, r);

diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index da5ceac..ad17b80 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     const union hsr hsr = { .bits = regs->hsr };
     const struct hsr_dabt dabt = hsr.dabt;
     /* Code is similar to handle_read */
-    uint8_t size = (1 << dabt.size) * 8;
     register_t r = v->io.io_req.data;

     /* We are done with the IO */
@@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     if ( dabt.write )
         return IO_HANDLED;

-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);

     set_user_reg(regs, dabt.reg, r);

diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c378..e301c44 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
            (unsigned long)abort_guest_exit_end == regs->pc;
 }

+/* Check whether the sign extension is required and perform it */
+static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
+{
+    uint8_t size = (1 << dabt.size) * 8;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    return r;
+}
+
 #endif /* __ASM_ARM_TRAPS__ */
 /*
  * Local variables:
--
2.7.4
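A worked example of the helper's effect, with values chosen for illustration and assuming a 64-bit register_t:

    /* A 1-byte signed read whose handler returned 0x80... */
    struct hsr_dabt dabt = { .size = 0 /* 2^0 = 1 byte */, .sign = 1 };
    register_t r = 0x80;

    r = sign_extend(dabt, r);   /* ...is extended to 0xffffffffffffff80 */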
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Paul Durrant
Subject: [PATCH V2 20/23] xen/ioreq: Make x86's send_invalidate_req() common
Date: Thu, 15 Oct 2020 19:44:31 +0300
Message-Id: <1602780274-29141-21-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As IOREQ is a common feature now, and we also need to invalidate the qemu/demu mapcache on Arm when the required condition occurs, this patch moves this function to the common code (and renames it to send_invalidate_ioreq). It also moves the per-domain qemu_mapcache_invalidate variable out of the arch sub-struct.

The subsequent patch will add mapcache invalidation handling on Arm.
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is on review:
https://patchwork.kernel.org/patch/11803383/
***

Changes RFC -> V1:
   - move send_invalidate_req() to the common code
   - update patch subject/description
   - move qemu_mapcache_invalidate out of the arch sub-struct, update checks
   - remove #if defined(CONFIG_ARM64) from the common code

Changes V1 -> V2:
   - was split into:
     - xen/ioreq: Make x86's send_invalidate_req() common
     - xen/arm: Add mapcache invalidation handling
   - update patch description/subject
   - move Arm bits to a separate patch
   - don't alter the common code, the flag is set by arch code
   - rename send_invalidate_req() to send_invalidate_ioreq()
   - guard qemu_mapcache_invalidate with CONFIG_IOREQ_SERVER
   - use bool instead of bool_t
   - remove blank line between head comment and #include-s
---
 xen/arch/x86/hvm/hypercall.c     |  9 +++++----
 xen/arch/x86/hvm/io.c            | 14 --------------
 xen/common/ioreq.c               | 14 ++++++++++++++
 xen/include/asm-x86/hvm/domain.h |  1 -
 xen/include/asm-x86/hvm/io.h     |  1 -
 xen/include/xen/ioreq.h          |  1 +
 xen/include/xen/sched.h          |  2 ++
 7 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index b6ccaf4..324ff97 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -20,6 +20,7 @@
  */
 #include
 #include
+#include
 #include

 #include
@@ -47,7 +48,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = compat_memory_op(cmd, arg);

     if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
+        curr->domain->qemu_mapcache_invalidate = true;

     return rc;
 }
@@ -329,9 +330,9 @@ int hvm_hypercall(struct cpu_user_regs *regs)
     if ( curr->hcall_preempted )
         return HVM_HCALL_preempted;

-    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
-         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
-        send_invalidate_req();
+    if ( unlikely(currd->qemu_mapcache_invalidate) &&
+         test_and_clear_bool(currd->qemu_mapcache_invalidate) )
+        send_invalidate_ioreq();

     return HVM_HCALL_completed;
 }

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 2d03ffe..e51304c 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -64,20 +64,6 @@ void send_timeoffset_req(unsigned long timeoff)
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }

-/* Ask ioemu mapcache to invalidate mappings. */
-void send_invalidate_req(void)
-{
-    ioreq_t p = {
-        .type = IOREQ_TYPE_INVALIDATE,
-        .size = 4,
-        .dir = IOREQ_WRITE,
-        .data = ~0UL, /* flush all */
-    };
-
-    if ( broadcast_ioreq(&p, false) != 0 )
-        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
-}
-
 bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index a72bc0e..2203cf0 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,6 +35,20 @@
 #include
 #include

+/* Ask ioemu mapcache to invalidate mappings. */
+void send_invalidate_ioreq(void)
+{
+    ioreq_t p = {
+        .type = IOREQ_TYPE_INVALIDATE,
+        .size = 4,
+        .dir = IOREQ_WRITE,
+        .data = ~0UL, /* flush all */
+    };
+
+    if ( broadcast_ioreq(&p, false) != 0 )
+        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index c3af339..caab3a9 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -117,7 +117,6 @@ struct hvm_domain {

     struct viridian_domain *viridian;

-    bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;

     /*
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index fb64294..3da0136 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -97,7 +97,6 @@ bool relocate_portio_handler(
     unsigned int size);

 void send_timeoffset_req(unsigned long timeoff);
-void send_invalidate_req(void);
 bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                   struct npfec);
 bool handle_pio(uint16_t port, unsigned int size, int dir);

diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 0679fef..aad682f 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -126,6 +126,7 @@ struct ioreq_server *select_ioreq_server(struct domain *d,
 int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, bool buffered);
 unsigned int broadcast_ioreq(ioreq_t *p, bool buffered);
+void send_invalidate_ioreq(void);

 void ioreq_init(struct domain *d);

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 290cddb..1b8c6eb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -555,6 +555,8 @@ struct domain
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
         unsigned int            nr_servers;
     } ioreq_server;
+
+    bool qemu_mapcache_invalidate;
 #endif
 };
--
2.7.4
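The intended usage pattern, condensed from the x86 hunks above (a sketch only; `d` stands for the current domain):

    /* Arch code marks the domain when guest memory was removed... */
    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
        d->qemu_mapcache_invalidate = true;

    /* ...and the exit path broadcasts a single flush to all emulators. */
    if ( unlikely(d->qemu_mapcache_invalidate) &&
         test_and_clear_bool(d->qemu_mapcache_invalidate) )
        send_invalidate_ioreq();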
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk
Subject: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling
Date: Thu, 15 Oct 2020 19:44:32 +0300
Message-Id: <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

We need to send a mapcache invalidation request to qemu/demu every time a page gets removed from a guest.

At the moment, the Arm code doesn't explicitly remove the existing mapping before inserting the new mapping; instead, this is done implicitly by __p2m_set_entry(). So the corresponding flag will be set in __p2m_set_entry() if the old entry is a RAM page *and* the new MFN is different. The invalidation request will then be sent in do_trap_hypercall() later on.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, some changes were derived from (+ new explanation):
     xen/ioreq: Make x86's invalidate qemu mapcache handling common
   - put setting of the flag into __p2m_set_entry()
   - clarify the conditions when the flag should be set
   - use domain_has_ioreq_server()
   - update do_trap_hypercall() by adding local variable
---
 xen/arch/arm/p2m.c   |  8 ++++++++
 xen/arch/arm/traps.c | 13 ++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 370173c..2693b0c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1,6 +1,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
      */
     if ( p2m_is_valid(orig_pte) &&
          !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
+    {
+#ifdef CONFIG_IOREQ_SERVER
+        if ( domain_has_ioreq_server(p2m->domain) &&
+             (p2m->domain == current->domain) && p2m_is_ram(orig_pte.p2m.type) )
+            p2m->domain->qemu_mapcache_invalidate = true;
+#endif
         p2m_free_entry(p2m, orig_pte, level);
+    }

 out:
     unmap_domain_page(table);

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index a8f5fdf..9eaa342 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1442,6 +1442,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
                               const union hsr hsr)
 {
     arm_hypercall_fn_t call = NULL;
+    struct vcpu *v = current;

     BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );

@@ -1458,7 +1459,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
         return;
     }

-    current->hcall_preempted = false;
+    v->hcall_preempted = false;

     perfc_incra(hypercalls, *nr);
     call = arm_hypercall_table[*nr].fn;
@@ -1471,7 +1472,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
     HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));

 #ifndef NDEBUG
-    if ( !current->hcall_preempted )
+    if ( !v->hcall_preempted )
     {
         /* Deliberately corrupt parameter regs used by this hypercall. */
         switch ( arm_hypercall_table[*nr].nr_args ) {
@@ -1488,8 +1489,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
 #endif

     /* Ensure the hypercall trap instruction is re-executed. */
-    if ( current->hcall_preempted )
+    if ( v->hcall_preempted )
         regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
+
+#ifdef CONFIG_IOREQ_SERVER
+    if ( unlikely(v->domain->qemu_mapcache_invalidate) &&
+         test_and_clear_bool(v->domain->qemu_mapcache_invalidate) )
+        send_invalidate_ioreq();
+#endif
 }

 void arch_hypercall_tasklet_result(struct vcpu *v, long res)
--
2.7.4
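For reference, the complete trigger condition introduced by the two hunks above can be read in one place as (a restatement, not extra code):

    /* Flush only when a valid RAM mapping is replaced by a different MFN,
     * for the current domain, and only if an IOREQ server is active. */
    if ( p2m_is_valid(orig_pte) &&
         !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) &&
         p2m_is_ram(orig_pte.p2m.type) &&
         domain_has_ioreq_server(p2m->domain) &&
         p2m->domain == current->domain )
        p2m->domain->qemu_mapcache_invalidate = true;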
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Ian Jackson, Wei Liu, Anthony PERARD, Stefano Stabellini, Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V2 22/23] libxl: Introduce basic virtio-mmio support on Arm
Date: Thu, 15 Oct 2020 19:44:33 +0300
Message-Id: <1602780274-29141-23-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
From: Julien Grall

This patch creates a specific device node in the guest device tree, with an allocated MMIO range and SPI interrupt, if the 'virtio' property is present in the domain config.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch
---
 tools/libs/light/libxl_arm.c     | 58 ++++++++++++++++++++++++++++++++++++--
 tools/libs/light/libxl_types.idl |  1 +
 tools/xl/xl_parse.c              |  1 +
 xen/include/public/arch-arm.h    |  5 ++++
 4 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 66e8a06..588ee5a 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -26,8 +26,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq;
+    bool vuart_enabled = false, virtio_enabled = false;

     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +39,17 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }

+    /*
+     * XXX: Handle properly virtio
+     * A proper solution would be the toolstack to allocate the interrupts
+     * used by each virtio backend and let the backend know which one is used
+     */
+    if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
+        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+        virtio_enabled = true;
+    }
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +69,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }

+        /* The same check as for vpl011 */
+        if (virtio_enabled && irq == virtio_irq) {
+            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;

@@ -658,6 +675,39 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
     return 0;
 }

+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    /* Placeholder for virtio@ + a 64-bit number + \0 */
+    char buf[24];
+
+    snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base);
+    res = fdt_begin_node(fdt, buf);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, GUEST_VIRTIO_MMIO_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {

@@ -961,6 +1011,9 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );

+        if (libxl_defbool_val(info->arch_arm.virtio))
+            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );

@@ -1178,6 +1231,7 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
 {
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
+    libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);

     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return;

diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f..b054bf9 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -639,6 +639,7 @@ libxl_domain_build_info = Struct("domain_build_info",[

     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
+                               ("virtio", libxl_defbool),
                                ("vuart", libxl_vuart_type),
                               ])),
     # Alternate p2m is not bound to any architecture or guest type, as it is

diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb6..10acf22 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2581,6 +2581,7 @@ skip_usbdev:
     }

     xlu_cfg_get_defbool(config, "dm_restrict", &b_info->dm_restrict, 0);
+    xlu_cfg_get_defbool(config, "virtio", &b_info->arch_arm.virtio, 0);

     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c365b1b..be7595f 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -464,6 +464,11 @@ typedef uint64_t xen_callback_t;
 #define PSCI_cpu_on      2
 #define PSCI_migrate     3

+/* VirtIO MMIO definitions */
+#define GUEST_VIRTIO_MMIO_BASE  xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE  xen_mk_ullong(0x200)
+#define GUEST_VIRTIO_MMIO_SPI   33
+
 #endif

 #ifndef __ASSEMBLY__
--
2.7.4
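A guest config fragment exercising the new option might look as follows (illustrative; only the `virtio` key is defined by this patch, the rest is a generic config):

    # Reserve GUEST_VIRTIO_MMIO_BASE/GUEST_VIRTIO_MMIO_SPI and emit the
    # "virtio,mmio" node into the guest device tree:
    virtio = 1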
From nobody Wed May 8 17:59:30 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Ian Jackson, Wei Liu, Anthony PERARD, Julien Grall, Stefano Stabellini
Subject: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk configuration
Date: Thu, 15 Oct 2020 19:44:34 +0300
Message-Id: <1602780274-29141-24-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds basic support for configuring and assisting a virtio-disk
backend (emulator) which is intended to run outside of QEMU and can be run
in any domain.

Xenstore was chosen as the communication interface so that an emulator
running in a non-toolstack domain can obtain its configuration either by
reading Xenstore directly or by receiving command-line parameters (an
updated 'xl devd' running in the same domain would read Xenstore
beforehand and invoke the backend executable with the required arguments).

An example domain configuration (two disks are assigned to the guest,
the latter in read-only mode):

vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

where the per-disk Xenstore entries are:
- filename and readonly flag (configured via the "vdisk" property)
- base and irq (allocated dynamically)

Besides handling the 'visible' parameters described in the configuration
file, the patch also allocates virtio-mmio specific ones (irq and base)
for each device and writes them into Xenstore. These parameters are
unique per guest domain; they are allocated at domain creation time and
passed through to the emulator. Each VirtIO device has at least one pair
of them.
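In other words, for the i-th device (0-based) the parameters are pure arithmetic over the guest constants introduced earlier in the series. The helpers below are a minimal sketch for illustration only (hypothetical names, not code from this patch):

#include <stdint.h>

/* Guest layout constants from xen/include/public/arch-arm.h. */
#define GUEST_VIRTIO_MMIO_BASE 0x02000000ULL
#define GUEST_VIRTIO_MMIO_SIZE 0x200ULL
#define GUEST_VIRTIO_MMIO_SPI  33U

/* i-th virtio-mmio device: one 0x200-byte MMIO window and one SPI. */
static inline uint64_t virtio_mmio_base(unsigned int i)
{
    return GUEST_VIRTIO_MMIO_BASE + i * GUEST_VIRTIO_MMIO_SIZE;
}

static inline uint32_t virtio_mmio_irq(unsigned int i)
{
    return GUEST_VIRTIO_MMIO_SPI + i;
}

So two disks occupy (0x02000000, SPI 33) and (0x02000200, SPI 34), which is also why the physical-IRQ conflict check in the diff below tests the whole [GUEST_VIRTIO_MMIO_SPI, virtio_irq] range rather than a single SPI.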
TODO:
1. The extra "virtio" property could be removed.
2. Update documentation.

Signed-off-by: Oleksandr Tyshchenko
---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Please note, there is a real concern about VirtIO interrupt allocation;
quoting what Stefano said in the RFC thread:

So, if we end up allocating let's say 6 virtio interrupts for a domain,
the chance of a clash with a physical interrupt of a passthrough device
is real.

I am not entirely sure how to solve it, but these are a few ideas:
- choosing virtio interrupts that are less likely to conflict (maybe > 1000)
- make the virtio irq (optionally) configurable so that a user could
  override the default irq and specify one that doesn't conflict
- implementing support for virq != pirq (even the xl interface doesn't
  allow specifying the virq number for passthrough devices, see "irqs")

---
 tools/libs/light/Makefile                 |   1 +
 tools/libs/light/libxl_arm.c              |  56 ++++++++++++---
 tools/libs/light/libxl_create.c           |   1 +
 tools/libs/light/libxl_internal.h         |   1 +
 tools/libs/light/libxl_types.idl          |  15 ++++
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_virtio_disk.c      | 109 ++++++++++++++++++++++++++++
 tools/xl/Makefile                         |   2 +-
 tools/xl/xl.h                             |   3 +
 tools/xl/xl_cmdtable.c                    |  15 ++++
 tools/xl/xl_parse.c                       | 115 ++++++++++++++++++++++++++++++
 tools/xl/xl_virtio_disk.c                 |  46 ++++++++++++
 12 files changed, 354 insertions(+), 11 deletions(-)
 create mode 100644 tools/libs/light/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index f58a321..2ee388a 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -115,6 +115,7 @@ SRCS-y += libxl_genid.c
 SRCS-y += _libxl_types.c
 SRCS-y += libxl_flask.c
 SRCS-y += _libxl_types_internal.c
+SRCS-y += libxl_virtio_disk.c
 
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 588ee5a..9eb3022 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,12 @@
 #include
 #include
 
+#ifndef container_of
+#define container_of(ptr, type, member) ({          \
+    typeof( ((type *)0)->member ) *__mptr = (ptr);  \
+    (type *)( (char *)__mptr - offsetof(type,member) );})
+#endif
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -39,14 +45,32 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
-    /*
-     * XXX: Handle properly virtio
-     * A proper solution would be the toolstack to allocate the interrupts
-     * used by each virtio backend and let the backend now which one is used
-     */
     if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
-        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        uint64_t virtio_base;
+        libxl_device_virtio_disk *virtio_disk;
+
+        virtio_base = GUEST_VIRTIO_MMIO_BASE;
+        virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+
+        if (!d_config->num_virtio_disks) {
+            LOG(ERROR, "Virtio is enabled, but no Virtio devices present\n");
+            return ERROR_FAIL;
+        }
+        virtio_disk = &d_config->virtio_disks[0];
+
+        for (i = 0; i < virtio_disk->num_disks; i++) {
+            virtio_disk->disks[i].base = virtio_base;
+            virtio_disk->disks[i].irq = virtio_irq;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params: IRQ %u BASE 0x%"PRIx64,
+                virtio_irq, virtio_base);
+
+            virtio_irq ++;
+            virtio_base += GUEST_VIRTIO_MMIO_SIZE;
+        }
+        virtio_irq --;
+
+        nr_spis += (virtio_irq - 32) + 1;
         virtio_enabled = true;
     }
 
@@ -70,8 +94,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         }
 
         /* The same check as for vpl011 */
-        if (virtio_enabled && irq == virtio_irq) {
-            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+        if (virtio_enabled &&
+            (irq >= GUEST_VIRTIO_MMIO_SPI && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio IRQ range\n", irq);
             return ERROR_FAIL;
         }
 
@@ -1011,8 +1036,19 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
-        if (libxl_defbool_val(info->arch_arm.virtio))
-            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+        if (libxl_defbool_val(info->arch_arm.virtio)) {
+            libxl_domain_config *d_config =
+                container_of(info, libxl_domain_config, b_info);
+            libxl_device_virtio_disk *virtio_disk = &d_config->virtio_disks[0];
+            unsigned int i;
+
+            for (i = 0; i < virtio_disk->num_disks; i++) {
+                uint64_t base = virtio_disk->disks[i].base;
+                uint32_t irq = virtio_disk->disks[i].irq;
+
+                FDT( make_virtio_mmio_node(gc, fdt, base, irq) );
+            }
+        }
 
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e..8da328d 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1821,6 +1821,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__dtdev_devtype,
    &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
+    &libxl__virtio_disk_devtype,
     NULL
 };
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9..ea497bb 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4000,6 +4000,7 @@ extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
 extern const libxl__device_type libxl__vsnd_devtype;
+extern const libxl__device_type libxl__virtio_disk_devtype;
 
 extern const libxl__device_type *device_type_tbl[];
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index b054bf9..5f8a3ff 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -935,6 +935,20 @@ libxl_device_vsnd = Struct("device_vsnd", [
     ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms"))
     ])
 
+libxl_virtio_disk_param = Struct("virtio_disk_param", [
+    ("filename", string),
+    ("readonly", bool),
+    ("irq", uint32),
+    ("base", uint64),
+    ])
+
+libxl_device_virtio_disk = Struct("device_virtio_disk", [
+    ("backend_domid", libxl_domid),
+    ("backend_domname", string),
+    ("devid", libxl_devid),
+    ("disks", Array(libxl_virtio_disk_param, "num_disks")),
+    ])
+
 libxl_domain_config = Struct("domain_config", [
     ("c_info", libxl_domain_create_info),
     ("b_info", libxl_domain_build_info),
@@ -951,6 +965,7 @@ libxl_domain_config = Struct("domain_config", [
     ("pvcallsifs", Array(libxl_device_pvcallsif, "num_pvcallsifs")),
     ("vdispls", Array(libxl_device_vdispl, "num_vdispls")),
     ("vsnds", Array(libxl_device_vsnd, "num_vsnds")),
+    ("virtio_disks", Array(libxl_device_virtio_disk, "num_virtio_disks")),
     # a channel manifests as a console with a name,
     # see docs/misc/channels.txt
     ("channels", Array(libxl_device_channel, "num_channels")),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
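As an aside, the IDL additions above generate a plain C API, including the usual libxl_<type>_init()/_dispose() helpers. A hypothetical caller filling in one backend with a single read-only disk might look like the sketch below (values are examples only; irq and base are deliberately left at zero because libxl allocates them during domain creation):

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <libxl.h>

/* Sketch: describe one virtio-disk backend (in "DomD") serving a single
 * read-only disk. Assumes the types generated from the IDL hunk above;
 * error handling is omitted for brevity. */
static void fill_example_virtio_disk(libxl_device_virtio_disk *vdisk)
{
    libxl_device_virtio_disk_init(vdisk);

    vdisk->backend_domname = strdup("DomD");

    vdisk->num_disks = 1;
    vdisk->disks = calloc(vdisk->num_disks, sizeof(*vdisk->disks));
    vdisk->disks[0].filename = strdup("/dev/mmcblk1p3");
    vdisk->disks[0].readonly = true;
    /* disks[0].irq and disks[0].base stay 0 here: the toolstack assigns
     * them in libxl__arch_domain_prepare_config(). */
}

parse_virtio_disk_config() in the xl_parse.c hunk further down performs essentially this, driven by the 'vdisk' config string.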
diff --git a/tools/libs/light/libxl_virtio_disk.c b/tools/libs/light/libxl_virtio_disk.c
new file mode 100644
index 0000000..25e7f1a
--- /dev/null
+++ b/tools/libs/light/libxl_virtio_disk.c
@@ -0,0 +1,109 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_internal.h"
+
+static int libxl__device_virtio_disk_setdefault(libxl__gc *gc, uint32_t domid,
+                                                libxl_device_virtio_disk *virtio_disk,
+                                                bool hotplug)
+{
+    return libxl__resolve_domid(gc, virtio_disk->backend_domname,
+                                &virtio_disk->backend_domid);
+}
+
+static int libxl__virtio_disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
+                                            libxl_devid devid,
+                                            libxl_device_virtio_disk *virtio_disk)
+{
+    const char *be_path;
+    int rc;
+
+    virtio_disk->devid = devid;
+    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
+                                  GCSPRINTF("%s/backend", libxl_path),
+                                  &be_path);
+    if (rc) return rc;
+
+    rc = libxl__backendpath_parse_domid(gc, be_path, &virtio_disk->backend_domid);
+    if (rc) return rc;
+
+    return 0;
+}
+
+static void libxl__update_config_virtio_disk(libxl__gc *gc,
+                                             libxl_device_virtio_disk *dst,
+                                             libxl_device_virtio_disk *src)
+{
+    dst->devid = src->devid;
+}
+
+static int libxl_device_virtio_disk_compare(libxl_device_virtio_disk *d1,
+                                            libxl_device_virtio_disk *d2)
+{
+    return COMPARE_DEVID(d1, d2);
+}
+
+static void libxl__device_virtio_disk_add(libxl__egc *egc, uint32_t domid,
+                                          libxl_device_virtio_disk *virtio_disk,
+                                          libxl__ao_device *aodev)
+{
+    libxl__device_add_async(egc, domid, &libxl__virtio_disk_devtype, virtio_disk, aodev);
+}
+
+static int libxl__set_xenstore_virtio_disk(libxl__gc *gc, uint32_t domid,
+                                           libxl_device_virtio_disk *virtio_disk,
+                                           flexarray_t *back, flexarray_t *front,
+                                           flexarray_t *ro_front)
+{
+    int rc;
+    unsigned int i;
+
+    for (i = 0; i < virtio_disk->num_disks; i++) {
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/filename", i),
+                                   GCSPRINTF("%s", virtio_disk->disks[i].filename));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/readonly", i),
+                                   GCSPRINTF("%d", virtio_disk->disks[i].readonly));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/base", i),
+                                   GCSPRINTF("%lu", virtio_disk->disks[i].base));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/irq", i),
+                                   GCSPRINTF("%u", virtio_disk->disks[i].irq));
+        if (rc) return rc;
+    }
+
+    return 0;
+}
+
+static LIBXL_DEFINE_UPDATE_DEVID(virtio_disk)
+static LIBXL_DEFINE_DEVICE_FROM_TYPE(virtio_disk)
+static LIBXL_DEFINE_DEVICES_ADD(virtio_disk)
+
+DEFINE_DEVICE_TYPE_STRUCT(virtio_disk, VIRTIO_DISK,
+    .update_config = (device_update_config_fn_t) libxl__update_config_virtio_disk,
+    .from_xenstore = (device_from_xenstore_fn_t) libxl__virtio_disk_from_xenstore,
+    .set_xenstore_config = (device_set_xenstore_config_fn_t) libxl__set_xenstore_virtio_disk
+);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
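As the set_xenstore hook above shows, each disk's filename/readonly/base/irq values land in the device's read-only frontend area in Xenstore. A backend running in another domain could consume them with plain libxenstore calls; the sketch below is a minimal illustration, and the Xenstore path used is a hypothetical example only (the real layout is determined by libxl's generic device machinery):

#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

/* Read one per-disk key written by libxl__set_xenstore_virtio_disk().
 * 'dir' is the device's read-only frontend directory; the path format
 * here is illustrative, not a guaranteed interface. */
static char *read_disk_param(struct xs_handle *xs, const char *dir,
                             unsigned int disk, const char *key)
{
    char path[256];
    unsigned int len;

    snprintf(path, sizeof(path), "%s/%u/%s", dir, disk, key);
    return xs_read(xs, XBT_NULL, path, &len); /* caller frees */
}

int main(void)
{
    struct xs_handle *xs = xs_open(0);
    if (!xs) { perror("xs_open"); return 1; }

    /* Hypothetical directory for domid 1, devid 0. */
    const char *dir = "/libxl/1/device/virtio_disk/0/frontend";
    char *filename = read_disk_param(xs, dir, 0, "filename");
    char *irq = read_disk_param(xs, dir, 0, "irq");

    if (filename && irq)
        printf("disk 0: %s, irq %s\n", filename, irq);

    free(filename);
    free(irq);
    xs_close(xs);
    return 0;
}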
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index bdf67c8..9d8f2aa 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -23,7 +23,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o
 XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o
 XL_OBJS += xl_info.o xl_console.o xl_misc.o
 XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o
-XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o
+XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o xl_virtio_disk.o
 
 $(XL_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
 $(XL_OBJS): CFLAGS += $(CFLAGS_XL)
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 06569c6..3d26f19 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -178,6 +178,9 @@ int main_vsnddetach(int argc, char **argv);
 int main_vkbattach(int argc, char **argv);
 int main_vkblist(int argc, char **argv);
 int main_vkbdetach(int argc, char **argv);
+int main_virtio_diskattach(int argc, char **argv);
+int main_virtio_disklist(int argc, char **argv);
+int main_virtio_diskdetach(int argc, char **argv);
 int main_usbctrl_attach(int argc, char **argv);
 int main_usbctrl_detach(int argc, char **argv);
 int main_usbdev_attach(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b..745afab 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -435,6 +435,21 @@ struct cmd_spec cmd_table[] = {
       "Destroy a domain's virtual sound device",
       " ",
     },
+    { "virtio-disk-attach",
+      &main_virtio_diskattach, 1, 1,
+      "Create a new virtio block device",
+      " TBD\n"
+    },
+    { "virtio-disk-list",
+      &main_virtio_disklist, 0, 0,
+      "List virtio block devices for a domain",
+      "",
+    },
+    { "virtio-disk-detach",
+      &main_virtio_diskdetach, 0, 1,
+      "Destroy a domain's virtio block device",
+      " ",
+    },
     { "uptime",
       &main_uptime, 0, 0,
       "Print uptime for all/some domains",
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 10acf22..6cf3524 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1204,6 +1204,120 @@ out:
     if (rc) exit(EXIT_FAILURE);
 }
 
+#define MAX_VIRTIO_DISKS 4
+
+static int parse_virtio_disk_config(libxl_device_virtio_disk *virtio_disk, char *token)
+{
+    char *oparg;
+    libxl_string_list disks = NULL;
+    int i, rc;
+
+    if (MATCH_OPTION("backend", token, oparg)) {
+        virtio_disk->backend_domname = strdup(oparg);
+    } else if (MATCH_OPTION("disks", token, oparg)) {
+        split_string_into_string_list(oparg, ";", &disks);
+
+        virtio_disk->num_disks = libxl_string_list_length(&disks);
+        if (virtio_disk->num_disks > MAX_VIRTIO_DISKS) {
+            fprintf(stderr, "vdisk: currently only %d disks are supported",
+                    MAX_VIRTIO_DISKS);
+            return 1;
+        }
+        virtio_disk->disks = xcalloc(virtio_disk->num_disks,
+                                     sizeof(*virtio_disk->disks));
+
+        for(i = 0; i < virtio_disk->num_disks; i++) {
+            char *disk_opt;
+
+            rc = split_string_into_pair(disks[i], ":", &disk_opt,
+                                        &virtio_disk->disks[i].filename);
+            if (rc) {
+                fprintf(stderr, "vdisk: failed to split \"%s\" into pair\n",
+                        disks[i]);
+                goto out;
+            }
+
+            if (!strcmp(disk_opt, "ro"))
+                virtio_disk->disks[i].readonly = 1;
+            else if (!strcmp(disk_opt, "rw"))
+                virtio_disk->disks[i].readonly = 0;
+            else {
+                fprintf(stderr, "vdisk: failed to parse \"%s\" disk option\n",
                        disk_opt);
+                rc = 1;
+            }
+            free(disk_opt);
+
+            if (rc) goto out;
+        }
+    } else {
+        fprintf(stderr, "Unknown string \"%s\" in vdisk spec\n", token);
+        rc = 1; goto out;
+    }
+
+    rc = 0;
+
+out:
+    libxl_string_list_dispose(&disks);
+    return rc;
+}
+
+static void parse_virtio_disk_list(const XLU_Config *config,
+                                   libxl_domain_config *d_config)
+{
+    XLU_ConfigList *virtio_disks;
+    const char *item;
+    char *buf = NULL;
+    int rc;
+
+    if (!xlu_cfg_get_list (config, "vdisk", &virtio_disks, 0, 0)) {
+        libxl_domain_build_info *b_info = &d_config->b_info;
+        int entry = 0;
+
+        /*
+         * XXX Remove an extra property
+         */
+        libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);
+        if (!libxl_defbool_val(b_info->arch_arm.virtio)) {
+            fprintf(stderr, "Virtio device requires Virtio property to be set\n");
+            exit(EXIT_FAILURE);
+        }
+
+        while ((item = xlu_cfg_get_listitem(virtio_disks, entry)) != NULL) {
+            libxl_device_virtio_disk *virtio_disk;
+            char *p;
+
+            virtio_disk = ARRAY_EXTEND_INIT(d_config->virtio_disks,
+                                            d_config->num_virtio_disks,
+                                            libxl_device_virtio_disk_init);
+
+            buf = strdup(item);
+
+            p = strtok (buf, ",");
+            while (p != NULL)
+            {
+                while (*p == ' ') p++;
+
+                rc = parse_virtio_disk_config(virtio_disk, p);
+                if (rc) goto out;
+
+                p = strtok (NULL, ",");
+            }
+
+            entry++;
+
+            if (virtio_disk->num_disks == 0) {
+                fprintf(stderr, "At least one virtio disk should be specified\n");
+                rc = 1; goto out;
+            }
+        }
+    }
+
+    rc = 0;
+
+out:
+    free(buf);
+    if (rc) exit(EXIT_FAILURE);
+}
+
 void parse_config_data(const char *config_source,
                        const char *config_data,
                        int config_len,
@@ -2734,6 +2848,7 @@ skip_usbdev:
     }
 
     parse_vkb_list(config, d_config);
+    parse_virtio_disk_list(config, d_config);
 
     xlu_cfg_get_defbool(config, "xend_suspend_evtchn_compat",
                         &c_info->xend_suspend_evtchn_compat, 0);
diff --git a/tools/xl/xl_virtio_disk.c b/tools/xl/xl_virtio_disk.c
new file mode 100644
index 0000000..808a7da
--- /dev/null
+++ b/tools/xl/xl_virtio_disk.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include
+
+#include
+#include
+#include
+
+#include "xl.h"
+#include "xl_utils.h"
+#include "xl_parse.h"
+
+int main_virtio_diskattach(int argc, char **argv)
+{
+    return 0;
+}
+
+int main_virtio_disklist(int argc, char **argv)
+{
+    return 0;
+}
+
+int main_virtio_diskdetach(int argc, char **argv)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4