From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V4 01/24] x86/ioreq: Prepare IOREQ feature for making it common
Date: Tue, 12 Jan 2021 23:52:09 +0200
Message-Id: <1610488352-18494-2-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch makes
some preparation in x86/hvm/ioreq.c before moving it to the common code.
This way we will get a verbatim copy for the code movement in a
subsequent patch.

This patch mostly introduces specific hooks to abstract arch-specific
material, taking into account the requirement to leave the "legacy"
mechanism of mapping magic pages for the IOREQ servers x86-specific
and not expose it to the common code.

These hooks are named according to the more consistent new naming scheme
right away (including dropping the "hvm" prefixes and infixes):
- IOREQ server functions should start with "ioreq_server_"
- IOREQ functions should start with "ioreq_"
Other functions will be renamed in subsequent patches.

Also re-order the #include-s alphabetically.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.
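To make the hook pattern concrete before the diff, here is a minimal,
self-contained sketch (stub enum and stub functions modelled on the
patch, not the real Xen code): the common dispatcher keeps the generic
completion cases and delegates anything arch-specific, such as VMX
real-mode completion, to the arch hook.

#include <assert.h>
#include <stdbool.h>

enum hvm_io_completion {
    HVMIO_no_completion,
    HVMIO_mmio_completion,
    HVMIO_pio_completion,
    HVMIO_realmode_completion   /* x86/VMX-only case */
};

/* Arch hook: each architecture supplies its own definition. */
static bool arch_vcpu_ioreq_completion(enum hvm_io_completion completion)
{
    /* On x86 this would run the VMX real-mode emulation path. */
    assert(completion == HVMIO_realmode_completion);
    return true;
}

/* Common dispatcher: generic cases stay here. */
static bool vcpu_ioreq_handle_completion(enum hvm_io_completion completion)
{
    switch ( completion )
    {
    case HVMIO_no_completion:
        return true;

    case HVMIO_mmio_completion:
    case HVMIO_pio_completion:
        /* Generic MMIO/PIO completion handling stays in common code. */
        return true;

    default:
        /* Anything the common code does not know about is arch-specific. */
        return arch_vcpu_ioreq_completion(completion);
    }
}

int main(void)
{
    return vcpu_ioreq_handle_completion(HVMIO_realmode_completion) ? 0 : 1;
}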
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Alex Bennée
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
   - fold the check of p->type into hvm_get_ioreq_server_range_type()
     and make it return success/failure
   - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
     in arch/x86/hvm/ioreq.c
   - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make arch functions inline and put them into the arch header
     to achieve a true rename by the subsequent patch
   - return void in arch_hvm_destroy_ioreq_server()
   - return bool in arch_hvm_ioreq_destroy()
   - bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
   - rename IOREQ_IO* to IOREQ_STATUS*
   - remove *handle* from arch_handle_hvm_io_completion()
   - re-order #include-s alphabetically
   - rename hvm_get_ioreq_server_range_type() to hvm_ioreq_server_get_type_addr()
     and add "const" to several arguments

Changes V2 -> V3:
   - update patch description
   - name new arch hooks according to the new naming scheme
   - don't make arch hooks inline, move them to ioreq.c
   - make get_ioreq_server() local again
   - rework the whole patch taking into account that the "legacy" interface
     should remain x86-specific (additional arch hooks, etc)
   - update the code to be able to use hvm_map_mem_type_to_ioreq_server()
     in the common code (an extra arch hook, etc)
   - don't include from the arch header
   - add "arch" prefix to hvm_ioreq_server_get_type_addr()
   - move the IOREQ_STATUS_* #define-s introduction to a separate patch
   - move HANDLE_BUFIOREQ to the arch header
   - just return relocate_portio_handler() from arch_ioreq_server_destroy_all()
   - misc adjustments proposed by Jan (adding const, unsigned int
     instead of uint32_t)

Changes V3 -> V4:
   - add Alex's R-b
   - update patch description
   - make arch_ioreq_server_get_type_addr return bool
   - drop #include
   - use two arch hooks in hvm_map_mem_type_to_ioreq_server() to avoid
     calling p2m_change_entry_type_global() with the lock held
---
 xen/arch/x86/hvm/ioreq.c        | 179 ++++++++++++++++++++++++++--------------
 xen/include/asm-x86/hvm/ioreq.h |  22 +++++
 2 files changed, 141 insertions(+), 60 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df..468fe84 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -16,16 +16,15 @@
  * this program; If not, see .
*/ =20 -#include +#include +#include #include +#include #include -#include +#include #include -#include #include -#include -#include -#include +#include #include =20 #include @@ -170,6 +169,29 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv,= ioreq_t *p) return true; } =20 +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) +{ + switch ( io_completion ) + { + case HVMIO_realmode_completion: + { + struct hvm_emulate_ctxt ctxt; + + hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs()); + vmx_realmode_emulate_one(&ctxt); + hvm_emulate_writeback(&ctxt); + + break; + } + + default: + ASSERT_UNREACHABLE(); + break; + } + + return true; +} + bool handle_hvm_io_completion(struct vcpu *v) { struct domain *d =3D v->domain; @@ -209,19 +231,8 @@ bool handle_hvm_io_completion(struct vcpu *v) return handle_pio(vio->io_req.addr, vio->io_req.size, vio->io_req.dir); =20 - case HVMIO_realmode_completion: - { - struct hvm_emulate_ctxt ctxt; - - hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs()); - vmx_realmode_emulate_one(&ctxt); - hvm_emulate_writeback(&ctxt); - - break; - } default: - ASSERT_UNREACHABLE(); - break; + return arch_vcpu_ioreq_completion(io_completion); } =20 return true; @@ -477,9 +488,6 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_se= rver *s, } } =20 -#define HANDLE_BUFIOREQ(s) \ - ((s)->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) - static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, struct vcpu *v) { @@ -586,7 +594,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hv= m_ioreq_server *s) spin_unlock(&s->lock); } =20 -static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s) +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s) { int rc; =20 @@ -601,7 +609,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_= server *s) return rc; } =20 -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) { hvm_unmap_ioreq_gfn(s, true); hvm_unmap_ioreq_gfn(s, false); @@ -674,6 +682,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm= _ioreq_server *s, return rc; } =20 +void arch_ioreq_server_enable(struct hvm_ioreq_server *s) +{ + hvm_remove_ioreq_gfn(s, false); + hvm_remove_ioreq_gfn(s, true); +} + static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) { struct hvm_ioreq_vcpu *sv; @@ -683,8 +697,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_se= rver *s) if ( s->enabled ) goto done; =20 - hvm_remove_ioreq_gfn(s, false); - hvm_remove_ioreq_gfn(s, true); + arch_ioreq_server_enable(s); =20 s->enabled =3D true; =20 @@ -697,6 +710,12 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_s= erver *s) spin_unlock(&s->lock); } =20 +void arch_ioreq_server_disable(struct hvm_ioreq_server *s) +{ + hvm_add_ioreq_gfn(s, true); + hvm_add_ioreq_gfn(s, false); +} + static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) { spin_lock(&s->lock); @@ -704,8 +723,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_s= erver *s) if ( !s->enabled ) goto done; =20 - hvm_add_ioreq_gfn(s, true); - hvm_add_ioreq_gfn(s, false); + arch_ioreq_server_disable(s); =20 s->enabled =3D false; =20 @@ -750,7 +768,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_serve= r *s, =20 fail_add: hvm_ioreq_server_remove_all_vcpus(s); - hvm_ioreq_server_unmap_pages(s); + arch_ioreq_server_unmap_pages(s); =20 hvm_ioreq_server_free_rangesets(s); =20 @@ -764,7 +782,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_se= 
rver *s) hvm_ioreq_server_remove_all_vcpus(s); =20 /* - * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and + * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and * hvm_ioreq_server_free_pages() in that order. * This is because the former will do nothing if the pages * are not mapped, leaving the page to be freed by the latter. @@ -772,7 +790,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_se= rver *s) * the page_info pointer to NULL, meaning the latter will do * nothing. */ - hvm_ioreq_server_unmap_pages(s); + arch_ioreq_server_unmap_pages(s); hvm_ioreq_server_free_pages(s); =20 hvm_ioreq_server_free_rangesets(s); @@ -836,6 +854,12 @@ int hvm_create_ioreq_server(struct domain *d, int bufi= oreq_handling, return rc; } =20 +/* Called when target domain is paused */ +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s) +{ + p2m_set_ioreq_server(s->target, 0, s); +} + int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) { struct hvm_ioreq_server *s; @@ -855,7 +879,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid= _t id) =20 domain_pause(d); =20 - p2m_set_ioreq_server(d, 0, s); + arch_ioreq_server_destroy(s); =20 hvm_ioreq_server_disable(s); =20 @@ -900,7 +924,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservi= d_t id, =20 if ( ioreq_gfn || bufioreq_gfn ) { - rc =3D hvm_ioreq_server_map_pages(s); + rc =3D arch_ioreq_server_map_pages(s); if ( rc ) goto out; } @@ -1080,6 +1104,27 @@ int hvm_unmap_io_range_from_ioreq_server(struct doma= in *d, ioservid_t id, return rc; } =20 +/* Called with ioreq_server lock held */ +int arch_ioreq_server_map_mem_type(struct domain *d, + struct hvm_ioreq_server *s, + uint32_t flags) +{ + return p2m_set_ioreq_server(d, flags, s); +} + +void arch_ioreq_server_map_mem_type_completed(struct domain *d, + struct hvm_ioreq_server *s, + uint32_t flags) +{ + if ( flags =3D=3D 0 ) + { + const struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + + if ( read_atomic(&p2m->ioreq.entry_count) ) + p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); + } +} + /* * Map or unmap an ioreq server to specific memory type. 
For now, only * HVMMEM_ioreq_server is supported, and in the future new types can be @@ -1112,18 +1157,13 @@ int hvm_map_mem_type_to_ioreq_server(struct domain = *d, ioservid_t id, if ( s->emulator !=3D current->domain ) goto out; =20 - rc =3D p2m_set_ioreq_server(d, flags, s); + rc =3D arch_ioreq_server_map_mem_type(d, s, flags); =20 out: spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); =20 - if ( rc =3D=3D 0 && flags =3D=3D 0 ) - { - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - if ( read_atomic(&p2m->ioreq.entry_count) ) - p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); - } + if ( rc =3D=3D 0 ) + arch_ioreq_server_map_mem_type_completed(d, s, flags); =20 return rc; } @@ -1210,12 +1250,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domai= n *d, struct vcpu *v) spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); } =20 +bool arch_ioreq_server_destroy_all(struct domain *d) +{ + return relocate_portio_handler(d, 0xcf8, 0xcf8, 4); +} + void hvm_destroy_all_ioreq_servers(struct domain *d) { struct hvm_ioreq_server *s; unsigned int id; =20 - if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) ) + if ( !arch_ioreq_server_destroy_all(d) ) return; =20 spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -1239,33 +1284,28 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); } =20 -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) +bool arch_ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr) { - struct hvm_ioreq_server *s; - uint32_t cf8; - uint8_t type; - uint64_t addr; - unsigned int id; + unsigned int cf8 =3D d->arch.hvm.pci_cf8; =20 if ( p->type !=3D IOREQ_TYPE_COPY && p->type !=3D IOREQ_TYPE_PIO ) - return NULL; - - cf8 =3D d->arch.hvm.pci_cf8; + return false; =20 if ( p->type =3D=3D IOREQ_TYPE_PIO && (p->addr & ~3) =3D=3D 0xcfc && CF8_ENABLED(cf8) ) { - uint32_t x86_fam; + unsigned int x86_fam, reg; pci_sbdf_t sbdf; - unsigned int reg; =20 reg =3D hvm_pci_decode_addr(cf8, p->addr, &sbdf); =20 /* PCI config data cycle */ - type =3D XEN_DMOP_IO_RANGE_PCI; - addr =3D ((uint64_t)sbdf.sbdf << 32) | reg; + *type =3D XEN_DMOP_IO_RANGE_PCI; + *addr =3D ((uint64_t)sbdf.sbdf << 32) | reg; /* AMD extended configuration space access? */ if ( CF8_ADDR_HI(cf8) && d->arch.cpuid->x86_vendor =3D=3D X86_VENDOR_AMD && @@ -1277,16 +1317,30 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(st= ruct domain *d, =20 if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) && (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) ) - addr |=3D CF8_ADDR_HI(cf8); + *addr |=3D CF8_ADDR_HI(cf8); } } else { - type =3D (p->type =3D=3D IOREQ_TYPE_PIO) ? - XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; - addr =3D p->addr; + *type =3D (p->type =3D=3D IOREQ_TYPE_PIO) ? 
+                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+        *addr = p->addr;
     }
 
+    return true;
+}
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p)
+{
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+    unsigned int id;
+
+    if ( !arch_ioreq_server_get_type_addr(d, p, &type, &addr) )
+        return NULL;
+
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;
@@ -1515,11 +1569,16 @@ static int hvm_access_cf8(
     return X86EMUL_UNHANDLEABLE;
 }
 
+void arch_ioreq_domain_init(struct domain *d)
+{
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+}
+
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);
 
-    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+    arch_ioreq_domain_init(d);
 }
 
 /*
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e9..13d35e1 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,6 +19,9 @@
 #ifndef __ASM_X86_HVM_IOREQ_H__
 #define __ASM_X86_HVM_IOREQ_H__
 
+#define HANDLE_BUFIOREQ(s) \
+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+
 bool hvm_io_pending(struct vcpu *v);
 bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
@@ -55,6 +58,25 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
+int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags);
+void arch_ioreq_server_map_mem_type_completed(struct domain *d,
+                                              struct hvm_ioreq_server *s,
+                                              uint32_t flags);
+bool arch_ioreq_server_destroy_all(struct domain *d);
+bool arch_ioreq_server_get_type_addr(const struct domain *d,
+                                     const ioreq_t *p,
+                                     uint8_t *type,
+                                     uint64_t *addr);
+void arch_ioreq_domain_init(struct domain *d);
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.7.4
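One detail in the hunks above is easy to miss: for PCI config-data
cycles the new hook folds the SBDF and the config-space register into a
single 64-bit key that the server's range sets are matched against. A
small standalone illustration follows; the exact pci_sbdf_t bit packing
used here (seg:bus:dev:fn) is an assumption for this sketch.

#include <stdint.h>
#include <stdio.h>

/* Range key layout used above: SBDF in the upper 32 bits,
 * config-space register offset in the lower bits. */
static uint64_t pci_range_key(uint32_t sbdf, unsigned int reg)
{
    return ((uint64_t)sbdf << 32) | reg;
}

int main(void)
{
    /* Hypothetical device 00:03.0, register offset 0x10. */
    uint32_t sbdf = (0u << 16) | (0u << 8) | (3u << 3) | 0u;

    printf("range key: %#llx\n",
           (unsigned long long)pci_range_key(sbdf, 0x10));
    return 0;
}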
From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V4 02/24] x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving
Date: Tue, 12 Jan 2021 23:52:10 +0200
Message-Id: <1610488352-18494-3-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch continues the preparation of x86/hvm/ioreq.c before moving
it to the common code. Add IOREQ_STATUS_* #define-s and update the
candidates for moving, since X86EMUL_* shouldn't be exposed to the
common code in that form.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
Reviewed-by: Alex Bennée
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V2 -> V3:
   - new patch, was split from
     [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common

Changes V3 -> V4:
   - add Alex's R-b and Jan's A-b
   - add a comment above the IOREQ_STATUS_* #define-s
---
 xen/arch/x86/hvm/ioreq.c        | 16 ++++++++--------
 xen/include/asm-x86/hvm/ioreq.h |  5 +++++
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 468fe84..ff9a546 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1405,7 +1405,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     pg = iorp->va;
 
     if ( !pg )
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
 
     /*
      * Return 0 for the cases we can't deal with:
@@ -1435,7 +1435,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
         break;
     default:
         gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }
 
     spin_lock(&s->bufioreq_lock);
@@ -1445,7 +1445,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     {
         /* The queue is full: send the iopacket through the normal path. */
         spin_unlock(&s->bufioreq_lock);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }
 
     pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
@@ -1476,7 +1476,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
     spin_unlock(&s->bufioreq_lock);
 
-    return X86EMUL_OKAY;
+    return IOREQ_STATUS_HANDLED;
 }
 
 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
@@ -1492,7 +1492,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
         return hvm_send_buffered_ioreq(s, proto_p);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return X86EMUL_RETRY;
+        return IOREQ_STATUS_RETRY;
 
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
@@ -1532,11 +1532,11 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
             notify_via_xen_event_channel(d, port);
 
             sv->pending = true;
-            return X86EMUL_RETRY;
+            return IOREQ_STATUS_RETRY;
         }
     }
 
-    return X86EMUL_UNHANDLEABLE;
+    return IOREQ_STATUS_UNHANDLED;
 }
 
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
@@ -1550,7 +1550,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
             failed++;
     }
 
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 13d35e1..f140ef4 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -77,6 +77,11 @@ bool arch_ioreq_server_get_type_addr(const struct domain *d,
                                      uint64_t *addr);
 void arch_ioreq_domain_init(struct domain *d);
 
+/* This correlation must not be altered */
+#define IOREQ_STATUS_HANDLED    X86EMUL_OKAY
+#define IOREQ_STATUS_UNHANDLED  X86EMUL_UNHANDLEABLE
+#define IOREQ_STATUS_RETRY      X86EMUL_RETRY
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.7.4
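The point of the new constants is that soon-to-be-common code can test
one stable set of IOREQ_STATUS_* names while each architecture maps
them onto its native return codes, exactly as the comment in the header
hunk demands. A self-contained sketch of the idea (numeric values
invented for illustration; the real ones come from x86's X86EMUL_*):

#include <stdio.h>

/* Stand-ins for the arch's native emulation return codes; the
 * numeric values here are invented for this sketch. */
#define X86EMUL_OKAY          0
#define X86EMUL_UNHANDLEABLE  1
#define X86EMUL_RETRY         2

/* The arch header aliases the common names onto the native codes,
 * so common code never spells X86EMUL_* directly. */
#define IOREQ_STATUS_HANDLED    X86EMUL_OKAY
#define IOREQ_STATUS_UNHANDLED  X86EMUL_UNHANDLEABLE
#define IOREQ_STATUS_RETRY      X86EMUL_RETRY

/* Hypothetical sender: fails over to the synchronous path when the
 * buffered queue is full, mirroring hvm_send_buffered_ioreq(). */
static int send_buffered_ioreq_stub(int queue_full)
{
    return queue_full ? IOREQ_STATUS_UNHANDLED : IOREQ_STATUS_HANDLED;
}

int main(void)
{
    if ( send_buffered_ioreq_stub(1) == IOREQ_STATUS_UNHANDLED )
        printf("queue full: send through the normal path\n");
    return 0;
}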
From nobody Thu Apr 25 00:25:37 2024

From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V4 03/24] x86/ioreq: Provide out-of-line wrapper for the handle_mmio()
Date: Tue, 12 Jan 2021 23:52:11 +0200
Message-Id: <1610488352-18494-4-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is about to become a common feature and Arm will have its own
implementation. But the name of the function is pretty generic and can
be confusing on Arm (we already have a try_handle_mmio()).

In order not to rename the function (which is used for a varying set
of purposes on x86) globally and to get a non-confusing variant on Arm,
provide a wrapper arch_ioreq_complete_mmio() to be used in common and
Arm code.

Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Jan Beulich
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "handle"
   - add Jan's A-b

Changes V2 -> V3:
   - remove Jan's A-b
   - update patch subject/description
   - use out-of-line function instead of #define
   - put earlier in the series to avoid breakage

Changes V3 -> V4:
   - add Jan's R-b
   - rename ioreq_complete_mmio() to arch_ioreq_complete_mmio()
---
 xen/arch/x86/hvm/ioreq.c        | 7 ++++++-
 xen/include/asm-x86/hvm/ioreq.h | 1 +
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index ff9a546..00c68f5 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -35,6 +35,11 @@
 #include
 #include
 
+bool arch_ioreq_complete_mmio(void)
+{
+    return handle_mmio();
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct hvm_ioreq_server *s)
 {
@@ -225,7 +230,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
         break;
 
     case HVMIO_mmio_completion:
-        return handle_mmio();
+        return arch_ioreq_complete_mmio();
 
     case HVMIO_pio_completion:
         return handle_pio(vio->io_req.addr, vio->io_req.size,
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index f140ef4..0e64e76 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -58,6 +58,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+bool arch_ioreq_complete_mmio(void);
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
 int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
 void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
-- 
2.7.4
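The rename-by-wrapper trick is small enough to show in isolation: keep
the old, generically named function untouched and give the common code
a purpose-specific entry point. A minimal sketch, with a stub standing
in for the real x86 handle_mmio():

#include <stdbool.h>

/* Stub standing in for x86's existing, generically named helper. */
static bool handle_mmio(void)
{
    /* ... would re-run the emulation that produced the MMIO ioreq ... */
    return true;
}

/*
 * Out-of-line wrapper with an IOREQ-specific name.  The common code
 * calls only this, so Arm is free to back it with its own MMIO
 * handling instead of handle_mmio().
 */
bool arch_ioreq_complete_mmio(void)
{
    return handle_mmio();
}

int main(void)
{
    return arch_ioreq_complete_mmio() ? 0 : 1;
}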
From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Roger Pau Monné, Paul Durrant, Jun Nakajima, Kevin Tian, Tim Deegan, Julien Grall
Subject: [PATCH V4 04/24] xen/ioreq: Make x86's IOREQ feature common
Date: Tue, 12 Jan 2021 23:52:12 +0200
Message-Id: <1610488352-18494-5-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch moves
the previously prepared IOREQ support to the common code (the code
movement is a verbatim copy).

The "legacy" mechanism of mapping magic pages for the IOREQ servers
remains x86-specific and is not exposed to the common code.

The common IOREQ feature is supposed to be built with the IOREQ_SERVER
option enabled, which is selected by x86's config HVM for now.

In order to avoid having a gigantic patch here, the subsequent patches
will update the remaining bits in the common code step by step:
- Make IOREQ-related structs/materials common
- Drop the "hvm" prefixes and infixes
- Remove a layering violation by moving the corresponding fields
  out of *arch.hvm* or abstracting away accesses to them

Also include the header which will be needed on Arm, to avoid touching
the common code again when introducing the Arm-specific bits.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.
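Before the diff, a self-contained sketch (POSIX stubs and invented
names, not the real Xen code) of the layering the movement establishes:
the common routine owns locking and the generic enabled-state handling,
and brackets it with arch hooks that remain in x86, where they manage
the legacy magic pages.

#include <stdbool.h>
#include <pthread.h>

/* Heavily reduced stand-in for struct hvm_ioreq_server. */
struct ioreq_server {
    pthread_mutex_t lock;
    bool enabled;
};

/* Stays per-arch: on x86 these hide/restore the magic pages. */
static void arch_ioreq_server_enable(struct ioreq_server *s)  { (void)s; }
static void arch_ioreq_server_disable(struct ioreq_server *s) { (void)s; }

/* Moves to common code: generic locking and state only. */
static void ioreq_server_enable(struct ioreq_server *s)
{
    pthread_mutex_lock(&s->lock);

    if ( !s->enabled )
    {
        arch_ioreq_server_enable(s);
        s->enabled = true;
    }

    pthread_mutex_unlock(&s->lock);
}

static void ioreq_server_disable(struct ioreq_server *s)
{
    pthread_mutex_lock(&s->lock);

    if ( s->enabled )
    {
        arch_ioreq_server_disable(s);
        s->enabled = false;
    }

    pthread_mutex_unlock(&s->lock);
}

int main(void)
{
    struct ioreq_server s = { PTHREAD_MUTEX_INITIALIZER, false };

    ioreq_server_enable(&s);
    ioreq_server_disable(&s);
    return 0;
}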
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is on review:
https://patchwork.kernel.org/patch/11816689/
***

Changes RFC -> V1:
   - was split into three patches:
     - x86/ioreq: Prepare IOREQ feature for making it common
     - xen/ioreq: Make x86's IOREQ feature common
     - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
   - update MAINTAINERS file
   - do not use a separate subdir for the IOREQ stuff, move it to:
     - xen/common/ioreq.c
     - xen/include/xen/ioreq.h
   - update x86's files to include xen/ioreq.h
   - remove unneeded headers in arch/x86/hvm/ioreq.c
   - re-order the headers alphabetically in common/ioreq.c
   - update common/ioreq.c according to the newly introduced arch functions:
     arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make everything needed in the previous patch to achieve
     a true rename here
   - don't include unnecessary headers from asm-x86/hvm/ioreq.h
     and xen/ioreq.h
   - use __XEN_IOREQ_H__ instead of __IOREQ_H__
   - move get_ioreq_server() to common/ioreq.c

Changes V2 -> V3:
   - update patch description
   - make everything needed in the previous patch to not expose
     the "legacy" interface to the common code here
   - update the patch according to the "legacy interface" being
     x86-specific (additional arch hooks, etc)
   - include in common ioreq.c

Changes V3 -> V4:
   - rebase
   - don't include from the arch header
   - move all arch hook declarations to the common header
---
 MAINTAINERS                     |    8 +-
 xen/arch/x86/Kconfig            |    1 +
 xen/arch/x86/hvm/dm.c           |    2 +-
 xen/arch/x86/hvm/emulate.c      |    2 +-
 xen/arch/x86/hvm/hvm.c          |    2 +-
 xen/arch/x86/hvm/io.c           |    2 +-
 xen/arch/x86/hvm/ioreq.c        | 1347 ++--------------------------------
 xen/arch/x86/hvm/stdvga.c       |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c     |    3 +-
 xen/arch/x86/mm.c               |    2 +-
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/common/Kconfig              |    3 +
 xen/common/Makefile             |    1 +
 xen/common/ioreq.c              | 1290 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |   59 --
 xen/include/xen/ioreq.h         |   93 +++
 16 files changed, 1455 insertions(+), 1364 deletions(-)
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/xen/ioreq.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 6dbd99a..0160cab 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -333,6 +333,13 @@ X: xen/drivers/passthrough/vtd/
 X: xen/drivers/passthrough/device_tree.c
 F: xen/include/xen/iommu.h
 
+I/O EMULATION (IOREQ)
+M: Paul Durrant
+S: Supported
+F: xen/common/ioreq.c
+F: xen/include/xen/ioreq.h
+F: xen/include/public/hvm/ioreq.h
+
 KCONFIG
 M: Doug Goldstein
 S: Supported
@@ -549,7 +556,6 @@ F: xen/arch/x86/hvm/ioreq.c
 F: xen/include/asm-x86/hvm/emulate.h
 F: xen/include/asm-x86/hvm/io.h
 F: xen/include/asm-x86/hvm/ioreq.h
-F: xen/include/public/hvm/ioreq.h
 
 X86 MEMORY MANAGEMENT
 M: Jan Beulich
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 24868aa..abe0fce 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config PV_LINEAR_PT
 
 config HVM
 	def_bool !PV_SHIM_EXCLUSIVE
+	select IOREQ_SERVER
 	prompt "HVM support"
 	---help---
 	  Interfaces to support HVM domains.
HVM domains require hardware diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c index 71f5ca4..d3e2a9e 100644 --- a/xen/arch/x86/hvm/dm.c +++ b/xen/arch/x86/hvm/dm.c @@ -17,12 +17,12 @@ #include #include #include +#include #include #include =20 #include #include -#include #include =20 #include diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 24cf85f..60ca465 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -10,6 +10,7 @@ */ =20 #include +#include #include #include #include @@ -20,7 +21,6 @@ #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 54e32e4..bc96947 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -20,6 +20,7 @@ =20 #include #include +#include #include #include #include @@ -64,7 +65,6 @@ #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index 3e09d9b..11e007d 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -19,6 +19,7 @@ */ =20 #include +#include #include #include #include @@ -35,7 +36,6 @@ #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 00c68f5..177b964 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include #include @@ -29,7 +30,6 @@ =20 #include #include -#include #include =20 #include @@ -40,140 +40,6 @@ bool arch_ioreq_complete_mmio(void) return handle_mmio(); } =20 -static void set_ioreq_server(struct domain *d, unsigned int id, - struct hvm_ioreq_server *s) -{ - ASSERT(id < MAX_NR_IOREQ_SERVERS); - ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); - - d->arch.hvm.ioreq_server.server[id] =3D s; -} - -#define GET_IOREQ_SERVER(d, id) \ - (d)->arch.hvm.ioreq_server.server[id] - -static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, - unsigned int id) -{ - if ( id >=3D MAX_NR_IOREQ_SERVERS ) - return NULL; - - return GET_IOREQ_SERVER(d, id); -} - -/* - * Iterate over all possible ioreq servers. - * - * NOTE: The iteration is backwards such that more recently created - * ioreq servers are favoured in hvm_select_ioreq_server(). - * This is a semantic that previously existed when ioreq servers - * were held in a linked list. 
- */ -#define FOR_EACH_IOREQ_SERVER(d, id, s) \ - for ( (id) =3D MAX_NR_IOREQ_SERVERS; (id) !=3D 0; ) \ - if ( !(s =3D GET_IOREQ_SERVER(d, --(id))) ) \ - continue; \ - else - -static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) -{ - shared_iopage_t *p =3D s->ioreq.va; - - ASSERT((v =3D=3D current) || !vcpu_runnable(v)); - ASSERT(p !=3D NULL); - - return &p->vcpu_ioreq[v->vcpu_id]; -} - -static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, - struct hvm_ioreq_server **s= rvp) -{ - struct domain *d =3D v->domain; - struct hvm_ioreq_server *s; - unsigned int id; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - struct hvm_ioreq_vcpu *sv; - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu =3D=3D v && sv->pending ) - { - if ( srvp ) - *srvp =3D s; - return sv; - } - } - } - - return NULL; -} - -bool hvm_io_pending(struct vcpu *v) -{ - return get_pending_vcpu(v, NULL); -} - -static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) -{ - unsigned int prev_state =3D STATE_IOREQ_NONE; - unsigned int state =3D p->state; - uint64_t data =3D ~0; - - smp_rmb(); - - /* - * The only reason we should see this condition be false is when an - * emulator dying races with I/O being requested. - */ - while ( likely(state !=3D STATE_IOREQ_NONE) ) - { - if ( unlikely(state < prev_state) ) - { - gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %= u\n", - prev_state, state); - sv->pending =3D false; - domain_crash(sv->vcpu->domain); - return false; /* bail */ - } - - switch ( prev_state =3D state ) - { - case STATE_IORESP_READY: /* IORESP_READY -> NONE */ - p->state =3D STATE_IOREQ_NONE; - data =3D p->data; - break; - - case STATE_IOREQ_READY: /* IOREQ_{READY,INPROCESS} -> IORESP_READ= Y */ - case STATE_IOREQ_INPROCESS: - wait_on_xen_event_channel(sv->ioreq_evtchn, - ({ state =3D p->state; - smp_rmb(); - state !=3D prev_state; })); - continue; - - default: - gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state); - sv->pending =3D false; - domain_crash(sv->vcpu->domain); - return false; /* bail */ - } - - break; - } - - p =3D &sv->vcpu->arch.hvm.hvm_io.io_req; - if ( hvm_ioreq_needs_completion(p) ) - p->data =3D data; - - sv->pending =3D false; - - return true; -} - bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) { switch ( io_completion ) @@ -197,52 +63,6 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion = io_completion) return true; } =20 -bool handle_hvm_io_completion(struct vcpu *v) -{ - struct domain *d =3D v->domain; - struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; - struct hvm_ioreq_server *s; - struct hvm_ioreq_vcpu *sv; - enum hvm_io_completion io_completion; - - if ( has_vpci(d) && vpci_process_pending(v) ) - { - raise_softirq(SCHEDULE_SOFTIRQ); - return false; - } - - sv =3D get_pending_vcpu(v, &s); - if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) - return false; - - vio->io_req.state =3D hvm_ioreq_needs_completion(&vio->io_req) ? 
- STATE_IORESP_READY : STATE_IOREQ_NONE; - - msix_write_completion(v); - vcpu_end_shutdown_deferral(v); - - io_completion =3D vio->io_completion; - vio->io_completion =3D HVMIO_no_completion; - - switch ( io_completion ) - { - case HVMIO_no_completion: - break; - - case HVMIO_mmio_completion: - return arch_ioreq_complete_mmio(); - - case HVMIO_pio_completion: - return handle_pio(vio->io_req.addr, vio->io_req.size, - vio->io_req.dir); - - default: - return arch_vcpu_ioreq_completion(io_completion); - } - - return true; -} - static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s) { struct domain *d =3D s->target; @@ -359,93 +179,6 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *= s, bool buf) return rc; } =20 -static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) -{ - struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; - struct page_info *page; - - if ( iorp->page ) - { - /* - * If a guest frame has already been mapped (which may happen - * on demand if hvm_get_ioreq_server_info() is called), then - * allocating a page is not permitted. - */ - if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) - return -EPERM; - - return 0; - } - - page =3D alloc_domheap_page(s->target, MEMF_no_refcount); - - if ( !page ) - return -ENOMEM; - - if ( !get_page_and_type(page, s->target, PGT_writable_page) ) - { - /* - * The domain can't possibly know about this page yet, so failure - * here is a clear indication of something fishy going on. - */ - domain_crash(s->emulator); - return -ENODATA; - } - - iorp->va =3D __map_domain_page_global(page); - if ( !iorp->va ) - goto fail; - - iorp->page =3D page; - clear_page(iorp->va); - return 0; - - fail: - put_page_alloc_ref(page); - put_page_and_type(page); - - return -ENOMEM; -} - -static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) -{ - struct hvm_ioreq_page *iorp =3D buf ? 
&s->bufioreq : &s->ioreq; - struct page_info *page =3D iorp->page; - - if ( !page ) - return; - - iorp->page =3D NULL; - - unmap_domain_page_global(iorp->va); - iorp->va =3D NULL; - - put_page_alloc_ref(page); - put_page_and_type(page); -} - -bool is_ioreq_server_page(struct domain *d, const struct page_info *page) -{ - const struct hvm_ioreq_server *s; - unsigned int id; - bool found =3D false; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - if ( (s->ioreq.page =3D=3D page) || (s->bufioreq.page =3D=3D page)= ) - { - found =3D true; - break; - } - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return found; -} - static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) =20 { @@ -480,125 +213,6 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server = *s, bool buf) return rc; } =20 -static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, - struct hvm_ioreq_vcpu *sv) -{ - ASSERT(spin_is_locked(&s->lock)); - - if ( s->ioreq.va !=3D NULL ) - { - ioreq_t *p =3D get_ioreq(s, sv->vcpu); - - p->vp_eport =3D sv->ioreq_evtchn; - } -} - -static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, - struct vcpu *v) -{ - struct hvm_ioreq_vcpu *sv; - int rc; - - sv =3D xzalloc(struct hvm_ioreq_vcpu); - - rc =3D -ENOMEM; - if ( !sv ) - goto fail1; - - spin_lock(&s->lock); - - rc =3D alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, - s->emulator->domain_id, NULL); - if ( rc < 0 ) - goto fail2; - - sv->ioreq_evtchn =3D rc; - - if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) - { - rc =3D alloc_unbound_xen_event_channel(v->domain, 0, - s->emulator->domain_id, NULL); - if ( rc < 0 ) - goto fail3; - - s->bufioreq_evtchn =3D rc; - } - - sv->vcpu =3D v; - - list_add(&sv->list_entry, &s->ioreq_vcpu_list); - - if ( s->enabled ) - hvm_update_ioreq_evtchn(s, sv); - - spin_unlock(&s->lock); - return 0; - - fail3: - free_xen_event_channel(v->domain, sv->ioreq_evtchn); - - fail2: - spin_unlock(&s->lock); - xfree(sv); - - fail1: - return rc; -} - -static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, - struct vcpu *v) -{ - struct hvm_ioreq_vcpu *sv; - - spin_lock(&s->lock); - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu !=3D v ) - continue; - - list_del(&sv->list_entry); - - if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) - free_xen_event_channel(v->domain, s->bufioreq_evtchn); - - free_xen_event_channel(v->domain, sv->ioreq_evtchn); - - xfree(sv); - break; - } - - spin_unlock(&s->lock); -} - -static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) -{ - struct hvm_ioreq_vcpu *sv, *next; - - spin_lock(&s->lock); - - list_for_each_entry_safe ( sv, - next, - &s->ioreq_vcpu_list, - list_entry ) - { - struct vcpu *v =3D sv->vcpu; - - list_del(&sv->list_entry); - - if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) - free_xen_event_channel(v->domain, s->bufioreq_evtchn); - - free_xen_event_channel(v->domain, sv->ioreq_evtchn); - - xfree(sv); - } - - spin_unlock(&s->lock); -} - int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s) { int rc; @@ -620,705 +234,80 @@ void arch_ioreq_server_unmap_pages(struct hvm_ioreq_= server *s) hvm_unmap_ioreq_gfn(s, false); } =20 -static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +void arch_ioreq_server_enable(struct hvm_ioreq_server *s) { - int rc; - - rc =3D hvm_alloc_ioreq_mfn(s, false); - - if ( !rc && (s->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) ) - rc =3D hvm_alloc_ioreq_mfn(s, 
true); - - if ( rc ) - hvm_free_ioreq_mfn(s, false); - - return rc; + hvm_remove_ioreq_gfn(s, false); + hvm_remove_ioreq_gfn(s, true); } =20 -static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +void arch_ioreq_server_disable(struct hvm_ioreq_server *s) { - hvm_free_ioreq_mfn(s, true); - hvm_free_ioreq_mfn(s, false); + hvm_add_ioreq_gfn(s, true); + hvm_add_ioreq_gfn(s, false); } =20 -static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +/* Called when target domain is paused */ +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s) { - unsigned int i; - - for ( i =3D 0; i < NR_IO_RANGE_TYPES; i++ ) - rangeset_destroy(s->range[i]); + p2m_set_ioreq_server(s->target, 0, s); } =20 -static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, - ioservid_t id) +/* Called with ioreq_server lock held */ +int arch_ioreq_server_map_mem_type(struct domain *d, + struct hvm_ioreq_server *s, + uint32_t flags) { - unsigned int i; - int rc; + return p2m_set_ioreq_server(d, flags, s); +} =20 - for ( i =3D 0; i < NR_IO_RANGE_TYPES; i++ ) +void arch_ioreq_server_map_mem_type_completed(struct domain *d, + struct hvm_ioreq_server *s, + uint32_t flags) +{ + if ( flags =3D=3D 0 ) { - char *name; - - rc =3D asprintf(&name, "ioreq_server %d %s", id, - (i =3D=3D XEN_DMOP_IO_RANGE_PORT) ? "port" : - (i =3D=3D XEN_DMOP_IO_RANGE_MEMORY) ? "memory" : - (i =3D=3D XEN_DMOP_IO_RANGE_PCI) ? "pci" : - ""); - if ( rc ) - goto fail; - - s->range[i] =3D rangeset_new(s->target, name, - RANGESETF_prettyprint_hex); - - xfree(name); - - rc =3D -ENOMEM; - if ( !s->range[i] ) - goto fail; + const struct p2m_domain *p2m =3D p2m_get_hostp2m(d); =20 - rangeset_limit(s->range[i], MAX_NR_IO_RANGES); + if ( read_atomic(&p2m->ioreq.entry_count) ) + p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); } - - return 0; - - fail: - hvm_ioreq_server_free_rangesets(s); - - return rc; } =20 -void arch_ioreq_server_enable(struct hvm_ioreq_server *s) +bool arch_ioreq_server_destroy_all(struct domain *d) { - hvm_remove_ioreq_gfn(s, false); - hvm_remove_ioreq_gfn(s, true); + return relocate_portio_handler(d, 0xcf8, 0xcf8, 4); } =20 -static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +bool arch_ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr) { - struct hvm_ioreq_vcpu *sv; - - spin_lock(&s->lock); - - if ( s->enabled ) - goto done; - - arch_ioreq_server_enable(s); + unsigned int cf8 =3D d->arch.hvm.pci_cf8; =20 - s->enabled =3D true; + if ( p->type !=3D IOREQ_TYPE_COPY && p->type !=3D IOREQ_TYPE_PIO ) + return false; =20 - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - hvm_update_ioreq_evtchn(s, sv); + if ( p->type =3D=3D IOREQ_TYPE_PIO && + (p->addr & ~3) =3D=3D 0xcfc && + CF8_ENABLED(cf8) ) + { + unsigned int x86_fam, reg; + pci_sbdf_t sbdf; =20 - done: - spin_unlock(&s->lock); -} + reg =3D hvm_pci_decode_addr(cf8, p->addr, &sbdf); =20 -void arch_ioreq_server_disable(struct hvm_ioreq_server *s) -{ - hvm_add_ioreq_gfn(s, true); - hvm_add_ioreq_gfn(s, false); -} - -static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) -{ - spin_lock(&s->lock); - - if ( !s->enabled ) - goto done; - - arch_ioreq_server_disable(s); - - s->enabled =3D false; - - done: - spin_unlock(&s->lock); -} - -static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, - struct domain *d, int bufioreq_handling, - ioservid_t id) -{ - struct domain *currd =3D current->domain; - struct vcpu *v; - int rc; - - s->target =3D d; - 
- get_knownalive_domain(currd); - s->emulator =3D currd; - - spin_lock_init(&s->lock); - INIT_LIST_HEAD(&s->ioreq_vcpu_list); - spin_lock_init(&s->bufioreq_lock); - - s->ioreq.gfn =3D INVALID_GFN; - s->bufioreq.gfn =3D INVALID_GFN; - - rc =3D hvm_ioreq_server_alloc_rangesets(s, id); - if ( rc ) - return rc; - - s->bufioreq_handling =3D bufioreq_handling; - - for_each_vcpu ( d, v ) - { - rc =3D hvm_ioreq_server_add_vcpu(s, v); - if ( rc ) - goto fail_add; - } - - return 0; - - fail_add: - hvm_ioreq_server_remove_all_vcpus(s); - arch_ioreq_server_unmap_pages(s); - - hvm_ioreq_server_free_rangesets(s); - - put_domain(s->emulator); - return rc; -} - -static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) -{ - ASSERT(!s->enabled); - hvm_ioreq_server_remove_all_vcpus(s); - - /* - * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and - * hvm_ioreq_server_free_pages() in that order. - * This is because the former will do nothing if the pages - * are not mapped, leaving the page to be freed by the latter. - * However if the pages are mapped then the former will set - * the page_info pointer to NULL, meaning the latter will do - * nothing. - */ - arch_ioreq_server_unmap_pages(s); - hvm_ioreq_server_free_pages(s); - - hvm_ioreq_server_free_rangesets(s); - - put_domain(s->emulator); -} - -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id) -{ - struct hvm_ioreq_server *s; - unsigned int i; - int rc; - - if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC ) - return -EINVAL; - - s =3D xzalloc(struct hvm_ioreq_server); - if ( !s ) - return -ENOMEM; - - domain_pause(d); - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - for ( i =3D 0; i < MAX_NR_IOREQ_SERVERS; i++ ) - { - if ( !GET_IOREQ_SERVER(d, i) ) - break; - } - - rc =3D -ENOSPC; - if ( i >=3D MAX_NR_IOREQ_SERVERS ) - goto fail; - - /* - * It is safe to call set_ioreq_server() prior to - * hvm_ioreq_server_init() since the target domain is paused. - */ - set_ioreq_server(d, i, s); - - rc =3D hvm_ioreq_server_init(s, d, bufioreq_handling, i); - if ( rc ) - { - set_ioreq_server(d, i, NULL); - goto fail; - } - - if ( id ) - *id =3D i; - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - domain_unpause(d); - - return 0; - - fail: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - domain_unpause(d); - - xfree(s); - return rc; -} - -/* Called when target domain is paused */ -void arch_ioreq_server_destroy(struct hvm_ioreq_server *s) -{ - p2m_set_ioreq_server(s->target, 0, s); -} - -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - domain_pause(d); - - arch_ioreq_server_destroy(s); - - hvm_ioreq_server_disable(s); - - /* - * It is safe to call hvm_ioreq_server_deinit() prior to - * set_ioreq_server() since the target domain is paused. 
- */ - hvm_ioreq_server_deinit(s); - set_ioreq_server(d, id, NULL); - - domain_unpause(d); - - xfree(s); - - rc =3D 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - if ( ioreq_gfn || bufioreq_gfn ) - { - rc =3D arch_ioreq_server_map_pages(s); - if ( rc ) - goto out; - } - - if ( ioreq_gfn ) - *ioreq_gfn =3D gfn_x(s->ioreq.gfn); - - if ( HANDLE_BUFIOREQ(s) ) - { - if ( bufioreq_gfn ) - *bufioreq_gfn =3D gfn_x(s->bufioreq.gfn); - - if ( bufioreq_port ) - *bufioreq_port =3D s->bufioreq_evtchn; - } - - rc =3D 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn) -{ - struct hvm_ioreq_server *s; - int rc; - - ASSERT(is_hvm_domain(d)); - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - rc =3D hvm_ioreq_server_alloc_pages(s); - if ( rc ) - goto out; - - switch ( idx ) - { - case XENMEM_resource_ioreq_server_frame_bufioreq: - rc =3D -ENOENT; - if ( !HANDLE_BUFIOREQ(s) ) - goto out; - - *mfn =3D page_to_mfn(s->bufioreq.page); - rc =3D 0; - break; - - case XENMEM_resource_ioreq_server_frame_ioreq(0): - *mfn =3D page_to_mfn(s->ioreq.page); - rc =3D 0; - break; - - default: - rc =3D -EINVAL; - break; - } - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) -{ - struct hvm_ioreq_server *s; - struct rangeset *r; - int rc; - - if ( start > end ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - switch ( type ) - { - case XEN_DMOP_IO_RANGE_PORT: - case XEN_DMOP_IO_RANGE_MEMORY: - case XEN_DMOP_IO_RANGE_PCI: - r =3D s->range[type]; - break; - - default: - r =3D NULL; - break; - } - - rc =3D -EINVAL; - if ( !r ) - goto out; - - rc =3D -EEXIST; - if ( rangeset_overlaps_range(r, start, end) ) - goto out; - - rc =3D rangeset_add_range(r, start, end); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) -{ - struct hvm_ioreq_server *s; - struct rangeset *r; - int rc; - - if ( start > end ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - switch ( type ) - { - case XEN_DMOP_IO_RANGE_PORT: - case XEN_DMOP_IO_RANGE_MEMORY: - case XEN_DMOP_IO_RANGE_PCI: - r =3D s->range[type]; - break; - - default: - r =3D NULL; - break; - } - - rc =3D -EINVAL; - if ( !r ) - goto out; - - rc =3D -ENOENT; - if ( !rangeset_contains_range(r, 
start, end) ) - goto out; - - rc =3D rangeset_remove_range(r, start, end); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -/* Called with ioreq_server lock held */ -int arch_ioreq_server_map_mem_type(struct domain *d, - struct hvm_ioreq_server *s, - uint32_t flags) -{ - return p2m_set_ioreq_server(d, flags, s); -} - -void arch_ioreq_server_map_mem_type_completed(struct domain *d, - struct hvm_ioreq_server *s, - uint32_t flags) -{ - if ( flags =3D=3D 0 ) - { - const struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - if ( read_atomic(&p2m->ioreq.entry_count) ) - p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); - } -} - -/* - * Map or unmap an ioreq server to specific memory type. For now, only - * HVMMEM_ioreq_server is supported, and in the future new types can be - * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And - * currently, only write operations are to be forwarded to an ioreq server. - * Support for the emulation of read operations can be added when an ioreq - * server has such requirement in the future. - */ -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags) -{ - struct hvm_ioreq_server *s; - int rc; - - if ( type !=3D HVMMEM_ioreq_server ) - return -EINVAL; - - if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - rc =3D arch_ioreq_server_map_mem_type(d, s, flags); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - if ( rc =3D=3D 0 ) - arch_ioreq_server_map_mem_type_completed(d, s, flags); - - return rc; -} - -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - domain_pause(d); - - if ( enabled ) - hvm_ioreq_server_enable(s); - else - hvm_ioreq_server_disable(s); - - domain_unpause(d); - - rc =3D 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - return rc; -} - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) -{ - struct hvm_ioreq_server *s; - unsigned int id; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - rc =3D hvm_ioreq_server_add_vcpu(s, v); - if ( rc ) - goto fail; - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return 0; - - fail: - while ( ++id !=3D MAX_NR_IOREQ_SERVERS ) - { - s =3D GET_IOREQ_SERVER(d, id); - - if ( !s ) - continue; - - hvm_ioreq_server_remove_vcpu(s, v); - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) -{ - struct hvm_ioreq_server *s; - unsigned int id; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - hvm_ioreq_server_remove_vcpu(s, v); - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); -} - -bool arch_ioreq_server_destroy_all(struct domain *d) -{ - return relocate_portio_handler(d, 0xcf8, 0xcf8, 4); -} - -void hvm_destroy_all_ioreq_servers(struct domain *d) -{ - struct hvm_ioreq_server *s; - unsigned int id; - - if ( 
!arch_ioreq_server_destroy_all(d) ) - return; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - /* No need to domain_pause() as the domain is being torn down */ - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - hvm_ioreq_server_disable(s); - - /* - * It is safe to call hvm_ioreq_server_deinit() prior to - * set_ioreq_server() since the target domain is being destroyed. - */ - hvm_ioreq_server_deinit(s); - set_ioreq_server(d, id, NULL); - - xfree(s); - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); -} - -bool arch_ioreq_server_get_type_addr(const struct domain *d, - const ioreq_t *p, - uint8_t *type, - uint64_t *addr) -{ - unsigned int cf8 =3D d->arch.hvm.pci_cf8; - - if ( p->type !=3D IOREQ_TYPE_COPY && p->type !=3D IOREQ_TYPE_PIO ) - return false; - - if ( p->type =3D=3D IOREQ_TYPE_PIO && - (p->addr & ~3) =3D=3D 0xcfc && - CF8_ENABLED(cf8) ) - { - unsigned int x86_fam, reg; - pci_sbdf_t sbdf; - - reg =3D hvm_pci_decode_addr(cf8, p->addr, &sbdf); - - /* PCI config data cycle */ - *type =3D XEN_DMOP_IO_RANGE_PCI; - *addr =3D ((uint64_t)sbdf.sbdf << 32) | reg; - /* AMD extended configuration space access? */ - if ( CF8_ADDR_HI(cf8) && - d->arch.cpuid->x86_vendor =3D=3D X86_VENDOR_AMD && - (x86_fam =3D get_cpu_family( - d->arch.cpuid->basic.raw_fms, NULL, NULL)) >=3D 0x10 && - x86_fam < 0x17 ) - { - uint64_t msr_val; + /* PCI config data cycle */ + *type =3D XEN_DMOP_IO_RANGE_PCI; + *addr =3D ((uint64_t)sbdf.sbdf << 32) | reg; + /* AMD extended configuration space access? */ + if ( CF8_ADDR_HI(cf8) && + d->arch.cpuid->x86_vendor =3D=3D X86_VENDOR_AMD && + (x86_fam =3D get_cpu_family( + d->arch.cpuid->basic.raw_fms, NULL, NULL)) >=3D 0x10 && + x86_fam < 0x17 ) + { + uint64_t msr_val; =20 if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) && (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) ) @@ -1335,233 +324,6 @@ bool arch_ioreq_server_get_type_addr(const struct do= main *d, return true; } =20 -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) -{ - struct hvm_ioreq_server *s; - uint8_t type; - uint64_t addr; - unsigned int id; - - if ( !arch_ioreq_server_get_type_addr(d, p, &type, &addr) ) - return NULL; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - struct rangeset *r; - - if ( !s->enabled ) - continue; - - r =3D s->range[type]; - - switch ( type ) - { - unsigned long start, end; - - case XEN_DMOP_IO_RANGE_PORT: - start =3D addr; - end =3D start + p->size - 1; - if ( rangeset_contains_range(r, start, end) ) - return s; - - break; - - case XEN_DMOP_IO_RANGE_MEMORY: - start =3D hvm_mmio_first_byte(p); - end =3D hvm_mmio_last_byte(p); - - if ( rangeset_contains_range(r, start, end) ) - return s; - - break; - - case XEN_DMOP_IO_RANGE_PCI: - if ( rangeset_contains_singleton(r, addr >> 32) ) - { - p->type =3D IOREQ_TYPE_PCI_CONFIG; - p->addr =3D addr; - return s; - } - - break; - } - } - - return NULL; -} - -static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) -{ - struct domain *d =3D current->domain; - struct hvm_ioreq_page *iorp; - buffered_iopage_t *pg; - buf_ioreq_t bp =3D { .data =3D p->data, - .addr =3D p->addr, - .type =3D p->type, - .dir =3D p->dir }; - /* Timeoffset sends 64b data, but no address. Use two consecutive slot= s. 
*/ - int qw =3D 0; - - /* Ensure buffered_iopage fits in a page */ - BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE); - - iorp =3D &s->bufioreq; - pg =3D iorp->va; - - if ( !pg ) - return IOREQ_STATUS_UNHANDLED; - - /* - * Return 0 for the cases we can't deal with: - * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB - * - we cannot buffer accesses to guest memory buffers, as the guest - * may expect the memory buffer to be synchronously accessed - * - the count field is usually used with data_is_ptr and since we do= n't - * support data_is_ptr we do not waste space for the count field ei= ther - */ - if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count !=3D 1) ) - return 0; - - switch ( p->size ) - { - case 1: - bp.size =3D 0; - break; - case 2: - bp.size =3D 1; - break; - case 4: - bp.size =3D 2; - break; - case 8: - bp.size =3D 3; - qw =3D 1; - break; - default: - gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); - return IOREQ_STATUS_UNHANDLED; - } - - spin_lock(&s->bufioreq_lock); - - if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=3D - (IOREQ_BUFFER_SLOT_NUM - qw) ) - { - /* The queue is full: send the iopacket through the normal path. */ - spin_unlock(&s->bufioreq_lock); - return IOREQ_STATUS_UNHANDLED; - } - - pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] =3D bp; - - if ( qw ) - { - bp.data =3D p->data >> 32; - pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = =3D bp; - } - - /* Make the ioreq_t visible /before/ write_pointer. */ - smp_wmb(); - pg->ptrs.write_pointer +=3D qw ? 2 : 1; - - /* Canonicalize read/write pointers to prevent their overflow. */ - while ( (s->bufioreq_handling =3D=3D HVM_IOREQSRV_BUFIOREQ_ATOMIC) && - qw++ < IOREQ_BUFFER_SLOT_NUM && - pg->ptrs.read_pointer >=3D IOREQ_BUFFER_SLOT_NUM ) - { - union bufioreq_pointers old =3D pg->ptrs, new; - unsigned int n =3D old.read_pointer / IOREQ_BUFFER_SLOT_NUM; - - new.read_pointer =3D old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; - new.write_pointer =3D old.write_pointer - n * IOREQ_BUFFER_SLOT_NU= M; - cmpxchg(&pg->ptrs.full, old.full, new.full); - } - - notify_via_xen_event_channel(d, s->bufioreq_evtchn); - spin_unlock(&s->bufioreq_lock); - - return IOREQ_STATUS_HANDLED; -} - -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered) -{ - struct vcpu *curr =3D current; - struct domain *d =3D curr->domain; - struct hvm_ioreq_vcpu *sv; - - ASSERT(s); - - if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); - - if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) - return IOREQ_STATUS_RETRY; - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu =3D=3D curr ) - { - evtchn_port_t port =3D sv->ioreq_evtchn; - ioreq_t *p =3D get_ioreq(s, curr); - - if ( unlikely(p->state !=3D STATE_IOREQ_NONE) ) - { - gprintk(XENLOG_ERR, "device model set bad IO state %d\n", - p->state); - break; - } - - if ( unlikely(p->vp_eport !=3D port) ) - { - gprintk(XENLOG_ERR, "device model set bad event channel %d= \n", - p->vp_eport); - break; - } - - proto_p->state =3D STATE_IOREQ_NONE; - proto_p->vp_eport =3D port; - *p =3D *proto_p; - - prepare_wait_on_xen_event_channel(port); - - /* - * Following happens /after/ blocking and setting up ioreq - * contents. prepare_wait_on_xen_event_channel() is an implicit - * barrier. 
- */ - p->state =3D STATE_IOREQ_READY; - notify_via_xen_event_channel(d, port); - - sv->pending =3D true; - return IOREQ_STATUS_RETRY; - } - } - - return IOREQ_STATUS_UNHANDLED; -} - -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) -{ - struct domain *d =3D current->domain; - struct hvm_ioreq_server *s; - unsigned int id, failed =3D 0; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - if ( !s->enabled ) - continue; - - if ( hvm_send_ioreq(s, p, buffered) =3D=3D IOREQ_STATUS_UNHANDLED ) - failed++; - } - - return failed; -} - static int hvm_access_cf8( int dir, unsigned int port, unsigned int bytes, uint32_t *val) { @@ -1579,13 +341,6 @@ void arch_ioreq_domain_init(struct domain *d) register_portio_handler(d, 0xcf8, 4, hvm_access_cf8); } =20 -void hvm_ioreq_init(struct domain *d) -{ - spin_lock_init(&d->arch.hvm.ioreq_server.lock); - - arch_ioreq_domain_init(d); -} - /* * Local variables: * mode: C diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index e267513..fd7cadb 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -27,10 +27,10 @@ * can have side effects. */ =20 +#include #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index 3a37e9e..0ddb6a4 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -19,10 +19,11 @@ * */ =20 +#include + #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 79acf20..f6e128e 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -100,6 +100,7 @@ */ =20 #include +#include #include #include #include @@ -140,7 +141,6 @@ #include #include #include -#include #include #include =20 diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/commo= n.c index 3298711..5012a9c 100644 --- a/xen/arch/x86/mm/shadow/common.c +++ b/xen/arch/x86/mm/shadow/common.c @@ -20,6 +20,7 @@ * along with this program; If not, see . */ =20 +#include #include #include #include @@ -34,7 +35,6 @@ #include #include #include -#include #include #include "private.h" =20 diff --git a/xen/common/Kconfig b/xen/common/Kconfig index 0661328..fa049a6 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -136,6 +136,9 @@ config HYPFS_CONFIG Disable this option in case you want to spare some memory or you want to hide the .config contents from dom0. =20 +config IOREQ_SERVER + bool + config KEXEC bool "kexec support" default y diff --git a/xen/common/Makefile b/xen/common/Makefile index 7a4e652..b161381 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -14,6 +14,7 @@ obj-$(CONFIG_GRANT_TABLE) +=3D grant_table.o obj-y +=3D guestcopy.o obj-bin-y +=3D gunzip.init.o obj-$(CONFIG_HYPFS) +=3D hypfs.o +obj-$(CONFIG_IOREQ_SERVER) +=3D ioreq.o obj-y +=3D irq.o obj-y +=3D kernel.o obj-y +=3D keyhandler.o diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c new file mode 100644 index 0000000..8a004c4 --- /dev/null +++ b/xen/common/ioreq.c @@ -0,0 +1,1290 @@ +/* + * ioreq.c: hardware virtual machine I/O emulation + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +static void set_ioreq_server(struct domain *d, unsigned int id, + struct hvm_ioreq_server *s) +{ + ASSERT(id < MAX_NR_IOREQ_SERVERS); + ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); + + d->arch.hvm.ioreq_server.server[id] =3D s; +} + +#define GET_IOREQ_SERVER(d, id) \ + (d)->arch.hvm.ioreq_server.server[id] + +static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, + unsigned int id) +{ + if ( id >=3D MAX_NR_IOREQ_SERVERS ) + return NULL; + + return GET_IOREQ_SERVER(d, id); +} + +/* + * Iterate over all possible ioreq servers. + * + * NOTE: The iteration is backwards such that more recently created + * ioreq servers are favoured in hvm_select_ioreq_server(). + * This is a semantic that previously existed when ioreq servers + * were held in a linked list. + */ +#define FOR_EACH_IOREQ_SERVER(d, id, s) \ + for ( (id) =3D MAX_NR_IOREQ_SERVERS; (id) !=3D 0; ) \ + if ( !(s =3D GET_IOREQ_SERVER(d, --(id))) ) \ + continue; \ + else + +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) +{ + shared_iopage_t *p =3D s->ioreq.va; + + ASSERT((v =3D=3D current) || !vcpu_runnable(v)); + ASSERT(p !=3D NULL); + + return &p->vcpu_ioreq[v->vcpu_id]; +} + +static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, + struct hvm_ioreq_server **s= rvp) +{ + struct domain *d =3D v->domain; + struct hvm_ioreq_server *s; + unsigned int id; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct hvm_ioreq_vcpu *sv; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu =3D=3D v && sv->pending ) + { + if ( srvp ) + *srvp =3D s; + return sv; + } + } + } + + return NULL; +} + +bool hvm_io_pending(struct vcpu *v) +{ + return get_pending_vcpu(v, NULL); +} + +static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) +{ + unsigned int prev_state =3D STATE_IOREQ_NONE; + unsigned int state =3D p->state; + uint64_t data =3D ~0; + + smp_rmb(); + + /* + * The only reason we should see this condition be false is when an + * emulator dying races with I/O being requested. 
+ */ + while ( likely(state !=3D STATE_IOREQ_NONE) ) + { + if ( unlikely(state < prev_state) ) + { + gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %= u\n", + prev_state, state); + sv->pending =3D false; + domain_crash(sv->vcpu->domain); + return false; /* bail */ + } + + switch ( prev_state =3D state ) + { + case STATE_IORESP_READY: /* IORESP_READY -> NONE */ + p->state =3D STATE_IOREQ_NONE; + data =3D p->data; + break; + + case STATE_IOREQ_READY: /* IOREQ_{READY,INPROCESS} -> IORESP_READ= Y */ + case STATE_IOREQ_INPROCESS: + wait_on_xen_event_channel(sv->ioreq_evtchn, + ({ state =3D p->state; + smp_rmb(); + state !=3D prev_state; })); + continue; + + default: + gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state); + sv->pending =3D false; + domain_crash(sv->vcpu->domain); + return false; /* bail */ + } + + break; + } + + p =3D &sv->vcpu->arch.hvm.hvm_io.io_req; + if ( hvm_ioreq_needs_completion(p) ) + p->data =3D data; + + sv->pending =3D false; + + return true; +} + +bool handle_hvm_io_completion(struct vcpu *v) +{ + struct domain *d =3D v->domain; + struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; + struct hvm_ioreq_server *s; + struct hvm_ioreq_vcpu *sv; + enum hvm_io_completion io_completion; + + if ( has_vpci(d) && vpci_process_pending(v) ) + { + raise_softirq(SCHEDULE_SOFTIRQ); + return false; + } + + sv =3D get_pending_vcpu(v, &s); + if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) + return false; + + vio->io_req.state =3D hvm_ioreq_needs_completion(&vio->io_req) ? + STATE_IORESP_READY : STATE_IOREQ_NONE; + + msix_write_completion(v); + vcpu_end_shutdown_deferral(v); + + io_completion =3D vio->io_completion; + vio->io_completion =3D HVMIO_no_completion; + + switch ( io_completion ) + { + case HVMIO_no_completion: + break; + + case HVMIO_mmio_completion: + return arch_ioreq_complete_mmio(); + + case HVMIO_pio_completion: + return handle_pio(vio->io_req.addr, vio->io_req.size, + vio->io_req.dir); + + default: + return arch_vcpu_ioreq_completion(io_completion); + } + + return true; +} + +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct page_info *page; + + if ( iorp->page ) + { + /* + * If a guest frame has already been mapped (which may happen + * on demand if hvm_get_ioreq_server_info() is called), then + * allocating a page is not permitted. + */ + if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) + return -EPERM; + + return 0; + } + + page =3D alloc_domheap_page(s->target, MEMF_no_refcount); + + if ( !page ) + return -ENOMEM; + + if ( !get_page_and_type(page, s->target, PGT_writable_page) ) + { + /* + * The domain can't possibly know about this page yet, so failure + * here is a clear indication of something fishy going on. + */ + domain_crash(s->emulator); + return -ENODATA; + } + + iorp->va =3D __map_domain_page_global(page); + if ( !iorp->va ) + goto fail; + + iorp->page =3D page; + clear_page(iorp->va); + return 0; + + fail: + put_page_alloc_ref(page); + put_page_and_type(page); + + return -ENOMEM; +} + +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp =3D buf ? 
&s->bufioreq : &s->ioreq; + struct page_info *page =3D iorp->page; + + if ( !page ) + return; + + iorp->page =3D NULL; + + unmap_domain_page_global(iorp->va); + iorp->va =3D NULL; + + put_page_alloc_ref(page); + put_page_and_type(page); +} + +bool is_ioreq_server_page(struct domain *d, const struct page_info *page) +{ + const struct hvm_ioreq_server *s; + unsigned int id; + bool found =3D false; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( (s->ioreq.page =3D=3D page) || (s->bufioreq.page =3D=3D page)= ) + { + found =3D true; + break; + } + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return found; +} + +static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, + struct hvm_ioreq_vcpu *sv) +{ + ASSERT(spin_is_locked(&s->lock)); + + if ( s->ioreq.va !=3D NULL ) + { + ioreq_t *p =3D get_ioreq(s, sv->vcpu); + + p->vp_eport =3D sv->ioreq_evtchn; + } +} + +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + int rc; + + sv =3D xzalloc(struct hvm_ioreq_vcpu); + + rc =3D -ENOMEM; + if ( !sv ) + goto fail1; + + spin_lock(&s->lock); + + rc =3D alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail2; + + sv->ioreq_evtchn =3D rc; + + if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + { + rc =3D alloc_unbound_xen_event_channel(v->domain, 0, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail3; + + s->bufioreq_evtchn =3D rc; + } + + sv->vcpu =3D v; + + list_add(&sv->list_entry, &s->ioreq_vcpu_list); + + if ( s->enabled ) + hvm_update_ioreq_evtchn(s, sv); + + spin_unlock(&s->lock); + return 0; + + fail3: + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + fail2: + spin_unlock(&s->lock); + xfree(sv); + + fail1: + return rc; +} + +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu !=3D v ) + continue; + + list_del(&sv->list_entry); + + if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + break; + } + + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv, *next; + + spin_lock(&s->lock); + + list_for_each_entry_safe ( sv, + next, + &s->ioreq_vcpu_list, + list_entry ) + { + struct vcpu *v =3D sv->vcpu; + + list_del(&sv->list_entry); + + if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + } + + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +{ + int rc; + + rc =3D hvm_alloc_ioreq_mfn(s, false); + + if ( !rc && (s->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) ) + rc =3D hvm_alloc_ioreq_mfn(s, true); + + if ( rc ) + hvm_free_ioreq_mfn(s, false); + + return rc; +} + +static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +{ + hvm_free_ioreq_mfn(s, true); + hvm_free_ioreq_mfn(s, false); +} + +static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +{ + unsigned int i; + + for ( i =3D 0; i < NR_IO_RANGE_TYPES; i++ ) + rangeset_destroy(s->range[i]); +} + +static int 
hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, + ioservid_t id) +{ + unsigned int i; + int rc; + + for ( i =3D 0; i < NR_IO_RANGE_TYPES; i++ ) + { + char *name; + + rc =3D asprintf(&name, "ioreq_server %d %s", id, + (i =3D=3D XEN_DMOP_IO_RANGE_PORT) ? "port" : + (i =3D=3D XEN_DMOP_IO_RANGE_MEMORY) ? "memory" : + (i =3D=3D XEN_DMOP_IO_RANGE_PCI) ? "pci" : + ""); + if ( rc ) + goto fail; + + s->range[i] =3D rangeset_new(s->target, name, + RANGESETF_prettyprint_hex); + + xfree(name); + + rc =3D -ENOMEM; + if ( !s->range[i] ) + goto fail; + + rangeset_limit(s->range[i], MAX_NR_IO_RANGES); + } + + return 0; + + fail: + hvm_ioreq_server_free_rangesets(s); + + return rc; +} + +static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + if ( s->enabled ) + goto done; + + arch_ioreq_server_enable(s); + + s->enabled =3D true; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + hvm_update_ioreq_evtchn(s, sv); + + done: + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) +{ + spin_lock(&s->lock); + + if ( !s->enabled ) + goto done; + + arch_ioreq_server_disable(s); + + s->enabled =3D false; + + done: + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, + struct domain *d, int bufioreq_handling, + ioservid_t id) +{ + struct domain *currd =3D current->domain; + struct vcpu *v; + int rc; + + s->target =3D d; + + get_knownalive_domain(currd); + s->emulator =3D currd; + + spin_lock_init(&s->lock); + INIT_LIST_HEAD(&s->ioreq_vcpu_list); + spin_lock_init(&s->bufioreq_lock); + + s->ioreq.gfn =3D INVALID_GFN; + s->bufioreq.gfn =3D INVALID_GFN; + + rc =3D hvm_ioreq_server_alloc_rangesets(s, id); + if ( rc ) + return rc; + + s->bufioreq_handling =3D bufioreq_handling; + + for_each_vcpu ( d, v ) + { + rc =3D hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail_add; + } + + return 0; + + fail_add: + hvm_ioreq_server_remove_all_vcpus(s); + arch_ioreq_server_unmap_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); + return rc; +} + +static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) +{ + ASSERT(!s->enabled); + hvm_ioreq_server_remove_all_vcpus(s); + + /* + * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and + * hvm_ioreq_server_free_pages() in that order. + * This is because the former will do nothing if the pages + * are not mapped, leaving the page to be freed by the latter. + * However if the pages are mapped then the former will set + * the page_info pointer to NULL, meaning the latter will do + * nothing. + */ + arch_ioreq_server_unmap_pages(s); + hvm_ioreq_server_free_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); +} + +int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id) +{ + struct hvm_ioreq_server *s; + unsigned int i; + int rc; + + if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC ) + return -EINVAL; + + s =3D xzalloc(struct hvm_ioreq_server); + if ( !s ) + return -ENOMEM; + + domain_pause(d); + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + for ( i =3D 0; i < MAX_NR_IOREQ_SERVERS; i++ ) + { + if ( !GET_IOREQ_SERVER(d, i) ) + break; + } + + rc =3D -ENOSPC; + if ( i >=3D MAX_NR_IOREQ_SERVERS ) + goto fail; + + /* + * It is safe to call set_ioreq_server() prior to + * hvm_ioreq_server_init() since the target domain is paused. 
+ */ + set_ioreq_server(d, i, s); + + rc =3D hvm_ioreq_server_init(s, d, bufioreq_handling, i); + if ( rc ) + { + set_ioreq_server(d, i, NULL); + goto fail; + } + + if ( id ) + *id =3D i; + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + return 0; + + fail: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + xfree(s); + return rc; +} + +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + domain_pause(d); + + arch_ioreq_server_destroy(s); + + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is paused. + */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + domain_unpause(d); + + xfree(s); + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + if ( ioreq_gfn || bufioreq_gfn ) + { + rc =3D arch_ioreq_server_map_pages(s); + if ( rc ) + goto out; + } + + if ( ioreq_gfn ) + *ioreq_gfn =3D gfn_x(s->ioreq.gfn); + + if ( HANDLE_BUFIOREQ(s) ) + { + if ( bufioreq_gfn ) + *bufioreq_gfn =3D gfn_x(s->bufioreq.gfn); + + if ( bufioreq_port ) + *bufioreq_port =3D s->bufioreq_evtchn; + } + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn) +{ + struct hvm_ioreq_server *s; + int rc; + + ASSERT(is_hvm_domain(d)); + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + rc =3D hvm_ioreq_server_alloc_pages(s); + if ( rc ) + goto out; + + switch ( idx ) + { + case XENMEM_resource_ioreq_server_frame_bufioreq: + rc =3D -ENOENT; + if ( !HANDLE_BUFIOREQ(s) ) + goto out; + + *mfn =3D page_to_mfn(s->bufioreq.page); + rc =3D 0; + break; + + case XENMEM_resource_ioreq_server_frame_ioreq(0): + *mfn =3D page_to_mfn(s->ioreq.page); + rc =3D 0; + break; + + default: + rc =3D -EINVAL; + break; + } + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r =3D s->range[type]; + break; + + default: + r =3D NULL; + break; + } + + rc =3D -EINVAL; + if ( !r ) + goto out; + + rc =3D 
-EEXIST; + if ( rangeset_overlaps_range(r, start, end) ) + goto out; + + rc =3D rangeset_add_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r =3D s->range[type]; + break; + + default: + r =3D NULL; + break; + } + + rc =3D -EINVAL; + if ( !r ) + goto out; + + rc =3D -ENOENT; + if ( !rangeset_contains_range(r, start, end) ) + goto out; + + rc =3D rangeset_remove_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +/* + * Map or unmap an ioreq server to specific memory type. For now, only + * HVMMEM_ioreq_server is supported, and in the future new types can be + * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And + * currently, only write operations are to be forwarded to an ioreq server. + * Support for the emulation of read operations can be added when an ioreq + * server has such requirement in the future. + */ +int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint32_t flags) +{ + struct hvm_ioreq_server *s; + int rc; + + if ( type !=3D HVMMEM_ioreq_server ) + return -EINVAL; + + if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + rc =3D arch_ioreq_server_map_mem_type(d, s, flags); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + if ( rc =3D=3D 0 ) + arch_ioreq_server_map_mem_type_completed(d, s, flags); + + return rc; +} + +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + domain_pause(d); + + if ( enabled ) + hvm_ioreq_server_enable(s); + else + hvm_ioreq_server_disable(s); + + domain_unpause(d); + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + return rc; +} + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + rc =3D hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail; + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return 0; + + fail: + while ( ++id !=3D MAX_NR_IOREQ_SERVERS ) + { + s =3D GET_IOREQ_SERVER(d, id); + + if ( !s ) + continue; + + hvm_ioreq_server_remove_vcpu(s, v); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + 
spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + hvm_ioreq_server_remove_vcpu(s, v); + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +void hvm_destroy_all_ioreq_servers(struct domain *d) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + if ( !arch_ioreq_server_destroy_all(d) ) + return; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + /* No need to domain_pause() as the domain is being torn down */ + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is being destroyed. + */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + xfree(s); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p) +{ + struct hvm_ioreq_server *s; + uint8_t type; + uint64_t addr; + unsigned int id; + + if ( !arch_ioreq_server_get_type_addr(d, p, &type, &addr) ) + return NULL; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct rangeset *r; + + if ( !s->enabled ) + continue; + + r =3D s->range[type]; + + switch ( type ) + { + unsigned long start, end; + + case XEN_DMOP_IO_RANGE_PORT: + start =3D addr; + end =3D start + p->size - 1; + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_MEMORY: + start =3D hvm_mmio_first_byte(p); + end =3D hvm_mmio_last_byte(p); + + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_PCI: + if ( rangeset_contains_singleton(r, addr >> 32) ) + { + p->type =3D IOREQ_TYPE_PCI_CONFIG; + p->addr =3D addr; + return s; + } + + break; + } + } + + return NULL; +} + +static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) +{ + struct domain *d =3D current->domain; + struct hvm_ioreq_page *iorp; + buffered_iopage_t *pg; + buf_ioreq_t bp =3D { .data =3D p->data, + .addr =3D p->addr, + .type =3D p->type, + .dir =3D p->dir }; + /* Timeoffset sends 64b data, but no address. Use two consecutive slot= s. */ + int qw =3D 0; + + /* Ensure buffered_iopage fits in a page */ + BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE); + + iorp =3D &s->bufioreq; + pg =3D iorp->va; + + if ( !pg ) + return IOREQ_STATUS_UNHANDLED; + + /* + * Return 0 for the cases we can't deal with: + * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB + * - we cannot buffer accesses to guest memory buffers, as the guest + * may expect the memory buffer to be synchronously accessed + * - the count field is usually used with data_is_ptr and since we do= n't + * support data_is_ptr we do not waste space for the count field ei= ther + */ + if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count !=3D 1) ) + return 0; + + switch ( p->size ) + { + case 1: + bp.size =3D 0; + break; + case 2: + bp.size =3D 1; + break; + case 4: + bp.size =3D 2; + break; + case 8: + bp.size =3D 3; + qw =3D 1; + break; + default: + gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); + return IOREQ_STATUS_UNHANDLED; + } + + spin_lock(&s->bufioreq_lock); + + if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=3D + (IOREQ_BUFFER_SLOT_NUM - qw) ) + { + /* The queue is full: send the iopacket through the normal path. 
*/ + spin_unlock(&s->bufioreq_lock); + return IOREQ_STATUS_UNHANDLED; + } + + pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] =3D bp; + + if ( qw ) + { + bp.data =3D p->data >> 32; + pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = =3D bp; + } + + /* Make the ioreq_t visible /before/ write_pointer. */ + smp_wmb(); + pg->ptrs.write_pointer +=3D qw ? 2 : 1; + + /* Canonicalize read/write pointers to prevent their overflow. */ + while ( (s->bufioreq_handling =3D=3D HVM_IOREQSRV_BUFIOREQ_ATOMIC) && + qw++ < IOREQ_BUFFER_SLOT_NUM && + pg->ptrs.read_pointer >=3D IOREQ_BUFFER_SLOT_NUM ) + { + union bufioreq_pointers old =3D pg->ptrs, new; + unsigned int n =3D old.read_pointer / IOREQ_BUFFER_SLOT_NUM; + + new.read_pointer =3D old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; + new.write_pointer =3D old.write_pointer - n * IOREQ_BUFFER_SLOT_NU= M; + cmpxchg(&pg->ptrs.full, old.full, new.full); + } + + notify_via_xen_event_channel(d, s->bufioreq_evtchn); + spin_unlock(&s->bufioreq_lock); + + return IOREQ_STATUS_HANDLED; +} + +int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, + bool buffered) +{ + struct vcpu *curr =3D current; + struct domain *d =3D curr->domain; + struct hvm_ioreq_vcpu *sv; + + ASSERT(s); + + if ( buffered ) + return hvm_send_buffered_ioreq(s, proto_p); + + if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) + return IOREQ_STATUS_RETRY; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu =3D=3D curr ) + { + evtchn_port_t port =3D sv->ioreq_evtchn; + ioreq_t *p =3D get_ioreq(s, curr); + + if ( unlikely(p->state !=3D STATE_IOREQ_NONE) ) + { + gprintk(XENLOG_ERR, "device model set bad IO state %d\n", + p->state); + break; + } + + if ( unlikely(p->vp_eport !=3D port) ) + { + gprintk(XENLOG_ERR, "device model set bad event channel %d= \n", + p->vp_eport); + break; + } + + proto_p->state =3D STATE_IOREQ_NONE; + proto_p->vp_eport =3D port; + *p =3D *proto_p; + + prepare_wait_on_xen_event_channel(port); + + /* + * Following happens /after/ blocking and setting up ioreq + * contents. prepare_wait_on_xen_event_channel() is an implicit + * barrier. 
+ */ + p->state =3D STATE_IOREQ_READY; + notify_via_xen_event_channel(d, port); + + sv->pending =3D true; + return IOREQ_STATUS_RETRY; + } + } + + return IOREQ_STATUS_UNHANDLED; +} + +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +{ + struct domain *d =3D current->domain; + struct hvm_ioreq_server *s; + unsigned int id, failed =3D 0; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( !s->enabled ) + continue; + + if ( hvm_send_ioreq(s, p, buffered) =3D=3D IOREQ_STATUS_UNHANDLED ) + failed++; + } + + return failed; +} + +void hvm_ioreq_init(struct domain *d) +{ + spin_lock_init(&d->arch.hvm.ioreq_server.lock); + + arch_ioreq_domain_init(d); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/iore= q.h index 0e64e76..9b2eb6f 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -19,65 +19,6 @@ #ifndef __ASM_X86_HVM_IOREQ_H__ #define __ASM_X86_HVM_IOREQ_H__ =20 -#define HANDLE_BUFIOREQ(s) \ - ((s)->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) - -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); -bool is_ioreq_server_page(struct domain *d, const struct page_info *page); - -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); -void hvm_destroy_all_ioreq_servers(struct domain *d); - -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered); -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); - -void hvm_ioreq_init(struct domain *d); - -bool arch_ioreq_complete_mmio(void); -bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); -int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s); -void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s); -void arch_ioreq_server_enable(struct hvm_ioreq_server *s); -void arch_ioreq_server_disable(struct hvm_ioreq_server *s); -void arch_ioreq_server_destroy(struct hvm_ioreq_server *s); -int arch_ioreq_server_map_mem_type(struct domain *d, - struct hvm_ioreq_server *s, - uint32_t flags); -void arch_ioreq_server_map_mem_type_completed(struct domain *d, - struct hvm_ioreq_server *s, - uint32_t flags); -bool arch_ioreq_server_destroy_all(struct domain *d); -bool arch_ioreq_server_get_type_addr(const struct domain *d, - const ioreq_t *p, - uint8_t *type, - uint64_t *addr); -void arch_ioreq_domain_init(struct domain *d); - /* This correlation must not be altered */ 
 #define IOREQ_STATUS_HANDLED       X86EMUL_OKAY
 #define IOREQ_STATUS_UNHANDLED     X86EMUL_UNHANDLEABLE
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
new file mode 100644
index 0000000..7b67950
--- /dev/null
+++ b/xen/include/xen/ioreq.h
@@ -0,0 +1,93 @@
+/*
+ * ioreq.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __XEN_IOREQ_H__
+#define __XEN_IOREQ_H__
+
+#include
+
+#define HANDLE_BUFIOREQ(s) \
+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+
+bool hvm_io_pending(struct vcpu *v);
+bool handle_hvm_io_completion(struct vcpu *v);
+bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
+
+int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
+                            ioservid_t *id);
+int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
+int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
+                              unsigned long *ioreq_gfn,
+                              unsigned long *bufioreq_gfn,
+                              evtchn_port_t *bufioreq_port);
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn);
+int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint64_t start,
+                                     uint64_t end);
+int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                         uint32_t type, uint64_t start,
+                                         uint64_t end);
+int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint32_t flags);
+int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
+                               bool enabled);
+
+int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
+void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
+void hvm_destroy_all_ioreq_servers(struct domain *d);
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p);
+int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+                   bool buffered);
+unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
+
+void hvm_ioreq_init(struct domain *d);
+
+bool arch_ioreq_complete_mmio(void);
+bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
+int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags);
+void arch_ioreq_server_map_mem_type_completed(struct domain *d,
+                                              struct hvm_ioreq_server *s,
+                                              uint32_t flags);
+bool arch_ioreq_server_destroy_all(struct domain *d);
+bool arch_ioreq_server_get_type_addr(const struct domain *d,
+                                     const ioreq_t *p,
+                                     uint8_t *type,
+                                     uint64_t *addr);
+void arch_ioreq_domain_init(struct domain *d);
+
+#endif /* __XEN_IOREQ_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
"BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.7.4 From nobody Thu Apr 25 00:25:37 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1610488411; cv=none; d=zohomail.com; s=zohoarc; b=kFz8isasHlIYcZeZqSMzljTZEhxz8ezIrkJL2Fp6Mo6SCqA2KQRmpfDVJnVE+O6bBxlyVyauzJzbduopSaZX+NzQ5AkTemI5b0zbfJSuzmR+51wwg8PRQeVYJhUgpw4MW6gGhHTDhuoL2Imn0Dg33kNDwyg305RVmK76Z5A/E+I= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1610488411; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=Ur2doLrkeRpzHl0V4SmQjygeSJuG5lvaFuMWECBZE/g=; b=QPirQ5o+bfaeW3IE4AkWdfKh2VVd5zpRFyo/mJL27K+JZKQ26C3Ww2szZ7P33BpQ+CbzOGRHiti2pBCsBdrlkXOy4p5e6A5v5JxnGW7Z9+ts2TJwLtjd+mlpmV2RIvDfVMZDwiethzYujwNI7RxN0Dt9ef/RrzrlqWNRciIly0k= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1610488411208567.71597451758; Tue, 12 Jan 2021 13:53:31 -0800 (PST) Received: from list by lists.xenproject.org with outflank-mailman.66043.117160 (Exim 4.92) (envelope-from ) id 1kzRbI-0002eu-T3; Tue, 12 Jan 2021 21:53:16 +0000 Received: by outflank-mailman (output) from mailman id 66043.117160; Tue, 12 Jan 2021 21:53:16 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kzRbI-0002em-PW; Tue, 12 Jan 2021 21:53:16 +0000 Received: by outflank-mailman (input) for mailman id 66043; Tue, 12 Jan 2021 21:53:15 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kzRbH-0002PK-Gx for xen-devel@lists.xenproject.org; Tue, 12 Jan 2021 21:53:15 +0000 Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 9aef448b-8509-4bca-94d2-3be8ac9a3e61; Tue, 12 Jan 2021 21:52:58 +0000 (UTC) Received: by mail-wr1-x432.google.com with SMTP id t16so25233wra.3 for ; Tue, 12 Jan 2021 13:52:58 -0800 (PST) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id 138sm6574053wma.41.2021.01.12.13.52.56 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 12 Jan 2021 13:52:57 -0800 (PST) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 9aef448b-8509-4bca-94d2-3be8ac9a3e61 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; 
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V4 05/24] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
Date: Tue, 12 Jan 2021 23:52:13 +0200
Message-Id: <1610488352-18494-6-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is a common feature now and this helper will be used on Arm
as is. Move it to xen/ioreq.h and remove the "hvm" prefix.

Although PIO handling on Arm is not introduced with the current series
(it will be implemented when we add support for vPCI), technically
the PIOs exist on Arm (however they are accessed the same way as MMIO)
and it would be better not to diverge now.
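To make the behaviour of the moved helper concrete, here is a minimal
stand-alone sketch of what the predicate computes. The ioreq_t below is a
mock carrying only the three fields the helper reads, and the constants are
placeholders, not the Xen public ABI:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mock of the relevant ioreq_t fields and constants (illustration only). */
enum { STATE_IOREQ_NONE, STATE_IOREQ_READY };
enum { IOREQ_TYPE_COPY, IOREQ_TYPE_PIO };
enum { IOREQ_READ, IOREQ_WRITE };

typedef struct {
    uint8_t state, type, dir, data_is_ptr;
} ioreq_t;

/* A request needs a completion step unless it is a PIO write (nothing to
 * hand back to the guest) or its payload lives behind data_is_ptr. */
static bool ioreq_needs_completion(const ioreq_t *ioreq)
{
    return ioreq->state == STATE_IOREQ_READY &&
           !ioreq->data_is_ptr &&
           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
}

int main(void)
{
    ioreq_t mmio_read = { STATE_IOREQ_READY, IOREQ_TYPE_COPY, IOREQ_READ, 0 };
    ioreq_t pio_write = { STATE_IOREQ_READY, IOREQ_TYPE_PIO, IOREQ_WRITE, 0 };

    /* Prints "MMIO read: 1, PIO write: 0". */
    printf("MMIO read: %d, PIO write: %d\n",
           ioreq_needs_completion(&mmio_read),
           ioreq_needs_completion(&pio_write));
    return 0;
}

Since the condition is type-based rather than arch-based, the same predicate
works unchanged for Arm MMIO-style accesses, which is why it can move to
common code as is.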
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"

Changes V1 -> V2:
   - remove "hvm" prefix

Changes V2 -> V3:
   - add Paul's R-b

Changes V3 -> V4:
   - add Jan's A-b
---
 xen/arch/x86/hvm/emulate.c     | 4 ++--
 xen/arch/x86/hvm/io.c          | 2 +-
 xen/common/ioreq.c             | 4 ++--
 xen/include/asm-x86/hvm/vcpu.h | 7 -------
 xen/include/xen/ioreq.h        | 7 +++++++
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 60ca465..c3487b5 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -336,7 +336,7 @@ static int hvmemul_do_io(
             rc = hvm_send_ioreq(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                 vio->io_req.state = STATE_IOREQ_NONE;
-            else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+            else if ( !ioreq_needs_completion(&vio->io_req) )
                 rc = X86EMUL_OKAY;
         }
         break;
@@ -2649,7 +2649,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     if ( rc == X86EMUL_OKAY && vio->mmio_retry )
         rc = X86EMUL_RETRY;

-    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+    if ( !ioreq_needs_completion(&vio->io_req) )
         completion = HVMIO_no_completion;
     else if ( completion == HVMIO_no_completion )
         completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 11e007d..ef8286b 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -135,7 +135,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)

     rc = hvmemul_do_pio_buffer(port, size, dir, &data);

-    if ( hvm_ioreq_needs_completion(&vio->io_req) )
+    if ( ioreq_needs_completion(&vio->io_req) )
         vio->io_completion = HVMIO_pio_completion;

     switch ( rc )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 8a004c4..47e38b6 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -160,7 +160,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     }

     p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
+    if ( ioreq_needs_completion(p) )
         p->data = data;

     sv->pending = false;
@@ -186,7 +186,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
         return false;

-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+    vio->io_req.state = ioreq_needs_completion(&vio->io_req) ?
         STATE_IORESP_READY : STATE_IOREQ_NONE;

     msix_write_completion(v);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 5ccd075..6c1feda 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,13 +91,6 @@ struct hvm_vcpu_io {
     const struct g2m_ioport *g2m_ioport;
 };

-static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
-{
-    return ioreq->state == STATE_IOREQ_READY &&
-           !ioreq->data_is_ptr &&
-           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
-}
-
 struct nestedvcpu {
     bool_t nv_guestmode; /* vcpu in guestmode? */
     void *nv_vvmcx;    /* l1 guest virtual VMCB/VMCS */
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 7b67950..750d884 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,13 @@

 #include

+static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
+{
+    return ioreq->state == STATE_IOREQ_READY &&
+           !ioreq->data_is_ptr &&
+           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
+}
+
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)

-- 
2.7.4

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V4 06/24] xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
Date: Tue, 12 Jan 2021 23:52:14 +0200
Message-Id: <1610488352-18494-7-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is a common feature now and these helpers will be used on Arm
as is. Move them to xen/ioreq.h and replace the "hvm" prefixes with
"ioreq".
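The df (direction flag) arithmetic in these helpers is easy to get wrong, so
here is a small stand-alone model of the two functions. The ioreq_t is a
mock with only the fields the helpers read, and the sample values are
invented for illustration:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t paddr_t;

/* Mock of the relevant ioreq_t fields (illustration only). */
typedef struct {
    paddr_t addr;   /* address of the first repetition */
    uint32_t size;  /* bytes per repetition */
    uint32_t count; /* number of repetitions */
    uint8_t df;     /* direction flag: 1 = descending addresses */
} ioreq_t;

/* With df set (e.g. x86 REP MOVS under EFLAGS.DF=1), 'addr' names the
 * highest repetition, so the first byte of the range lies below it. */
static paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
{
    return p->df ? p->addr - (p->count - 1ul) * p->size : p->addr;
}

static paddr_t ioreq_mmio_last_byte(const ioreq_t *p)
{
    unsigned long size = p->size;

    return p->df ? p->addr + size - 1 : p->addr + (p->count * size) - 1;
}

int main(void)
{
    /* 4 repetitions of 2 bytes starting at 0x1000, descending. */
    ioreq_t p = { .addr = 0x1000, .size = 2, .count = 4, .df = 1 };

    /* Prints "first=0xffa last=0x1001": the access spans 0xffa..0x1001. */
    printf("first=0x%llx last=0x%llx\n",
           (unsigned long long)ioreq_mmio_first_byte(&p),
           (unsigned long long)ioreq_mmio_last_byte(&p));
    return 0;
}

Bounding the whole repeated access this way is what lets hvm_mmio_accept()
and hvm_select_ioreq_server() do a single range check per request.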
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Paul Durrant
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - replace "hvm" prefix by "ioreq"

Changes V2 -> V3:
   - add Paul's R-b

Changes V3 -> V4:
   - add Jan's A-b
---
 xen/arch/x86/hvm/intercept.c |  5 +++--
 xen/arch/x86/hvm/stdvga.c    |  4 ++--
 xen/common/ioreq.c           |  4 ++--
 xen/include/asm-x86/hvm/io.h | 16 ----------------
 xen/include/xen/ioreq.h      | 16 ++++++++++++++++
 5 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index cd4c4c1..02ca3b0 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -17,6 +17,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */

+#include
 #include
 #include
 #include
@@ -34,7 +35,7 @@
 static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
                               const ioreq_t *p)
 {
-    paddr_t first = hvm_mmio_first_byte(p), last;
+    paddr_t first = ioreq_mmio_first_byte(p), last;

     BUG_ON(handler->type != IOREQ_TYPE_COPY);

@@ -42,7 +43,7 @@ static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
         return 0;

     /* Make sure the handler will accept the whole access. */
-    last = hvm_mmio_last_byte(p);
+    last = ioreq_mmio_last_byte(p);
     if ( last != first &&
          !handler->mmio.ops->check(current, last) )
         domain_crash(current->domain);
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index fd7cadb..17dee74 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -524,8 +524,8 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
      * deadlock when hvm_mmio_internal() is called from
      * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept().
      */
-    if ( (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
-         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+    if ( (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) ||
+         (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
         return 0;

     spin_lock(&s->lock);
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 47e38b6..a196e14 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -1078,8 +1078,8 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         break;

     case XEN_DMOP_IO_RANGE_MEMORY:
-        start = hvm_mmio_first_byte(p);
-        end = hvm_mmio_last_byte(p);
+        start = ioreq_mmio_first_byte(p);
+        end = ioreq_mmio_last_byte(p);

         if ( rangeset_contains_range(r, start, end) )
             return s;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 558426b..fb64294 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -40,22 +40,6 @@ struct hvm_mmio_ops {
     hvm_mmio_write_t write;
 };

-static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p)
-{
-    return unlikely(p->df) ?
-           p->addr - (p->count - 1ul) * p->size :
-           p->addr;
-}
-
-static inline paddr_t hvm_mmio_last_byte(const ioreq_t *p)
-{
-    unsigned long size = p->size;
-
-    return unlikely(p->df) ?
-           p->addr + size - 1:
-           p->addr + (p->count * size) - 1;
-}
-
 typedef int (*portio_action_t)(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val);

diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 750d884..aeea67e 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,22 @@

 #include

+static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
+{
+    return unlikely(p->df) ?
+           p->addr - (p->count - 1ul) * p->size :
+           p->addr;
+}
+
+static inline paddr_t ioreq_mmio_last_byte(const ioreq_t *p)
+{
+    unsigned long size = p->size;
+
+    return unlikely(p->df) ?
+           p->addr + size - 1:
+           p->addr + (p->count * size) - 1;
+}
+
 static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 {
     return ioreq->state == STATE_IOREQ_READY &&
-- 
2.7.4

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Wei Liu, George Dunlap, Julien Grall,
    Stefano Stabellini, Julien Grall
Subject: [PATCH V4 07/24] xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
Date: Tue, 12 Jan 2021 23:52:15 +0200
Message-Id: <1610488352-18494-8-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is a common feature now and these structs will be used on Arm
as is. Move them to xen/ioreq.h and remove the "hvm" prefixes.
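To show how the renamed structures hang together, here is a simplified,
compilable model of the per-domain server table and the lookup pattern that
get_ioreq_server()/FOR_EACH_IOREQ_SERVER() use. All types are stand-ins for
the real (much larger) Xen definitions:

#include <stdio.h>

#define MAX_NR_IOREQ_SERVERS 8

/* Stand-in for the real struct ioreq_server. */
struct ioreq_server {
    int dummy;
};

/* Stand-in for the ioreq_server sub-struct embedded in the domain. */
struct ioreq_domain {
    struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
};

/* Same shape as get_ioreq_server(): bounds check, then table lookup;
 * unused ids are simply NULL slots. */
static struct ioreq_server *get_server(const struct ioreq_domain *d,
                                       unsigned int id)
{
    return id >= MAX_NR_IOREQ_SERVERS ? NULL : d->server[id];
}

int main(void)
{
    struct ioreq_server s = { 0 };
    struct ioreq_domain d = { .server = { [3] = &s } };
    unsigned int id;

    /* Same shape as FOR_EACH_IOREQ_SERVER(): walk ids from the top,
     * skipping empty slots. */
    for ( id = MAX_NR_IOREQ_SERVERS; id != 0; )
    {
        struct ioreq_server *cur = get_server(&d, --id);

        if ( !cur )
            continue;
        printf("ioreq server registered at id %u\n", id);
    }
    return 0;
}

Because nothing in these structures is x86-specific, the rename plus the
header move is purely mechanical, which is what the large but uniform diff
below reflects.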
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "hvm" prefix

Changes V2 -> V3:
   - update patch according to the fact that the "legacy interface"
     is x86 specific

Changes V3 -> V4:
   - add Jan's A-b
---
 xen/arch/x86/hvm/emulate.c       |   2 +-
 xen/arch/x86/hvm/ioreq.c         |  38 +++++++-------
 xen/arch/x86/hvm/stdvga.c        |   2 +-
 xen/arch/x86/mm/p2m.c            |   8 +--
 xen/common/ioreq.c               | 108 +++++++++++++++++++---------------
 xen/include/asm-x86/hvm/domain.h |  36 +-----------
 xen/include/asm-x86/p2m.h        |   8 +--
 xen/include/xen/ioreq.h          |  54 ++++++++++++++----
 8 files changed, 128 insertions(+), 128 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index c3487b5..4d62199 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -287,7 +287,7 @@ static int hvmemul_do_io(
      * However, there's no cheap approach to avoid above situations in xen,
      * so the device model side needs to check the incoming ioreq event.
      */
-    struct hvm_ioreq_server *s = NULL;
+    struct ioreq_server *s = NULL;
     p2m_type_t p2mt = p2m_invalid;

     if ( is_mmio )
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 177b964..8393922 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -63,7 +63,7 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
     return true;
 }

-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -79,7 +79,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
     return INVALID_GFN;
 }

-static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -97,7 +97,7 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
     return hvm_alloc_legacy_ioreq_gfn(s);
 }

-static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
+static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
                                       gfn_t gfn)
 {
     struct domain *d = s->target;
@@ -115,7 +115,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
     return true;
 }

-static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
+static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
@@ -129,9 +129,9 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
     }
 }

-static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;

     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -143,10 +143,10 @@ static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     iorp->gfn = INVALID_GFN;
 }

-static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;

     if ( iorp->page )
@@ -179,11 +179,11 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }

-static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)

 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;

     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -194,10 +194,10 @@ static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
         clear_page(iorp->va);
 }

-static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;

     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -213,7 +213,7 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }

-int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+int arch_ioreq_server_map_pages(struct ioreq_server *s)
 {
     int rc;

@@ -228,40 +228,40 @@ int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
     return rc;
 }

-void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
 {
     hvm_unmap_ioreq_gfn(s, true);
     hvm_unmap_ioreq_gfn(s, false);
 }

-void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
+void arch_ioreq_server_enable(struct ioreq_server *s)
 {
     hvm_remove_ioreq_gfn(s, false);
     hvm_remove_ioreq_gfn(s, true);
 }

-void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
+void arch_ioreq_server_disable(struct ioreq_server *s)
 {
     hvm_add_ioreq_gfn(s, true);
     hvm_add_ioreq_gfn(s, false);
 }

 /* Called when target domain is paused */
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
+void arch_ioreq_server_destroy(struct ioreq_server *s)
 {
     p2m_set_ioreq_server(s->target, 0, s);
 }

 /* Called with ioreq_server lock held */
 int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
+                                   struct ioreq_server *s,
                                    uint32_t flags)
 {
     return p2m_set_ioreq_server(d, flags, s);
 }

 void arch_ioreq_server_map_mem_type_completed(struct domain *d,
-                                              struct hvm_ioreq_server *s,
+                                              struct ioreq_server *s,
                                               uint32_t flags)
 {
     if ( flags == 0 )
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index 17dee74..ee13449 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
         .dir = IOREQ_WRITE,
         .data = data,
     };
-    struct hvm_ioreq_server *srv;
+    struct ioreq_server *srv;

     if ( !stdvga_cache_is_enabled(s) || !s->stdvga )
         goto done;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ad4bb94..71fda06 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -372,7 +372,7 @@ void p2m_memory_type_changed(struct domain *d)

 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
-                         struct hvm_ioreq_server *s)
+                         struct ioreq_server *s)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -420,11 +420,11 @@ int p2m_set_ioreq_server(struct domain *d,
     return rc;
 }

-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags)
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;

     spin_lock(&p2m->ioreq.lock);

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 47e38b6..3f631ec 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,7 +35,7 @@
 #include

 static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
+                             struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
     ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
@@ -46,8 +46,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]

-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
+static struct ioreq_server *get_ioreq_server(const struct domain *d,
+                                             unsigned int id)
 {
     if ( id >= MAX_NR_IOREQ_SERVERS )
         return NULL;
@@ -69,7 +69,7 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
         continue; \
     else

-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
+static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
 {
     shared_iopage_t *p = s->ioreq.va;

@@ -79,16 +79,16 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
     return &p->vcpu_ioreq[v->vcpu_id];
 }

-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
+static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
+                                           struct ioreq_server **srvp)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        struct hvm_ioreq_vcpu *sv;
+        struct ioreq_vcpu *sv;

         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -111,7 +111,7 @@ bool hvm_io_pending(struct vcpu *v)
     return get_pending_vcpu(v, NULL);
 }

-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -172,8 +172,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_server *s;
+    struct ioreq_vcpu *sv;
     enum hvm_io_completion io_completion;

     if ( has_vpci(d) && vpci_process_pending(v) )
@@ -214,9 +214,9 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }

-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;

     if ( iorp->page )
@@ -262,9 +262,9 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
     return -ENOMEM;
 }

-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page = iorp->page;

     if ( !page )
@@ -281,7 +281,7 @@ static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)

 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
-    const struct hvm_ioreq_server *s;
+    const struct ioreq_server *s;
     unsigned int id;
     bool found = false;

@@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }

-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
+static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
+                                    struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));

@@ -314,13 +314,13 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }

-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
                                      struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
     int rc;

-    sv = xzalloc(struct hvm_ioreq_vcpu);
+    sv = xzalloc(struct ioreq_vcpu);

     rc = -ENOMEM;
     if ( !sv )
@@ -366,10 +366,10 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
     return rc;
 }

-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
+static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
                                          struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;

     spin_lock(&s->lock);

@@ -394,9 +394,9 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
     spin_unlock(&s->lock);
 }

-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv, *next;
+    struct ioreq_vcpu *sv, *next;

     spin_lock(&s->lock);

@@ -420,7 +420,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }

-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;

@@ -435,13 +435,13 @@ static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
     return rc;
 }

-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
 {
     hvm_free_ioreq_mfn(s, true);
     hvm_free_ioreq_mfn(s, false);
 }

-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;

@@ -449,7 +449,7 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }

-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
                                             ioservid_t id)
 {
     unsigned int i;
@@ -487,9 +487,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }

-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_enable(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;

     spin_lock(&s->lock);

@@ -509,7 +509,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }

-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);

@@ -524,7 +524,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }

-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_init(struct ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
                                  ioservid_t id)
 {
@@ -569,7 +569,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     return rc;
 }

-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
@@ -594,14 +594,14 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
                             ioservid_t *id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int i;
     int rc;

     if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         return -EINVAL;

-    s = xzalloc(struct hvm_ioreq_server);
+    s = xzalloc(struct ioreq_server);
     if ( !s )
         return -ENOMEM;

@@ -649,7 +649,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,

 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;

     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -694,7 +694,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *bufioreq_gfn,
                               evtchn_port_t *bufioreq_port)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;

     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -739,7 +739,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;

     ASSERT(is_hvm_domain(d));
@@ -791,7 +791,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;

@@ -843,7 +843,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
                                          uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;

@@ -902,7 +902,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;

     if ( type != HVMMEM_ioreq_server )
@@ -937,7 +937,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool enabled)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;

     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -970,7 +970,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,

 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
     int rc;

@@ -1005,7 +1005,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)

 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;

     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1018,7 +1018,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)

 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;

     if ( !arch_ioreq_server_destroy_all(d) )
@@ -1045,10 +1045,10 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }

-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     uint8_t type;
     uint64_t addr;
     unsigned int id;
@@ -1101,10 +1101,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     return NULL;
 }

-static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
+static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_page *iorp;
+    struct ioreq_page *iorp;
     buffered_iopage_t *pg;
     buf_ioreq_t bp = { .data = p->data,
                        .addr = p->addr,
@@ -1194,12 +1194,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     return IOREQ_STATUS_HANDLED;
 }

-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;

     ASSERT(s);

@@ -1257,7 +1257,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id, failed = 0;

     FOR_EACH_IOREQ_SERVER(d, id, s)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 9d247ba..1c4ca47 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -30,40 +30,6 @@

 #include

-struct hvm_ioreq_page {
-    gfn_t gfn;
-    struct page_info *page;
-    void *va;
-};
-
-struct hvm_ioreq_vcpu {
-    struct list_head list_entry;
-    struct vcpu *vcpu;
-    evtchn_port_t ioreq_evtchn;
-    bool pending;
-};
-
-#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES 256
-
-struct hvm_ioreq_server {
-    struct domain *target, *emulator;
-
-    /* Lock to serialize toolstack modifications */
-    spinlock_t lock;
-
-    struct hvm_ioreq_page ioreq;
-    struct list_head ioreq_vcpu_list;
-    struct hvm_ioreq_page bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t bufioreq_lock;
-    evtchn_port_t bufioreq_evtchn;
-    struct rangeset *range[NR_IO_RANGE_TYPES];
-    bool enabled;
-    uint8_t bufioreq_handling;
-};
-
 #ifdef CONFIG_MEM_SHARING
 struct mem_sharing_domain
 {
@@ -110,7 +76,7 @@ struct hvm_domain {
     /* Lock protects all other values in the sub-struct and the default */
     struct {
         spinlock_t lock;
-        struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;

     /* Cached CF8 for guest PCI config cycles */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 6447696..7df2878 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -363,7 +363,7 @@ struct p2m_domain {
      * ioreq server who's responsible for the emulation of
      * gfns with specific p2m type(for now, p2m_ioreq_server).
      */
-    struct hvm_ioreq_server *server;
+    struct ioreq_server *server;
     /*
      * flags specifies whether read, write or both operations
      * are to be emulated by an ioreq server.
@@ -937,9 +937,9 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 }

 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
-                         struct hvm_ioreq_server *s);
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags);
+                         struct ioreq_server *s);
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags);

 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
                                    p2m_type_t ot, mfn_t nfn, mfn_t ofn,
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index aeea67e..bc79c37 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,40 @@

 #include

+struct ioreq_page {
+    gfn_t gfn;
+    struct page_info *page;
+    void *va;
+};
+
+struct ioreq_vcpu {
+    struct list_head list_entry;
+    struct vcpu *vcpu;
+    evtchn_port_t ioreq_evtchn;
+    bool pending;
+};
+
+#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
+#define MAX_NR_IO_RANGES 256
+
+struct ioreq_server {
+    struct domain *target, *emulator;
+
+    /* Lock to serialize toolstack modifications */
+    spinlock_t lock;
+
+    struct ioreq_page ioreq;
+    struct list_head ioreq_vcpu_list;
+    struct ioreq_page bufioreq;
+
+    /* Lock to serialize access to buffered ioreq ring */
+    spinlock_t bufioreq_lock;
+    evtchn_port_t bufioreq_evtchn;
+    struct rangeset *range[NR_IO_RANGE_TYPES];
+    bool enabled;
+    uint8_t bufioreq_handling;
+};
+
 static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
 {
     return unlikely(p->df) ?
@@ -75,9 +109,9 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);

-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p);
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered);
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);

@@ -85,16 +119,16 @@ void hvm_ioreq_init(struct domain *d);

 bool arch_ioreq_complete_mmio(void);
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
-int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
-void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
-void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
-void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_pages(struct ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
+void arch_ioreq_server_enable(struct ioreq_server *s);
+void arch_ioreq_server_disable(struct ioreq_server *s);
+void arch_ioreq_server_destroy(struct ioreq_server *s);
 int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
+                                   struct ioreq_server *s,
                                    uint32_t flags);
 void arch_ioreq_server_map_mem_type_completed(struct domain *d,
-                                              struct hvm_ioreq_server *s,
+                                              struct ioreq_server *s,
                                               uint32_t flags);
 bool arch_ioreq_server_destroy_all(struct domain *d);
 bool arch_ioreq_server_get_type_addr(const struct domain *d,
-- 
2.7.4

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Andrew Cooper, George Dunlap,
    Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
    Roger Pau Monné, Julien Grall
Subject: [PATCH V4 08/24] xen/ioreq: Move x86's ioreq_server to struct domain
Date: Tue, 12 Jan 2021 23:52:16 +0200
Message-Id: <1610488352-18494-9-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is a common feature now and this struct will be used on Arm
as is. Move it to the common struct domain. This also significantly
reduces the layering violation in the common code (*arch.hvm* usage).

We don't move ioreq_gfn since it is not used in the common code
(the "legacy" mechanism is x86 specific).
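A toy model of the layering change may help readers who are not staring at
the tree. Everything below is a stand-in (plain ints instead of spinlock_t,
a hand-rolled CONFIG macro), not the Xen definitions; the point is only
where the sub-struct lives after this patch:

#include <stdio.h>

#define CONFIG_IOREQ_SERVER 1   /* pretend CONFIG_IOREQ_SERVER=y */
#define MAX_NR_IOREQ_SERVERS 8

struct ioreq_server;            /* opaque: common code needs no details */

/* After this patch the IOREQ bookkeeping sits in the arch-neutral domain
 * struct, guarded by the Kconfig option, so common code never has to
 * reach through d->arch.hvm.* to find it. */
struct domain {
    unsigned short domain_id;
#ifdef CONFIG_IOREQ_SERVER
    struct {
        int lock;               /* stand-in for spinlock_t */
        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
    } ioreq_server;
#endif
};

int main(void)
{
    struct domain d = { .domain_id = 1 };

    /* What common code writes now, vs. d.arch.hvm.ioreq_server.lock: */
    d.ioreq_server.lock = 0;
    printf("d%u: %d ioreq server slots\n", d.domain_id,
           MAX_NR_IOREQ_SERVERS);
    return 0;
}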
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - remove the mention of "ioreq_gfn" from patch subject/description
   - update patch according to the fact that the "legacy interface"
     is x86 specific
   - drop hvm_params related changes in arch/x86/hvm/hvm.c
   - leave ioreq_gfn in hvm_domain

Changes V3 -> V4:
   - rebase
   - drop the stale part of the comment above struct ioreq_server
   - add Jan's A-b
---
 xen/common/ioreq.c               | 60 ++++++++++++++++++----------------
 xen/include/asm-x86/hvm/domain.h |  8 ------
 xen/include/xen/sched.h          | 10 +++++++
 3 files changed, 40 insertions(+), 38 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 3f631ec..a319c88 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -38,13 +38,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    ASSERT(!s || !d->ioreq_server.server[id]);

-    d->arch.hvm.ioreq_server.server[id] = s;
+    d->ioreq_server.server[id] = s;
 }

 #define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
+    (d)->ioreq_server.server[id]

 static struct ioreq_server *get_ioreq_server(const struct domain *d,
                                              unsigned int id)
@@ -285,7 +285,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     unsigned int id;
     bool found = false;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -296,7 +296,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         }
     }

-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);

     return found;
 }
@@ -606,7 +606,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
         return -ENOMEM;

     domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
     {
@@ -634,13 +634,13 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     if ( id )
         *id = i;

-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);

     return 0;

  fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);

     xfree(s);
@@ -652,7 +652,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     struct ioreq_server *s;
     int rc;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     s = get_ioreq_server(d, id);

@@ -684,7 +684,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     rc = 0;

  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);

     return rc;
 }
@@ -697,7 +697,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     s = get_ioreq_server(d, id);

@@ -731,7 +731,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     rc = 0;

  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);

     return rc;
 }
@@ -744,7 +744,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,

     ASSERT(is_hvm_domain(d));

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     s = get_ioreq_server(d, id);

@@ -782,7 +782,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     }

  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);

     return rc;
 }
@@ -798,7 +798,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     s = get_ioreq_server(d, id);

@@ -834,7 +834,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_add_range(r, start, end);

  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);

     return rc;
 }
@@ -850,7 +850,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     s = get_ioreq_server(d, id);

@@ -886,7 +886,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_remove_range(r, start, end);

  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);

     return rc;
 }
@@ -911,7 +911,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
         return -EINVAL;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     s = get_ioreq_server(d, id);

@@ -926,7 +926,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = arch_ioreq_server_map_mem_type(d, s, flags);

  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);

     if ( rc == 0 )
         arch_ioreq_server_map_mem_type_completed(d, s, flags);
@@ -940,7 +940,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     s = get_ioreq_server(d, id);

@@ -964,7 +964,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     rc = 0;

  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     return rc;
 }

@@ -974,7 +974,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     unsigned int id;
     int rc;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -983,7 +983,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
             goto fail;
     }

-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);

     return 0;

@@ -998,7 +998,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         hvm_ioreq_server_remove_vcpu(s, v);
     }

-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
spin_unlock_recursive(&d->ioreq_server.lock); =20 return rc; } @@ -1008,12 +1008,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domai= n *d, struct vcpu *v) struct ioreq_server *s; unsigned int id; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 FOR_EACH_IOREQ_SERVER(d, id, s) hvm_ioreq_server_remove_vcpu(s, v); =20 - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); } =20 void hvm_destroy_all_ioreq_servers(struct domain *d) @@ -1024,7 +1024,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) if ( !arch_ioreq_server_destroy_all(d) ) return; =20 - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); =20 /* No need to domain_pause() as the domain is being torn down */ =20 @@ -1042,7 +1042,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) xfree(s); } =20 - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); } =20 struct ioreq_server *hvm_select_ioreq_server(struct domain *d, @@ -1274,7 +1274,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buf= fered) =20 void hvm_ioreq_init(struct domain *d) { - spin_lock_init(&d->arch.hvm.ioreq_server.lock); + spin_lock_init(&d->ioreq_server.lock); =20 arch_ioreq_domain_init(d); } diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/dom= ain.h index 1c4ca47..b8be1ad 100644 --- a/xen/include/asm-x86/hvm/domain.h +++ b/xen/include/asm-x86/hvm/domain.h @@ -63,8 +63,6 @@ struct hvm_pi_ops { void (*vcpu_block)(struct vcpu *); }; =20 -#define MAX_NR_IOREQ_SERVERS 8 - struct hvm_domain { /* Guest page range used for non-default ioreq servers */ struct { @@ -73,12 +71,6 @@ struct hvm_domain { unsigned long legacy_mask; /* indexed by HVM param number */ } ioreq_gfn; =20 - /* Lock protects all other values in the sub-struct and the default */ - struct { - spinlock_t lock; - struct ioreq_server *server[MAX_NR_IOREQ_SERVERS]; - } ioreq_server; - /* Cached CF8 for guest PCI config cycles */ uint32_t pci_cf8; =20 diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 3e46384..ad0d761 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -318,6 +318,8 @@ struct sched_unit { =20 struct evtchn_port_ops; =20 +#define MAX_NR_IOREQ_SERVERS 8 + struct domain { domid_t domain_id; @@ -533,6 +535,14 @@ struct domain struct { unsigned int val; } teardown; + +#ifdef CONFIG_IOREQ_SERVER + /* Lock protects all other values in the sub-struct */ + struct { + spinlock_t lock; + struct ioreq_server *server[MAX_NR_IOREQ_SERVERS]; + } ioreq_server; +#endif }; =20 static inline struct page_list_head *page_to_list( --=20 2.7.4 From nobody Thu Apr 25 00:25:37 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1610488775; cv=none; d=zohomail.com; s=zohoarc; 
b=a0T2ZoPTyQBzibD2qae15JsvO2qi3vbaeWWJglnGTYe7MMbGadphF93kpIuifQZB8rIDt8L3uMsoFiUN4cFJ20+1IkfsyMnoPSTM4P5RjIkKnBNoJqSzKTcnu6xhNNTazc+nrM8089mT0DDKDL9cxcnRH950WCCCFuCCrUwEFo8= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1610488775; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=V6c4AwWccq5AwOz6uz4L90kbcmiGc+PgMzq6WugoC5U=; b=hr4572jBnZ9sJ7kOpcEmHSVzpTqq7eb5gsE3RUPYxIGI6gViG4x4DLB173NsEqlsXeWhjPbjwx6/xnGfAl0d2mFrj8XlV6DVkZVVUKxEHnM5Y8+6UGt8LA/cRKwuAylWcezqYWzHraK2fSvn33CKtgNvmSMH2mbrQM2QbPQO/SQ= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1610488775243974.0742347855545; Tue, 12 Jan 2021 13:59:35 -0800 (PST) Received: from list by lists.xenproject.org with outflank-mailman.66099.117375 (Exim 4.92) (envelope-from ) id 1kzRh6-0004Ti-Ib; Tue, 12 Jan 2021 21:59:16 +0000 Received: by outflank-mailman (output) from mailman id 66099.117375; Tue, 12 Jan 2021 21:59:16 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kzRh6-0004T1-6c; Tue, 12 Jan 2021 21:59:16 +0000 Received: by outflank-mailman (input) for mailman id 66099; Tue, 12 Jan 2021 21:59:14 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kzRbl-0002PK-Hk for xen-devel@lists.xenproject.org; Tue, 12 Jan 2021 21:53:45 +0000 Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 72c4ed1e-048c-43a9-9c5d-6ecfae4a8588; Tue, 12 Jan 2021 21:53:03 +0000 (UTC) Received: by mail-wm1-x32f.google.com with SMTP id r4so3488392wmh.5 for ; Tue, 12 Jan 2021 13:53:03 -0800 (PST) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id 138sm6574053wma.41.2021.01.12.13.53.01 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 12 Jan 2021 13:53:01 -0800 (PST) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 72c4ed1e-048c-43a9-9c5d-6ecfae4a8588 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=V6c4AwWccq5AwOz6uz4L90kbcmiGc+PgMzq6WugoC5U=; b=GYeATYSOQwWksPAySe1H1S29HcnzA2clzVyjNIEXNg/ld4QdN6BZu7O9thY8l+tVcb 041KJpqVsq8JIqc2zhhXbru9uf1g7FAFgM2kX+zDKXwdY5zcIri+xSGPAyC+R5WNvsZV /naOOaKk6gJWv8yCstequoUagm+kbB16lv5Jyof25OPJpkyEOp3cpTEEr/HpJox2SXo/ 3/Uu3W97qgHsD7s609kJuvfhhpT0+Yd0+Fs520+Rsbp0gz9sC9/Eq0souJGdLe+/OJRR 81+EjY+681/kFbc6x/YiiDm30UbV0wUNJ7gDFMzRyCDxPEP0FeXMnRMt5EY50KbFCGhI RZcQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=V6c4AwWccq5AwOz6uz4L90kbcmiGc+PgMzq6WugoC5U=; 
b=afb+0h7qnYIus/7X99zRETrJkFSu7mLHrAGaR8icEhdWwrKxnzMhcmBPJysgQVunfD
 QMBH5LhnXVr2iMOrcS2B8HDIq5K3/ZHngxxT2xNyJGSrElVYUYjZiqUbNfEogUhytock
 BG43FDmfxGKXsvvSa4jwUZacM6u13Q2iTINga/X8+oaTUJgZ8BN4LVlGVXvxVZ4cdi90
 iu2BKQOy31JMxkoAK9LQSDggjKmxmMvhyIKhJhnc/OWaWmseOzTA1kUbMSOpCjVIvhxb
 na2onF7hiGHASg7idxivhgGLwaMfjyFjeCijRYbFIZsPYHmTFtoOhxzTqusPUE0WQ0Lq
 C07Q==
X-Gm-Message-State: AOAM533sJphhcf497dYajQuMHOZw9ETnU3VsoPL0N4F4PPRM+VKMIBqw
 0NGJeSe4gpZ3eVj0MVUD1/BiK1FTUmoiNg==
X-Google-Smtp-Source: ABdhPJx/XAwMEQPQNZ6o7RiqFLm8QKpxBPka5y6cvywGuBqRWfsQKqPFC/cp+ZbQ0Wtw2OeY47dqtA==
X-Received: by 2002:a7b:c1c6:: with SMTP id a6mr1125167wmj.23.1610488382168;
 Tue, 12 Jan 2021 13:53:02 -0800 (PST)
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Jan Beulich, Andrew Cooper,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=, Wei Liu, George Dunlap, Ian Jackson,
 Julien Grall, Stefano Stabellini, Paul Durrant, Daniel De Graaf,
 Oleksandr Tyshchenko
Subject: [PATCH V4 09/24] xen/ioreq: Make x86's IOREQ related dm-op handling common
Date: Tue, 12 Jan 2021 23:52:17 +0200
Message-Id: <1610488352-18494-10-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
X-ZohoMail-DKIM: pass (identity @gmail.com)
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Julien Grall

As a lot of x86 code can be re-used on Arm later on, this patch moves
the IOREQ related dm-op handling to the common code.

The idea is to have the top-level dm-op handling arch-specific and call
into ioreq_server_dm_op() for otherwise unhandled ops.

Pros:
- More natural than doing it the other way around (top-level dm-op
  handling common).
- Leaves compat_dm_op() in x86 code.

Cons:
- Code duplication. Both arches have to duplicate do_dm_op(), etc.

Also update the XSM code a bit to let dm-op be used on Arm.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
[On Arm only]
Tested-by: Wei Chen
Acked-by: Jan Beulich

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
I decided to keep the common dm.h to hold the struct dmop_args
declaration (to be included by Arm's dm.c); alternatively, we could have
avoided introducing a new header by moving the declaration into an
existing one, but I failed to find a suitable header whose context would
fit.
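To illustrate, a short hypothetical sketch of the resulting dispatch
flow follows. Only ioreq_server_dm_op() and its signature come from this
patch; the surrounding handler and the arch-specific case shown are
illustrative placeholders, not the actual x86 or Arm code.

    /*
     * Hypothetical arch-side dispatcher (assumes Xen-internal types
     * such as struct xen_dm_op and struct domain are in scope).
     */
    static int arch_dm_op(struct xen_dm_op *op, struct domain *d,
                          bool *const_op)
    {
        int rc;

        switch ( op->op )
        {
        case XEN_DMOP_track_dirty_vram:
            /* ... arch-specific ops keep their handling in arch code ... */
            rc = 0;
            break;

        default:
            /*
             * The common handler services the IOREQ server ops moved by
             * this patch and returns -EOPNOTSUPP for ops it does not know.
             */
            rc = ioreq_server_dm_op(op, d, const_op);
            break;
        }

        return rc;
    }

One consequence of this direction is that an op which exists on only
one architecture never needs a stub on the other.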
*** Changes RFC -> V1: - update XSM, related changes were pulled from: [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/D= M features Changes V1 -> V2: - update the author of a patch - update patch description - introduce xen/dm.h and move definitions here Changes V2 -> V3: - no changes Changes V3 -> V4: - rework to have the top level dm-op handling arch-specific - update patch subject/description, was "xen/dm: Make x86's DM feature c= ommon" - make a few functions static in common ioreq.c --- xen/arch/x86/hvm/dm.c | 101 +----------------------------------- xen/common/ioreq.c | 135 ++++++++++++++++++++++++++++++++++++++++++--= ---- xen/include/xen/dm.h | 39 ++++++++++++++ xen/include/xen/ioreq.h | 17 +----- xen/include/xsm/dummy.h | 4 +- xen/include/xsm/xsm.h | 6 +-- xen/xsm/dummy.c | 2 +- xen/xsm/flask/hooks.c | 5 +- 8 files changed, 171 insertions(+), 138 deletions(-) create mode 100644 xen/include/xen/dm.h diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c index d3e2a9e..dc8e47d 100644 --- a/xen/arch/x86/hvm/dm.c +++ b/xen/arch/x86/hvm/dm.c @@ -16,6 +16,7 @@ =20 #include #include +#include #include #include #include @@ -29,13 +30,6 @@ =20 #include =20 -struct dmop_args { - domid_t domid; - unsigned int nr_bufs; - /* Reserve enough buf elements for all current hypercalls. */ - struct xen_dm_op_buf buf[2]; -}; - static bool _raw_copy_from_guest_buf_offset(void *dst, const struct dmop_args *args, unsigned int buf_idx, @@ -408,71 +402,6 @@ static int dm_op(const struct dmop_args *op_args) =20 switch ( op.op ) { - case XEN_DMOP_create_ioreq_server: - { - struct xen_dm_op_create_ioreq_server *data =3D - &op.u.create_ioreq_server; - - const_op =3D false; - - rc =3D -EINVAL; - if ( data->pad[0] || data->pad[1] || data->pad[2] ) - break; - - rc =3D hvm_create_ioreq_server(d, data->handle_bufioreq, - &data->id); - break; - } - - case XEN_DMOP_get_ioreq_server_info: - { - struct xen_dm_op_get_ioreq_server_info *data =3D - &op.u.get_ioreq_server_info; - const uint16_t valid_flags =3D XEN_DMOP_no_gfns; - - const_op =3D false; - - rc =3D -EINVAL; - if ( data->flags & ~valid_flags ) - break; - - rc =3D hvm_get_ioreq_server_info(d, data->id, - (data->flags & XEN_DMOP_no_gfns) ? - NULL : &data->ioreq_gfn, - (data->flags & XEN_DMOP_no_gfns) ? 
- NULL : &data->bufioreq_gfn, - &data->bufioreq_port); - break; - } - - case XEN_DMOP_map_io_range_to_ioreq_server: - { - const struct xen_dm_op_ioreq_server_range *data =3D - &op.u.map_io_range_to_ioreq_server; - - rc =3D -EINVAL; - if ( data->pad ) - break; - - rc =3D hvm_map_io_range_to_ioreq_server(d, data->id, data->type, - data->start, data->end); - break; - } - - case XEN_DMOP_unmap_io_range_from_ioreq_server: - { - const struct xen_dm_op_ioreq_server_range *data =3D - &op.u.unmap_io_range_from_ioreq_server; - - rc =3D -EINVAL; - if ( data->pad ) - break; - - rc =3D hvm_unmap_io_range_from_ioreq_server(d, data->id, data->typ= e, - data->start, data->end); - break; - } - case XEN_DMOP_map_mem_type_to_ioreq_server: { struct xen_dm_op_map_mem_type_to_ioreq_server *data =3D @@ -523,32 +452,6 @@ static int dm_op(const struct dmop_args *op_args) break; } =20 - case XEN_DMOP_set_ioreq_server_state: - { - const struct xen_dm_op_set_ioreq_server_state *data =3D - &op.u.set_ioreq_server_state; - - rc =3D -EINVAL; - if ( data->pad ) - break; - - rc =3D hvm_set_ioreq_server_state(d, data->id, !!data->enabled); - break; - } - - case XEN_DMOP_destroy_ioreq_server: - { - const struct xen_dm_op_destroy_ioreq_server *data =3D - &op.u.destroy_ioreq_server; - - rc =3D -EINVAL; - if ( data->pad ) - break; - - rc =3D hvm_destroy_ioreq_server(d, data->id); - break; - } - case XEN_DMOP_track_dirty_vram: { const struct xen_dm_op_track_dirty_vram *data =3D @@ -703,7 +606,7 @@ static int dm_op(const struct dmop_args *op_args) } =20 default: - rc =3D -EOPNOTSUPP; + rc =3D ioreq_server_dm_op(&op, d, &const_op); break; } =20 diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index a319c88..72b5da0 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -591,8 +591,8 @@ static void hvm_ioreq_server_deinit(struct ioreq_server= *s) put_domain(s->emulator); } =20 -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id) +static int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id) { struct ioreq_server *s; unsigned int i; @@ -647,7 +647,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufio= req_handling, return rc; } =20 -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +static int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) { struct ioreq_server *s; int rc; @@ -689,10 +689,10 @@ int hvm_destroy_ioreq_server(struct domain *d, ioserv= id_t id) return rc; } =20 -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port) +static int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) { struct ioreq_server *s; int rc; @@ -787,9 +787,9 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioserv= id_t id, return rc; } =20 -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) +static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t i= d, + uint32_t type, uint64_t start, + uint64_t end) { struct ioreq_server *s; struct rangeset *r; @@ -839,9 +839,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, = ioservid_t id, return rc; } =20 -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) +static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid= _t id, + 
uint32_t type, uint64_t st= art, + uint64_t end) { struct ioreq_server *s; struct rangeset *r; @@ -934,8 +934,8 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, = ioservid_t id, return rc; } =20 -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled) +static int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled) { struct ioreq_server *s; int rc; @@ -1279,6 +1279,111 @@ void hvm_ioreq_init(struct domain *d) arch_ioreq_domain_init(d); } =20 +int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const= _op) +{ + long rc; + + switch ( op->op ) + { + case XEN_DMOP_create_ioreq_server: + { + struct xen_dm_op_create_ioreq_server *data =3D + &op->u.create_ioreq_server; + + *const_op =3D false; + + rc =3D -EINVAL; + if ( data->pad[0] || data->pad[1] || data->pad[2] ) + break; + + rc =3D hvm_create_ioreq_server(d, data->handle_bufioreq, + &data->id); + break; + } + + case XEN_DMOP_get_ioreq_server_info: + { + struct xen_dm_op_get_ioreq_server_info *data =3D + &op->u.get_ioreq_server_info; + const uint16_t valid_flags =3D XEN_DMOP_no_gfns; + + *const_op =3D false; + + rc =3D -EINVAL; + if ( data->flags & ~valid_flags ) + break; + + rc =3D hvm_get_ioreq_server_info(d, data->id, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->iore= q_gfn, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->bufi= oreq_gfn, + &data->bufioreq_port); + break; + } + + case XEN_DMOP_map_io_range_to_ioreq_server: + { + const struct xen_dm_op_ioreq_server_range *data =3D + &op->u.map_io_range_to_ioreq_server; + + rc =3D -EINVAL; + if ( data->pad ) + break; + + rc =3D hvm_map_io_range_to_ioreq_server(d, data->id, data->type, + data->start, data->end); + break; + } + + case XEN_DMOP_unmap_io_range_from_ioreq_server: + { + const struct xen_dm_op_ioreq_server_range *data =3D + &op->u.unmap_io_range_from_ioreq_server; + + rc =3D -EINVAL; + if ( data->pad ) + break; + + rc =3D hvm_unmap_io_range_from_ioreq_server(d, data->id, data->typ= e, + data->start, data->end); + break; + } + + case XEN_DMOP_set_ioreq_server_state: + { + const struct xen_dm_op_set_ioreq_server_state *data =3D + &op->u.set_ioreq_server_state; + + rc =3D -EINVAL; + if ( data->pad ) + break; + + rc =3D hvm_set_ioreq_server_state(d, data->id, !!data->enabled); + break; + } + + case XEN_DMOP_destroy_ioreq_server: + { + const struct xen_dm_op_destroy_ioreq_server *data =3D + &op->u.destroy_ioreq_server; + + rc =3D -EINVAL; + if ( data->pad ) + break; + + rc =3D hvm_destroy_ioreq_server(d, data->id); + break; + } + + default: + rc =3D -EOPNOTSUPP; + break; + } + + return rc; +} + /* * Local variables: * mode: C diff --git a/xen/include/xen/dm.h b/xen/include/xen/dm.h new file mode 100644 index 0000000..2c9952d --- /dev/null +++ b/xen/include/xen/dm.h @@ -0,0 +1,39 @@ +/* + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . 
+ */ + +#ifndef __XEN_DM_H__ +#define __XEN_DM_H__ + +#include + +struct dmop_args { + domid_t domid; + unsigned int nr_bufs; + /* Reserve enough buf elements for all current hypercalls. */ + struct xen_dm_op_buf buf[2]; +}; + +#endif /* __XEN_DM_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index bc79c37..7a90873 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -85,25 +85,10 @@ bool hvm_io_pending(struct vcpu *v); bool handle_hvm_io_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page); =20 -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port); int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, uint32_t type, uint32_t flags); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); =20 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); @@ -117,6 +102,8 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffe= red); =20 void hvm_ioreq_init(struct domain *d); =20 +int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const= _op); + bool arch_ioreq_complete_mmio(void); bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); int arch_ioreq_server_map_pages(struct ioreq_server *s); diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h index 7ae3c40..5c61d8e 100644 --- a/xen/include/xsm/dummy.h +++ b/xen/include/xsm/dummy.h @@ -707,14 +707,14 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG str= uct domain *d, unsigned int } } =20 +#endif /* CONFIG_X86 */ + static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d) { XSM_ASSERT_ACTION(XSM_DM_PRIV); return xsm_default_action(action, current->domain, d); } =20 -#endif /* CONFIG_X86 */ - #ifdef CONFIG_ARGO static XSM_INLINE int xsm_argo_enable(const struct domain *d) { diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h index 7bd03d8..91ecff4 100644 --- a/xen/include/xsm/xsm.h +++ b/xen/include/xsm/xsm.h @@ -176,8 +176,8 @@ struct xsm_operations { int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, ui= nt8_t allow); int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8= _t allow); int (*pmu_op) (struct domain *d, unsigned int op); - int (*dm_op) (struct domain *d); #endif + int (*dm_op) (struct domain *d); int (*xen_version) (uint32_t cmd); int (*domain_resource_map) (struct domain *d); #ifdef CONFIG_ARGO @@ -682,13 +682,13 @@ static inline int xsm_pmu_op (xsm_default_t def, stru= ct domain *d, unsigned int return xsm_ops->pmu_op(d, op); } =20 +#endif /* CONFIG_X86 */ + static inline int xsm_dm_op(xsm_default_t def, struct domain *d) { return xsm_ops->dm_op(d); } =20 -#endif /* CONFIG_X86 */ - static inline 
int xsm_xen_version (xsm_default_t def, uint32_t op) { return xsm_ops->xen_version(op); diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c index 9e09512..8bdffe7 100644 --- a/xen/xsm/dummy.c +++ b/xen/xsm/dummy.c @@ -147,8 +147,8 @@ void __init xsm_fixup_ops (struct xsm_operations *ops) set_to_dummy_if_null(ops, ioport_permission); set_to_dummy_if_null(ops, ioport_mapping); set_to_dummy_if_null(ops, pmu_op); - set_to_dummy_if_null(ops, dm_op); #endif + set_to_dummy_if_null(ops, dm_op); set_to_dummy_if_null(ops, xen_version); set_to_dummy_if_null(ops, domain_resource_map); #ifdef CONFIG_ARGO diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c index 19b0d9e..11784d7 100644 --- a/xen/xsm/flask/hooks.c +++ b/xen/xsm/flask/hooks.c @@ -1656,14 +1656,13 @@ static int flask_pmu_op (struct domain *d, unsigned= int op) return -EPERM; } } +#endif /* CONFIG_X86 */ =20 static int flask_dm_op(struct domain *d) { return current_has_perm(d, SECCLASS_HVM, HVM__DM); } =20 -#endif /* CONFIG_X86 */ - static int flask_xen_version (uint32_t op) { u32 dsid =3D domain_sid(current->domain); @@ -1865,8 +1864,8 @@ static struct xsm_operations flask_ops =3D { .ioport_permission =3D flask_ioport_permission, .ioport_mapping =3D flask_ioport_mapping, .pmu_op =3D flask_pmu_op, - .dm_op =3D flask_dm_op, #endif + .dm_op =3D flask_dm_op, .xen_version =3D flask_xen_version, .domain_resource_map =3D flask_domain_resource_map, #ifdef CONFIG_ARGO --=20 2.7.4 From nobody Thu Apr 25 00:25:37 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1610488771; cv=none; d=zohomail.com; s=zohoarc; b=EwwykX++nafhm7UNtrB5he+KrY5oX9KLnwUK5t2c4xoSME255CpeA3D3FWm0FpO4QoQukBsxktJQHA+wz4P+b1bD/HsoJ4QrgLVtVcgKIuCeZ6gtGozJ48FT6bwraOABoZ2cWpUcd4kbY4CyzE7zoX4Zm+za+lLRxmD55E2h5Rk= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1610488771; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=FKttzyrlhs9r0a6Bqtfw7wbziiG+iHahebfFYHiHG7M=; b=ZjxT/VUrUr3yM/S2uOMRNyuObw5wbA6XyrBW27okiYs6NaXNLwgH9SrJGS8TAya916j2lwUnanPEzp5U6ldU/nhYDTo8QAHX9x6JrFEz5ecURogrwra1Km30cYQleScxEUCQjgEQrXDc/NxDp3DCGfz0eS82mfBoNYufxSWkTUE= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1610488771944360.2672307519392; Tue, 12 Jan 2021 13:59:31 -0800 (PST) Received: from list by lists.xenproject.org with outflank-mailman.66094.117350 (Exim 4.92) (envelope-from ) id 1kzRh0-00048Y-4V; Tue, 12 Jan 2021 21:59:10 +0000 Received: by outflank-mailman (output) from mailman id 66094.117350; Tue, 12 Jan 2021 21:59:09 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) 
(envelope-from ) id 1kzRgy-00047D-Vt; Tue, 12 Jan 2021 21:59:08 +0000 Received: by outflank-mailman (input) for mailman id 66094; Tue, 12 Jan 2021 21:59:07 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kzRbv-0002PK-IB for xen-devel@lists.xenproject.org; Tue, 12 Jan 2021 21:53:55 +0000 Received: from mail-wm1-x32b.google.com (unknown [2a00:1450:4864:20::32b]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 88aa00d4-4797-4f33-89e1-a41385444d37; Tue, 12 Jan 2021 21:53:04 +0000 (UTC) Received: by mail-wm1-x32b.google.com with SMTP id g8so3232504wme.1 for ; Tue, 12 Jan 2021 13:53:04 -0800 (PST) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id 138sm6574053wma.41.2021.01.12.13.53.02 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 12 Jan 2021 13:53:02 -0800 (PST) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 88aa00d4-4797-4f33-89e1-a41385444d37 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=FKttzyrlhs9r0a6Bqtfw7wbziiG+iHahebfFYHiHG7M=; b=QY5gbz8oh+FFv7V8vWaRviWHfOCxVRY9/zex0CCji7gzYqZTK6pB674J8rGPC+7Llk G8rO25rXp67hYveQYCC7kU6IiSGRytxJ28AW0s37Vx4vBVLkexkk+N+k/yatzwDDwPzx BmBdZHQPZjIFGd4FAQyFhToD/h9PWuwExLWqJsFlfcdSBG96uy/CApkxLBWmqf75/hyh xZJ49rDLK29VFsX+lOc/RVdqSexSZSxd4O5d2d4KZdSMusNwAhOtNJaQ/9bIj8eNwzDj DHw/Ui0a6tXxrLvyAxHB+NM+0NhdWmx35RO4OhU+E2+Iay+mo9AYg/Etc2fhi3Ghuf4o Pm3Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=FKttzyrlhs9r0a6Bqtfw7wbziiG+iHahebfFYHiHG7M=; b=ohUWTc7eE0KVSQSS0P4UI6AisRefi//NgoB0W1UJMopJ+BMbxwWYzxC+EDO95cx13m geTFu2o4dMnDNF1q0vtpsSYANf5R58Lahv9+fNRLkxKoWANCfLh5GhqEZRveXcD9boSW bf1/KdNx5hDB8yl3qctXaD6PzbQuvRp8du7BPtM5zykEW2T8SCP469x9JSz/eVGSu/58 vL2kTzFRVx/45HzKSoq1ZoyT7d2B6AlLwG4nZVzWOrzo9Zconw0li+cEvV+hOF3voF3W +FnE+oqD6y8/DM+P29cznIqOZSGHzXexQL27ClgEoGUTMZPtX4Vg/UnHVqN7V+5pIHMI Nf9g== X-Gm-Message-State: AOAM530pWlaErp9Ah65FRl/+dstGSQuw/OQTtZumIGJbpShJ4cM+rmCG rKKyo96tZGdAaf+68NMAE+D8N5lMxSWZeQ== X-Google-Smtp-Source: ABdhPJyRGmvztBY1ZUv8thskAilUJB9oOkmDa2dwhfjO88GtENj/dlpBZBDTz8jMyUoaOyFg41HOJA== X-Received: by 2002:a1c:7c19:: with SMTP id x25mr1177518wmc.94.1610488383522; Tue, 12 Jan 2021 13:53:03 -0800 (PST) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Wei Liu , George Dunlap , Ian Jackson , Julien Grall , Stefano Stabellini , Jun Nakajima , Kevin Tian , Julien Grall Subject: [PATCH V4 10/24] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu Date: Tue, 12 Jan 2021 23:52:18 +0200 Message-Id: <1610488352-18494-11-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com> References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com> X-ZohoMail-DKIM: pass (identity @gmail.com) Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Oleksandr 
Tyshchenko The IOREQ is a common feature now and these fields will be used on Arm as is. Move them to common struct vcpu as a part of new struct vcpu_io and drop duplicating "io" prefixes. Also move enum hvm_io_completion to xen/sched.h and remove "hvm" prefixes. This patch completely removes layering violation in the common code. Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall [On Arm only] Tested-by: Wei Chen Acked-by: Jan Beulich Reviewed-by: Julien Grall Reviewed-by: Paul Durrant --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes V1 -> V2: - new patch Changes V2 -> V3: - update patch according the "legacy interface" is x86 specific - update patch description - drop the "io" prefixes from the field names - wrap IO_realmode_completion Changes V3 -> V4: - rename all hvm_vcpu_io locals to "hvio" - rename according to the new renaming scheme IO_ -> VIO_ (io_ -> vio_) - drop "io" prefix from io_completion locals --- xen/arch/x86/hvm/emulate.c | 210 +++++++++++++++++++---------------= ---- xen/arch/x86/hvm/hvm.c | 2 +- xen/arch/x86/hvm/io.c | 32 +++--- xen/arch/x86/hvm/ioreq.c | 6 +- xen/arch/x86/hvm/svm/nestedsvm.c | 2 +- xen/arch/x86/hvm/vmx/realmode.c | 8 +- xen/common/ioreq.c | 26 ++--- xen/include/asm-x86/hvm/emulate.h | 2 +- xen/include/asm-x86/hvm/vcpu.h | 11 -- xen/include/xen/ioreq.h | 2 +- xen/include/xen/sched.h | 19 ++++ 11 files changed, 164 insertions(+), 156 deletions(-) diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 4d62199..21051ce 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -140,15 +140,15 @@ static const struct hvm_io_handler ioreq_server_handl= er =3D { */ void hvmemul_cancel(struct vcpu *v) { - struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &v->arch.hvm.hvm_io; =20 - vio->io_req.state =3D STATE_IOREQ_NONE; - vio->io_completion =3D HVMIO_no_completion; - vio->mmio_cache_count =3D 0; - vio->mmio_insn_bytes =3D 0; - vio->mmio_access =3D (struct npfec){}; - vio->mmio_retry =3D false; - vio->g2m_ioport =3D NULL; + v->io.req.state =3D STATE_IOREQ_NONE; + v->io.completion =3D VIO_no_completion; + hvio->mmio_cache_count =3D 0; + hvio->mmio_insn_bytes =3D 0; + hvio->mmio_access =3D (struct npfec){}; + hvio->mmio_retry =3D false; + hvio->g2m_ioport =3D NULL; =20 hvmemul_cache_disable(v); } @@ -159,7 +159,7 @@ static int hvmemul_do_io( { struct vcpu *curr =3D current; struct domain *currd =3D curr->domain; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct vcpu_io *vio =3D &curr->io; ioreq_t p =3D { .type =3D is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO, .addr =3D addr, @@ -184,13 +184,13 @@ static int hvmemul_do_io( return X86EMUL_UNHANDLEABLE; } =20 - switch ( vio->io_req.state ) + switch ( vio->req.state ) { case STATE_IOREQ_NONE: break; case STATE_IORESP_READY: - vio->io_req.state =3D STATE_IOREQ_NONE; - p =3D vio->io_req; + vio->req.state =3D STATE_IOREQ_NONE; + p =3D vio->req; =20 /* Verify the emulation request has been correctly re-issued */ if ( (p.type !=3D (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) || @@ -238,7 +238,7 @@ static int hvmemul_do_io( } ASSERT(p.count); =20 - vio->io_req =3D p; + vio->req =3D p; =20 rc =3D hvm_io_intercept(&p); =20 @@ -247,12 +247,12 @@ static int hvmemul_do_io( * our callers and mirror this into latched state. 
*/ ASSERT(p.count <=3D *reps); - *reps =3D vio->io_req.count =3D p.count; + *reps =3D vio->req.count =3D p.count; =20 switch ( rc ) { case X86EMUL_OKAY: - vio->io_req.state =3D STATE_IOREQ_NONE; + vio->req.state =3D STATE_IOREQ_NONE; break; case X86EMUL_UNHANDLEABLE: { @@ -305,7 +305,7 @@ static int hvmemul_do_io( if ( s =3D=3D NULL ) { rc =3D X86EMUL_RETRY; - vio->io_req.state =3D STATE_IOREQ_NONE; + vio->req.state =3D STATE_IOREQ_NONE; break; } =20 @@ -316,7 +316,7 @@ static int hvmemul_do_io( if ( dir =3D=3D IOREQ_READ ) { rc =3D hvm_process_io_intercept(&ioreq_server_handler,= &p); - vio->io_req.state =3D STATE_IOREQ_NONE; + vio->req.state =3D STATE_IOREQ_NONE; break; } } @@ -329,14 +329,14 @@ static int hvmemul_do_io( if ( !s ) { rc =3D hvm_process_io_intercept(&null_handler, &p); - vio->io_req.state =3D STATE_IOREQ_NONE; + vio->req.state =3D STATE_IOREQ_NONE; } else { rc =3D hvm_send_ioreq(s, &p, 0); if ( rc !=3D X86EMUL_RETRY || currd->is_shutting_down ) - vio->io_req.state =3D STATE_IOREQ_NONE; - else if ( !ioreq_needs_completion(&vio->io_req) ) + vio->req.state =3D STATE_IOREQ_NONE; + else if ( !ioreq_needs_completion(&vio->req) ) rc =3D X86EMUL_OKAY; } break; @@ -1005,14 +1005,14 @@ static int hvmemul_phys_mmio_access( * cache indexed by linear MMIO address. */ static struct hvm_mmio_cache *hvmemul_find_mmio_cache( - struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir, bool create) + struct hvm_vcpu_io *hvio, unsigned long gla, uint8_t dir, bool create) { unsigned int i; struct hvm_mmio_cache *cache; =20 - for ( i =3D 0; i < vio->mmio_cache_count; i ++ ) + for ( i =3D 0; i < hvio->mmio_cache_count; i ++ ) { - cache =3D &vio->mmio_cache[i]; + cache =3D &hvio->mmio_cache[i]; =20 if ( gla =3D=3D cache->gla && dir =3D=3D cache->dir ) @@ -1022,13 +1022,13 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cac= he( if ( !create ) return NULL; =20 - i =3D vio->mmio_cache_count; - if( i =3D=3D ARRAY_SIZE(vio->mmio_cache) ) + i =3D hvio->mmio_cache_count; + if( i =3D=3D ARRAY_SIZE(hvio->mmio_cache) ) return NULL; =20 - ++vio->mmio_cache_count; + ++hvio->mmio_cache_count; =20 - cache =3D &vio->mmio_cache[i]; + cache =3D &hvio->mmio_cache[i]; memset(cache, 0, sizeof (*cache)); =20 cache->gla =3D gla; @@ -1037,26 +1037,26 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cac= he( return cache; } =20 -static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gl= a, +static void latch_linear_to_phys(struct hvm_vcpu_io *hvio, unsigned long g= la, unsigned long gpa, bool_t write) { - if ( vio->mmio_access.gla_valid ) + if ( hvio->mmio_access.gla_valid ) return; =20 - vio->mmio_gla =3D gla & PAGE_MASK; - vio->mmio_gpfn =3D PFN_DOWN(gpa); - vio->mmio_access =3D (struct npfec){ .gla_valid =3D 1, - .read_access =3D 1, - .write_access =3D write }; + hvio->mmio_gla =3D gla & PAGE_MASK; + hvio->mmio_gpfn =3D PFN_DOWN(gpa); + hvio->mmio_access =3D (struct npfec){ .gla_valid =3D 1, + .read_access =3D 1, + .write_access =3D write }; } =20 static int hvmemul_linear_mmio_access( unsigned long gla, unsigned int size, uint8_t dir, void *buffer, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpf= n) { - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; unsigned long offset =3D gla & ~PAGE_MASK; - struct hvm_mmio_cache *cache =3D hvmemul_find_mmio_cache(vio, gla, dir= , true); + struct hvm_mmio_cache *cache =3D hvmemul_find_mmio_cache(hvio, gla, di= r, true); unsigned int chunk, buffer_offset =3D 0; paddr_t gpa; unsigned 
long one_rep =3D 1; @@ -1068,7 +1068,7 @@ static int hvmemul_linear_mmio_access( chunk =3D min_t(unsigned int, size, PAGE_SIZE - offset); =20 if ( known_gpfn ) - gpa =3D pfn_to_paddr(vio->mmio_gpfn) | offset; + gpa =3D pfn_to_paddr(hvio->mmio_gpfn) | offset; else { rc =3D hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec, @@ -1076,7 +1076,7 @@ static int hvmemul_linear_mmio_access( if ( rc !=3D X86EMUL_OKAY ) return rc; =20 - latch_linear_to_phys(vio, gla, gpa, dir =3D=3D IOREQ_WRITE); + latch_linear_to_phys(hvio, gla, gpa, dir =3D=3D IOREQ_WRITE); } =20 for ( ;; ) @@ -1122,22 +1122,22 @@ static inline int hvmemul_linear_mmio_write( =20 static bool known_gla(unsigned long addr, unsigned int bytes, uint32_t pfe= c) { - const struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + const struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; =20 if ( pfec & PFEC_write_access ) { - if ( !vio->mmio_access.write_access ) + if ( !hvio->mmio_access.write_access ) return false; } else if ( pfec & PFEC_insn_fetch ) { - if ( !vio->mmio_access.insn_fetch ) + if ( !hvio->mmio_access.insn_fetch ) return false; } - else if ( !vio->mmio_access.read_access ) + else if ( !hvio->mmio_access.read_access ) return false; =20 - return (vio->mmio_gla =3D=3D (addr & PAGE_MASK) && + return (hvio->mmio_gla =3D=3D (addr & PAGE_MASK) && (addr & ~PAGE_MASK) + bytes <=3D PAGE_SIZE); } =20 @@ -1145,7 +1145,7 @@ static int linear_read(unsigned long addr, unsigned i= nt bytes, void *p_data, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctx= t) { pagefault_info_t pfinfo; - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; unsigned int offset =3D addr & ~PAGE_MASK; int rc =3D HVMTRANS_bad_gfn_to_mfn; =20 @@ -1167,7 +1167,7 @@ static int linear_read(unsigned long addr, unsigned i= nt bytes, void *p_data, * we handle this access in the same way to guarantee completion and h= ence * clean up any interim state. */ - if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_READ, false) ) + if ( !hvmemul_find_mmio_cache(hvio, addr, IOREQ_READ, false) ) rc =3D hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfin= fo); =20 switch ( rc ) @@ -1200,7 +1200,7 @@ static int linear_write(unsigned long addr, unsigned = int bytes, void *p_data, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ct= xt) { pagefault_info_t pfinfo; - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; unsigned int offset =3D addr & ~PAGE_MASK; int rc =3D HVMTRANS_bad_gfn_to_mfn; =20 @@ -1222,7 +1222,7 @@ static int linear_write(unsigned long addr, unsigned = int bytes, void *p_data, * we handle this access in the same way to guarantee completion and h= ence * clean up any interim state. */ - if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_WRITE, false) ) + if ( !hvmemul_find_mmio_cache(hvio, addr, IOREQ_WRITE, false) ) rc =3D hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo= ); =20 switch ( rc ) @@ -1599,7 +1599,7 @@ static int hvmemul_cmpxchg( struct vcpu *curr =3D current; unsigned long addr; uint32_t pfec =3D PFEC_page_present | PFEC_write_access; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; int rc; void *mapping =3D NULL; =20 @@ -1625,8 +1625,8 @@ static int hvmemul_cmpxchg( /* Fix this in case the guest is really relying on r-m-w atomicity= . 
*/ return hvmemul_linear_mmio_write(addr, bytes, p_new, pfec, hvmemul_ctxt, - vio->mmio_access.write_access && - vio->mmio_gla =3D=3D (addr & PAGE= _MASK)); + hvio->mmio_access.write_access && + hvio->mmio_gla =3D=3D (addr & PAG= E_MASK)); } =20 switch ( bytes ) @@ -1823,7 +1823,7 @@ static int hvmemul_rep_movs( struct hvm_emulate_ctxt *hvmemul_ctxt =3D container_of(ctxt, struct hvm_emulate_ctxt, ctxt); struct vcpu *curr =3D current; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; unsigned long saddr, daddr, bytes; paddr_t sgpa, dgpa; uint32_t pfec =3D PFEC_page_present; @@ -1846,18 +1846,18 @@ static int hvmemul_rep_movs( if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl =3D=3D 3 ) pfec |=3D PFEC_user_mode; =20 - if ( vio->mmio_access.read_access && - (vio->mmio_gla =3D=3D (saddr & PAGE_MASK)) && + if ( hvio->mmio_access.read_access && + (hvio->mmio_gla =3D=3D (saddr & PAGE_MASK)) && /* * Upon initial invocation don't truncate large batches just beca= use * of a hit for the translation: Doing the guest page table walk = is * cheaper than multiple round trips through the device model. Yet * when processing a response we can always re-use the translatio= n. */ - (vio->io_req.state =3D=3D STATE_IORESP_READY || + (curr->io.req.state =3D=3D STATE_IORESP_READY || ((!df || *reps =3D=3D 1) && PAGE_SIZE - (saddr & ~PAGE_MASK) >=3D *reps * bytes_per_rep)) ) - sgpa =3D pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK); + sgpa =3D pfn_to_paddr(hvio->mmio_gpfn) | (saddr & ~PAGE_MASK); else { rc =3D hvmemul_linear_to_phys(saddr, &sgpa, bytes_per_rep, reps, p= fec, @@ -1867,13 +1867,13 @@ static int hvmemul_rep_movs( } =20 bytes =3D PAGE_SIZE - (daddr & ~PAGE_MASK); - if ( vio->mmio_access.write_access && - (vio->mmio_gla =3D=3D (daddr & PAGE_MASK)) && + if ( hvio->mmio_access.write_access && + (hvio->mmio_gla =3D=3D (daddr & PAGE_MASK)) && /* See comment above. */ - (vio->io_req.state =3D=3D STATE_IORESP_READY || + (curr->io.req.state =3D=3D STATE_IORESP_READY || ((!df || *reps =3D=3D 1) && PAGE_SIZE - (daddr & ~PAGE_MASK) >=3D *reps * bytes_per_rep)) ) - dgpa =3D pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK); + dgpa =3D pfn_to_paddr(hvio->mmio_gpfn) | (daddr & ~PAGE_MASK); else { rc =3D hvmemul_linear_to_phys(daddr, &dgpa, bytes_per_rep, reps, @@ -1892,14 +1892,14 @@ static int hvmemul_rep_movs( =20 if ( sp2mt =3D=3D p2m_mmio_dm ) { - latch_linear_to_phys(vio, saddr, sgpa, 0); + latch_linear_to_phys(hvio, saddr, sgpa, 0); return hvmemul_do_mmio_addr( sgpa, reps, bytes_per_rep, IOREQ_READ, df, dgpa); } =20 if ( dp2mt =3D=3D p2m_mmio_dm ) { - latch_linear_to_phys(vio, daddr, dgpa, 1); + latch_linear_to_phys(hvio, daddr, dgpa, 1); return hvmemul_do_mmio_addr( dgpa, reps, bytes_per_rep, IOREQ_WRITE, df, sgpa); } @@ -1992,7 +1992,7 @@ static int hvmemul_rep_stos( struct hvm_emulate_ctxt *hvmemul_ctxt =3D container_of(ctxt, struct hvm_emulate_ctxt, ctxt); struct vcpu *curr =3D current; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; unsigned long addr, bytes; paddr_t gpa; p2m_type_t p2mt; @@ -2004,13 +2004,13 @@ static int hvmemul_rep_stos( return rc; =20 bytes =3D PAGE_SIZE - (addr & ~PAGE_MASK); - if ( vio->mmio_access.write_access && - (vio->mmio_gla =3D=3D (addr & PAGE_MASK)) && + if ( hvio->mmio_access.write_access && + (hvio->mmio_gla =3D=3D (addr & PAGE_MASK)) && /* See respective comment in MOVS processing. 
*/ - (vio->io_req.state =3D=3D STATE_IORESP_READY || + (curr->io.req.state =3D=3D STATE_IORESP_READY || ((!df || *reps =3D=3D 1) && PAGE_SIZE - (addr & ~PAGE_MASK) >=3D *reps * bytes_per_rep)) ) - gpa =3D pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK); + gpa =3D pfn_to_paddr(hvio->mmio_gpfn) | (addr & ~PAGE_MASK); else { uint32_t pfec =3D PFEC_page_present | PFEC_write_access; @@ -2103,7 +2103,7 @@ static int hvmemul_rep_stos( return X86EMUL_UNHANDLEABLE; =20 case p2m_mmio_dm: - latch_linear_to_phys(vio, addr, gpa, 1); + latch_linear_to_phys(hvio, addr, gpa, 1); return hvmemul_do_mmio_buffer(gpa, reps, bytes_per_rep, IOREQ_WRIT= E, df, p_data); } @@ -2613,18 +2613,18 @@ static const struct x86_emulate_ops hvm_emulate_ops= _no_write =3D { }; =20 /* - * Note that passing HVMIO_no_completion into this function serves as kind + * Note that passing VIO_no_completion into this function serves as kind * of (but not fully) an "auto select completion" indicator. When there's * no completion needed, the passed in value will be ignored in any case. */ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, const struct x86_emulate_ops *ops, - enum hvm_io_completion completion) + enum vio_completion completion) { const struct cpu_user_regs *regs =3D hvmemul_ctxt->ctxt.regs; struct vcpu *curr =3D current; uint32_t new_intr_shadow; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; int rc; =20 /* @@ -2632,45 +2632,45 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt= *hvmemul_ctxt, * untouched if it's already enabled, for re-execution to consume * entries populated by an earlier pass. */ - if ( vio->cache->num_ents > vio->cache->max_ents ) + if ( hvio->cache->num_ents > hvio->cache->max_ents ) { - ASSERT(vio->io_req.state =3D=3D STATE_IOREQ_NONE); - vio->cache->num_ents =3D 0; + ASSERT(curr->io.req.state =3D=3D STATE_IOREQ_NONE); + hvio->cache->num_ents =3D 0; } else - ASSERT(vio->io_req.state =3D=3D STATE_IORESP_READY); + ASSERT(curr->io.req.state =3D=3D STATE_IORESP_READY); =20 - hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn, - vio->mmio_insn_bytes); + hvm_emulate_init_per_insn(hvmemul_ctxt, hvio->mmio_insn, + hvio->mmio_insn_bytes); =20 - vio->mmio_retry =3D 0; + hvio->mmio_retry =3D 0; =20 rc =3D x86_emulate(&hvmemul_ctxt->ctxt, ops); - if ( rc =3D=3D X86EMUL_OKAY && vio->mmio_retry ) + if ( rc =3D=3D X86EMUL_OKAY && hvio->mmio_retry ) rc =3D X86EMUL_RETRY; =20 - if ( !ioreq_needs_completion(&vio->io_req) ) - completion =3D HVMIO_no_completion; - else if ( completion =3D=3D HVMIO_no_completion ) - completion =3D (vio->io_req.type !=3D IOREQ_TYPE_PIO || - hvmemul_ctxt->is_mem_access) ? HVMIO_mmio_completion - : HVMIO_pio_completion; + if ( !ioreq_needs_completion(&curr->io.req) ) + completion =3D VIO_no_completion; + else if ( completion =3D=3D VIO_no_completion ) + completion =3D (curr->io.req.type !=3D IOREQ_TYPE_PIO || + hvmemul_ctxt->is_mem_access) ? 
VIO_mmio_completion + : VIO_pio_completion; =20 - switch ( vio->io_completion =3D completion ) + switch ( curr->io.completion =3D completion ) { - case HVMIO_no_completion: - case HVMIO_pio_completion: - vio->mmio_cache_count =3D 0; - vio->mmio_insn_bytes =3D 0; - vio->mmio_access =3D (struct npfec){}; + case VIO_no_completion: + case VIO_pio_completion: + hvio->mmio_cache_count =3D 0; + hvio->mmio_insn_bytes =3D 0; + hvio->mmio_access =3D (struct npfec){}; hvmemul_cache_disable(curr); break; =20 - case HVMIO_mmio_completion: - case HVMIO_realmode_completion: - BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_bu= f)); - vio->mmio_insn_bytes =3D hvmemul_ctxt->insn_buf_bytes; - memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_byte= s); + case VIO_mmio_completion: + case VIO_realmode_completion: + BUILD_BUG_ON(sizeof(hvio->mmio_insn) < sizeof(hvmemul_ctxt->insn_b= uf)); + hvio->mmio_insn_bytes =3D hvmemul_ctxt->insn_buf_bytes; + memcpy(hvio->mmio_insn, hvmemul_ctxt->insn_buf, hvio->mmio_insn_by= tes); break; =20 default: @@ -2716,7 +2716,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *= hvmemul_ctxt, =20 int hvm_emulate_one( struct hvm_emulate_ctxt *hvmemul_ctxt, - enum hvm_io_completion completion) + enum vio_completion completion) { return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion); } @@ -2754,7 +2754,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned = long gla) guest_cpu_user_regs()); ctxt.ctxt.data =3D &mmio_ro_ctxt; =20 - switch ( rc =3D _hvm_emulate_one(&ctxt, ops, HVMIO_no_completion) ) + switch ( rc =3D _hvm_emulate_one(&ctxt, ops, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: case X86EMUL_UNIMPLEMENTED: @@ -2782,28 +2782,28 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, = unsigned int trapnr, { case EMUL_KIND_NOWRITE: rc =3D _hvm_emulate_one(&ctx, &hvm_emulate_ops_no_write, - HVMIO_no_completion); + VIO_no_completion); break; case EMUL_KIND_SET_CONTEXT_INSN: { struct vcpu *curr =3D current; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; =20 - BUILD_BUG_ON(sizeof(vio->mmio_insn) !=3D + BUILD_BUG_ON(sizeof(hvio->mmio_insn) !=3D sizeof(curr->arch.vm_event->emul.insn.data)); - ASSERT(!vio->mmio_insn_bytes); + ASSERT(!hvio->mmio_insn_bytes); =20 /* * Stash insn buffer into mmio buffer here instead of ctx * to avoid having to add more logic to hvm_emulate_one. 
*/ - vio->mmio_insn_bytes =3D sizeof(vio->mmio_insn); - memcpy(vio->mmio_insn, curr->arch.vm_event->emul.insn.data, - vio->mmio_insn_bytes); + hvio->mmio_insn_bytes =3D sizeof(hvio->mmio_insn); + memcpy(hvio->mmio_insn, curr->arch.vm_event->emul.insn.data, + hvio->mmio_insn_bytes); } /* Fall-through */ default: ctx.set_context =3D (kind =3D=3D EMUL_KIND_SET_CONTEXT_DATA); - rc =3D hvm_emulate_one(&ctx, HVMIO_no_completion); + rc =3D hvm_emulate_one(&ctx, VIO_no_completion); } =20 switch ( rc ) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index bc96947..4ed929c 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -3800,7 +3800,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs) return; } =20 - switch ( hvm_emulate_one(&ctxt, HVMIO_no_completion) ) + switch ( hvm_emulate_one(&ctxt, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: case X86EMUL_UNIMPLEMENTED: diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index ef8286b..dd733e1 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -85,7 +85,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validat= e, const char *descr) =20 hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs()); =20 - switch ( rc =3D hvm_emulate_one(&ctxt, HVMIO_no_completion) ) + switch ( rc =3D hvm_emulate_one(&ctxt, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt, rc); @@ -109,20 +109,20 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *val= idate, const char *descr) bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn, struct npfec access) { - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; =20 - vio->mmio_access =3D access.gla_valid && - access.kind =3D=3D npfec_kind_with_gla - ? access : (struct npfec){}; - vio->mmio_gla =3D gla & PAGE_MASK; - vio->mmio_gpfn =3D gpfn; + hvio->mmio_access =3D access.gla_valid && + access.kind =3D=3D npfec_kind_with_gla + ? 
access : (struct npfec){}; + hvio->mmio_gla =3D gla & PAGE_MASK; + hvio->mmio_gpfn =3D gpfn; return handle_mmio(); } =20 bool handle_pio(uint16_t port, unsigned int size, int dir) { struct vcpu *curr =3D current; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct vcpu_io *vio =3D &curr->io; unsigned int data; int rc; =20 @@ -135,8 +135,8 @@ bool handle_pio(uint16_t port, unsigned int size, int d= ir) =20 rc =3D hvmemul_do_pio_buffer(port, size, dir, &data); =20 - if ( ioreq_needs_completion(&vio->io_req) ) - vio->io_completion =3D HVMIO_pio_completion; + if ( ioreq_needs_completion(&vio->req) ) + vio->completion =3D VIO_pio_completion; =20 switch ( rc ) { @@ -175,7 +175,7 @@ static bool_t g2m_portio_accept(const struct hvm_io_han= dler *handler, { struct vcpu *curr =3D current; const struct hvm_domain *hvm =3D &curr->domain->arch.hvm; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; struct g2m_ioport *g2m_ioport; unsigned int start, end; =20 @@ -185,7 +185,7 @@ static bool_t g2m_portio_accept(const struct hvm_io_han= dler *handler, end =3D start + g2m_ioport->np; if ( (p->addr >=3D start) && (p->addr + p->size <=3D end) ) { - vio->g2m_ioport =3D g2m_ioport; + hvio->g2m_ioport =3D g2m_ioport; return 1; } } @@ -196,8 +196,8 @@ static bool_t g2m_portio_accept(const struct hvm_io_han= dler *handler, static int g2m_portio_read(const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, uint64_t *data) { - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; - const struct g2m_ioport *g2m_ioport =3D vio->g2m_ioport; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; + const struct g2m_ioport *g2m_ioport =3D hvio->g2m_ioport; unsigned int mport =3D (addr - g2m_ioport->gport) + g2m_ioport->mport; =20 switch ( size ) @@ -221,8 +221,8 @@ static int g2m_portio_read(const struct hvm_io_handler = *handler, static int g2m_portio_write(const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, uint64_t data) { - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; - const struct g2m_ioport *g2m_ioport =3D vio->g2m_ioport; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; + const struct g2m_ioport *g2m_ioport =3D hvio->g2m_ioport; unsigned int mport =3D (addr - g2m_ioport->gport) + g2m_ioport->mport; =20 switch ( size ) diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 8393922..c00ee8e 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -40,11 +40,11 @@ bool arch_ioreq_complete_mmio(void) return handle_mmio(); } =20 -bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) +bool arch_vcpu_ioreq_completion(enum vio_completion completion) { - switch ( io_completion ) + switch ( completion ) { - case HVMIO_realmode_completion: + case VIO_realmode_completion: { struct hvm_emulate_ctxt ctxt; =20 diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nested= svm.c index fcfccf7..6d90630 100644 --- a/xen/arch/x86/hvm/svm/nestedsvm.c +++ b/xen/arch/x86/hvm/svm/nestedsvm.c @@ -1266,7 +1266,7 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v) * Delay the injection because this would result in delivering * an interrupt *within* the execution of an instruction. 
*/ - if ( v->arch.hvm.hvm_io.io_req.state !=3D STATE_IOREQ_NONE ) + if ( v->io.req.state !=3D STATE_IOREQ_NONE ) return hvm_intblk_shadow; =20 if ( !nv->nv_vmexit_pending && n2vmcb->exit_int_info.v ) diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmod= e.c index 768f01e..cc23afa 100644 --- a/xen/arch/x86/hvm/vmx/realmode.c +++ b/xen/arch/x86/hvm/vmx/realmode.c @@ -101,7 +101,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *= hvmemul_ctxt) =20 perfc_incr(realmode_emulations); =20 - rc =3D hvm_emulate_one(hvmemul_ctxt, HVMIO_realmode_completion); + rc =3D hvm_emulate_one(hvmemul_ctxt, VIO_realmode_completion); =20 if ( rc =3D=3D X86EMUL_UNHANDLEABLE ) { @@ -153,7 +153,7 @@ void vmx_realmode(struct cpu_user_regs *regs) struct vcpu *curr =3D current; struct hvm_emulate_ctxt hvmemul_ctxt; struct segment_register *sreg; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; unsigned long intr_info; unsigned int emulations =3D 0; =20 @@ -188,7 +188,7 @@ void vmx_realmode(struct cpu_user_regs *regs) =20 vmx_realmode_emulate_one(&hvmemul_ctxt); =20 - if ( vio->io_req.state !=3D STATE_IOREQ_NONE || vio->mmio_retry ) + if ( curr->io.req.state !=3D STATE_IOREQ_NONE || hvio->mmio_retry ) break; =20 /* Stop emulating unless our segment state is not safe */ @@ -202,7 +202,7 @@ void vmx_realmode(struct cpu_user_regs *regs) } =20 /* Need to emulate next time if we've started an IO operation */ - if ( vio->io_req.state !=3D STATE_IOREQ_NONE ) + if ( curr->io.req.state !=3D STATE_IOREQ_NONE ) curr->arch.hvm.vmx.vmx_emulate =3D 1; =20 if ( !curr->arch.hvm.vmx.vmx_emulate && !curr->arch.hvm.vmx.vmx_realmo= de ) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 72b5da0..273683f 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -159,7 +159,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, iore= q_t *p) break; } =20 - p =3D &sv->vcpu->arch.hvm.hvm_io.io_req; + p =3D &sv->vcpu->io.req; if ( ioreq_needs_completion(p) ) p->data =3D data; =20 @@ -171,10 +171,10 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, io= req_t *p) bool handle_hvm_io_completion(struct vcpu *v) { struct domain *d =3D v->domain; - struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; + struct vcpu_io *vio =3D &v->io; struct ioreq_server *s; struct ioreq_vcpu *sv; - enum hvm_io_completion io_completion; + enum vio_completion completion; =20 if ( has_vpci(d) && vpci_process_pending(v) ) { @@ -186,29 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v) if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) return false; =20 - vio->io_req.state =3D ioreq_needs_completion(&vio->io_req) ? + vio->req.state =3D ioreq_needs_completion(&vio->req) ? 
STATE_IORESP_READY : STATE_IOREQ_NONE; =20 msix_write_completion(v); vcpu_end_shutdown_deferral(v); =20 - io_completion =3D vio->io_completion; - vio->io_completion =3D HVMIO_no_completion; + completion =3D vio->completion; + vio->completion =3D VIO_no_completion; =20 - switch ( io_completion ) + switch ( completion ) { - case HVMIO_no_completion: + case VIO_no_completion: break; =20 - case HVMIO_mmio_completion: + case VIO_mmio_completion: return arch_ioreq_complete_mmio(); =20 - case HVMIO_pio_completion: - return handle_pio(vio->io_req.addr, vio->io_req.size, - vio->io_req.dir); + case VIO_pio_completion: + return handle_pio(vio->req.addr, vio->req.size, + vio->req.dir); =20 default: - return arch_vcpu_ioreq_completion(io_completion); + return arch_vcpu_ioreq_completion(completion); } =20 return true; diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/em= ulate.h index 1620cc7..610078b 100644 --- a/xen/include/asm-x86/hvm/emulate.h +++ b/xen/include/asm-x86/hvm/emulate.h @@ -65,7 +65,7 @@ bool __nonnull(1, 2) hvm_emulate_one_insn( const char *descr); int hvm_emulate_one( struct hvm_emulate_ctxt *hvmemul_ctxt, - enum hvm_io_completion completion); + enum vio_completion completion); void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr, unsigned int errcode); diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h index 6c1feda..8adf455 100644 --- a/xen/include/asm-x86/hvm/vcpu.h +++ b/xen/include/asm-x86/hvm/vcpu.h @@ -28,13 +28,6 @@ #include #include =20 -enum hvm_io_completion { - HVMIO_no_completion, - HVMIO_mmio_completion, - HVMIO_pio_completion, - HVMIO_realmode_completion -}; - struct hvm_vcpu_asid { uint64_t generation; uint32_t asid; @@ -52,10 +45,6 @@ struct hvm_mmio_cache { }; =20 struct hvm_vcpu_io { - /* I/O request in flight to device model. */ - enum hvm_io_completion io_completion; - ioreq_t io_req; - /* * HVM emulation: * Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn. diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 7a90873..dffed60 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -105,7 +105,7 @@ void hvm_ioreq_init(struct domain *d); int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const= _op); =20 bool arch_ioreq_complete_mmio(void); -bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); +bool arch_vcpu_ioreq_completion(enum vio_completion completion); int arch_ioreq_server_map_pages(struct ioreq_server *s); void arch_ioreq_server_unmap_pages(struct ioreq_server *s); void arch_ioreq_server_enable(struct ioreq_server *s); diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index ad0d761..7aea2bb 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -147,6 +147,21 @@ void evtchn_destroy_final(struct domain *d); /* from c= omplete_domain_destroy */ =20 struct waitqueue_vcpu; =20 +enum vio_completion { + VIO_no_completion, + VIO_mmio_completion, + VIO_pio_completion, +#ifdef CONFIG_X86 + VIO_realmode_completion, +#endif +}; + +struct vcpu_io { + /* I/O request in flight to device model. 
+    /* I/O request in flight to device model. */
+    enum vio_completion  completion;
+    ioreq_t              req;
+};
+
 struct vcpu
 {
     int              vcpu_id;
@@ -258,6 +273,10 @@ struct vcpu
     struct vpci_vcpu vpci;
 
     struct arch_vcpu arch;
+
+#ifdef CONFIG_IOREQ_SERVER
+    struct vcpu_io io;
+#endif
 };
 
 struct sched_unit {
-- 
2.7.4

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu,
    George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini,
    Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V4 11/24] xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
Date: Tue, 12 Jan 2021 23:52:19 +0200
Message-Id: <1610488352-18494-12-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

As the x86 implementation of XENMEM_resource_ioreq_server can be re-used
on Arm later on, this patch makes it common and removes
arch_acquire_resource() as unneeded.

Also re-order #include-s alphabetically.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.
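For illustration, once the handling is common, an emulator running outside
the hypervisor reaches the ioreq server pages through the generic
acquire-resource path. The sketch below is illustrative only and not part
of this patch: it assumes the stable libxenforeignmemory API, and
map_ioreq_pages() together with its frame-layout comments is a
hypothetical helper.

    #include <sys/mman.h>
    #include <xenforeignmemory.h>
    #include <xen/memory.h>        /* XENMEM_resource_ioreq_server */
    #include <xen/hvm/dm_op.h>     /* ioservid_t */

    /*
     * Hypothetical helper: map the buffered-ioreq page (frame 0) and the
     * synchronous ioreq page (frame 1) of an already created ioreq server
     * for guest "domid".  Returns the mapping, or NULL on error.
     */
    static void *map_ioreq_pages(domid_t domid, ioservid_t ioservid,
                                 xenforeignmemory_resource_handle **fres)
    {
        xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
        void *addr = NULL;

        if ( !fmem )
            return NULL;

        *fres = xenforeignmemory_map_resource(fmem, domid,
                                              XENMEM_resource_ioreq_server,
                                              ioservid, 0 /* frame */,
                                              2 /* nr_frames */, &addr,
                                              PROT_READ | PROT_WRITE, 0);
        return *fres ? addr : NULL;
    }

With XENMEM_resource_ioreq_server handled in common code, the same
consumer-side path can serve x86 and, later, Arm callers alike.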
Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Jan Beulich
[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Paul Durrant
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - don't wrap #include
   - limit the number of #ifdef-s
   - re-order #include-s alphabetically

Changes V3 -> V4:
   - rebase
   - Add Jan's R-b
---
 xen/arch/x86/mm.c        | 44 ---------------------------
 xen/common/memory.c      | 63 +++++++++++++++++++++++++++++++---------
 xen/include/asm-arm/mm.h |  8 ------
 xen/include/asm-x86/mm.h |  4 ---
 4 files changed, 51 insertions(+), 68 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f6e128e..54ac398 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4587,50 +4587,6 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     return err || s > e ? err : _handle_iomem_range(s, e, p);
 }
 
-int arch_acquire_resource(struct domain *d, unsigned int type,
-                          unsigned int id, unsigned long frame,
-                          unsigned int nr_frames, xen_pfn_t mfn_list[])
-{
-    int rc;
-
-    switch ( type )
-    {
-#ifdef CONFIG_HVM
-    case XENMEM_resource_ioreq_server:
-    {
-        ioservid_t ioservid = id;
-        unsigned int i;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            break;
-
-        if ( id != (unsigned int)ioservid )
-            break;
-
-        rc = 0;
-        for ( i = 0; i < nr_frames; i++ )
-        {
-            mfn_t mfn;
-
-            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
-            if ( rc )
-                break;
-
-            mfn_list[i] = mfn_x(mfn);
-        }
-        break;
-    }
-#endif
-
-    default:
-        rc = -EOPNOTSUPP;
-        break;
-    }
-
-    return rc;
-}
-
 long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index b21b6c4..7e560b5 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -8,22 +8,23 @@
  */
 
 #include
-#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
 #include
+#include
 #include
+#include
+#include
 #include
 #include
 #include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -1090,6 +1091,40 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
     return 0;
 }
 
+static int acquire_ioreq_server(struct domain *d,
+                                unsigned int id,
+                                unsigned long frame,
+                                unsigned int nr_frames,
+                                xen_pfn_t mfn_list[])
+{
+#ifdef CONFIG_IOREQ_SERVER
+    ioservid_t ioservid = id;
+    unsigned int i;
+    int rc;
+
+    if ( !is_hvm_domain(d) )
+        return -EINVAL;
+
+    if ( id != (unsigned int)ioservid )
+        return -EINVAL;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        mfn_t mfn;
+
+        rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+        if ( rc )
+            return rc;
+
+        mfn_list[i] = mfn_x(mfn);
+    }
+
+    return 0;
+#else
+    return -EOPNOTSUPP;
+#endif
+}
+
 static int acquire_resource(
     XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
 {
@@ -1148,9 +1183,13 @@ static int acquire_resource(
                                  mfn_list);
         break;
 
+    case XENMEM_resource_ioreq_server:
+        rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames,
+                                  mfn_list);
+        break;
+
     default:
-        rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame,
-                                   xmar.nr_frames, mfn_list);
+        rc = -EOPNOTSUPP;
         break;
     }
 
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index f8ba49b..0b7de31 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -358,14 +358,6 @@ static inline void put_page_and_type(struct page_info *page)
 
 void clear_and_clean_page(struct page_info *page);
 
-static inline
-int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
-                          unsigned long frame, unsigned int nr_frames,
-                          xen_pfn_t mfn_list[])
-{
-    return -EOPNOTSUPP;
-}
-
 unsigned int arch_get_dma_bitsize(void);
 
 #endif /*  __ARCH_ARM_MM__ */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index deeba75..859214e 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -639,8 +639,4 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }
 
-int arch_acquire_resource(struct domain *d, unsigned int type,
-                          unsigned int id, unsigned long frame,
-                          unsigned int nr_frames, xen_pfn_t mfn_list[]);
-
 #endif /* __ASM_X86_MM_H__ */
-- 
2.7.4

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Jan Beulich, Andrew Cooper, Roger Pau Monné,
    Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini,
    Paul Durrant, Jun Nakajima, Kevin Tian, Julien Grall
Subject: [PATCH V4 12/24] xen/ioreq: Remove "hvm" prefixes from involved function names
Date: Tue, 12 Jan 2021 23:52:20 +0200
Message-Id: <1610488352-18494-13-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch removes "hvm" prefixes and infixes from IOREQ related function
names in the common code and performs a renaming where appropriate
according to the more consistent new naming scheme:
- IOREQ server functions should start with "ioreq_server_"
- IOREQ functions should start with "ioreq_"

A few function names are clarified to better fit into their purposes:
handle_hvm_io_completion -> vcpu_ioreq_handle_completion
hvm_io_pending           -> vcpu_ioreq_pending
hvm_ioreq_init           -> ioreq_domain_init
hvm_alloc_ioreq_mfn      -> ioreq_server_alloc_mfn
hvm_free_ioreq_mfn       -> ioreq_server_free_mfn

Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Jan Beulich
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Paul Durrant
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - update patch according to the "legacy interface" being x86 specific
   - update patch description
   - rename everything touched according to new naming scheme

Changes V3 -> V4:
   - rebase
   - rename ioreq_update_evtchn() to ioreq_server_update_evtchn()
   - add Jan's R-b
---
 xen/arch/x86/hvm/dm.c       |   4 +-
 xen/arch/x86/hvm/emulate.c  |   6 +-
 xen/arch/x86/hvm/hvm.c      |  10 +--
 xen/arch/x86/hvm/io.c       |   6 +-
 xen/arch/x86/hvm/ioreq.c    |   2 +-
 xen/arch/x86/hvm/stdvga.c   |   4 +-
 xen/arch/x86/hvm/vmx/vvmx.c |   2 +-
 xen/common/ioreq.c          | 202 ++++++++++++++++++++++-------------------
 xen/common/memory.c         |   2 +-
 xen/include/xen/ioreq.h     |  30 +++----
 10 files changed, 134 insertions(+), 134 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index dc8e47d..f770536 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -415,8 +415,8 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         if ( first_gfn == 0 )
-            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
-                                                  data->type, data->flags);
+            rc = ioreq_server_map_mem_type(d, data->id,
+                                           data->type, data->flags);
         else
             rc = 0;
 
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 21051ce..425c8dd 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -261,7 +261,7 @@ static int hvmemul_do_io(
      * an ioreq server that can handle it.
      *
      * Rules:
-     * A> PIO or MMIO accesses run through hvm_select_ioreq_server() to
+     * A> PIO or MMIO accesses run through ioreq_server_select() to
      *    choose the ioreq server by range. If no server is found, the access
      *    is ignored.
      *
@@ -323,7 +323,7 @@ static int hvmemul_do_io(
     }
 
     if ( !s )
-        s = hvm_select_ioreq_server(currd, &p);
+        s = ioreq_server_select(currd, &p);
 
     /* If there is no suitable backing DM, just ignore accesses */
     if ( !s )
@@ -333,7 +333,7 @@ static int hvmemul_do_io(
     }
     else
     {
-        rc = hvm_send_ioreq(s, &p, 0);
+        rc = ioreq_send(s, &p, 0);
         if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
             vio->req.state = STATE_IOREQ_NONE;
         else if ( !ioreq_needs_completion(&vio->req) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4ed929c..0d7bb42 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -546,7 +546,7 @@ void hvm_do_resume(struct vcpu *v)
 
     pt_restore_timer(v);
 
-    if ( !handle_hvm_io_completion(v) )
+    if ( !vcpu_ioreq_handle_completion(v) )
         return;
 
     if ( unlikely(v->arch.vm_event) )
@@ -677,7 +677,7 @@ int hvm_domain_initialise(struct domain *d)
     register_g2m_portio_handler(d);
     register_vpci_portio_handler(d);
 
-    hvm_ioreq_init(d);
+    ioreq_domain_init(d);
 
     hvm_init_guest_time(d);
 
@@ -739,7 +739,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
 
     viridian_domain_deinit(d);
 
-    hvm_destroy_all_ioreq_servers(d);
+    ioreq_server_destroy_all(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1582,7 +1582,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc )
         goto fail5;
 
-    rc = hvm_all_ioreq_servers_add_vcpu(d, v);
+    rc = ioreq_server_add_vcpu_all(d, v);
     if ( rc != 0 )
         goto fail6;
 
@@ -1618,7 +1618,7 @@ void hvm_vcpu_destroy(struct vcpu *v)
 {
     viridian_vcpu_deinit(v);
 
-    hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
+    ioreq_server_remove_vcpu_all(v->domain, v);
 
     if ( hvm_altp2m_supported() )
         altp2m_vcpu_destroy(v);
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index dd733e1..66a37ee 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -60,7 +60,7 @@ void send_timeoffset_req(unsigned long timeoff)
     if ( timeoff == 0 )
         return;
 
-    if ( hvm_broadcast_ioreq(&p, true) != 0 )
+    if ( ioreq_broadcast(&p, true) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
@@ -74,7 +74,7 @@ void send_invalidate_req(void)
         .data = ~0UL, /* flush all */
     };
 
-    if ( hvm_broadcast_ioreq(&p, false) != 0 )
+    if ( ioreq_broadcast(&p, false) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
 }
 
@@ -155,7 +155,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
          * We should not advance RIP/EIP if the domain is shutting down or
          * if X86EMUL_RETRY has been returned by an internal handler.
          */
-        if ( curr->domain->is_shutting_down || !hvm_io_pending(curr) )
+        if ( curr->domain->is_shutting_down || !vcpu_ioreq_pending(curr) )
             return false;
         break;
 
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index c00ee8e..5c9f3a5 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -153,7 +153,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     {
         /*
          * If a page has already been allocated (which will happen on
-         * demand if hvm_get_ioreq_server_frame() is called), then
+         * demand if ioreq_server_get_frame() is called), then
          * mapping a guest frame is not permitted.
          */
         if ( gfn_eq(iorp->gfn, INVALID_GFN) )
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index ee13449..ab9781d 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
     }
 
  done:
-    srv = hvm_select_ioreq_server(current->domain, &p);
+    srv = ioreq_server_select(current->domain, &p);
     if ( !srv )
         return X86EMUL_UNHANDLEABLE;
 
-    return hvm_send_ioreq(srv, &p, 1);
+    return ioreq_send(srv, &p, 1);
 }
 
 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 0ddb6a4..e9f94da 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1517,7 +1517,7 @@ void nvmx_switch_guest(void)
      * don't want to continue as this setup is not implemented nor supported
      * as of right now.
      */
-    if ( hvm_io_pending(v) )
+    if ( vcpu_ioreq_pending(v) )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 273683f..d233a49 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -59,7 +59,7 @@ static struct ioreq_server *get_ioreq_server(const struct domain *d,
  * Iterate over all possible ioreq servers.
  *
  * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
+ *       ioreq servers are favoured in ioreq_server_select().
  *       This is a semantic that previously existed when ioreq servers
  *       were held in a linked list.
  */
@@ -106,12 +106,12 @@ static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
     return NULL;
 }
 
-bool hvm_io_pending(struct vcpu *v)
+bool vcpu_ioreq_pending(struct vcpu *v)
 {
     return get_pending_vcpu(v, NULL);
 }
 
-static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
+static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -168,7 +168,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
-bool handle_hvm_io_completion(struct vcpu *v)
+bool vcpu_ioreq_handle_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct vcpu_io *vio = &v->io;
@@ -183,7 +183,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     }
 
     sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
+    if ( sv && !wait_for_io(sv, get_ioreq(s, v)) )
         return false;
 
     vio->req.state = ioreq_needs_completion(&vio->req) ?
@@ -214,7 +214,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
+static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
@@ -223,7 +223,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
     {
         /*
          * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
+         * on demand if ioreq_server_get_info() is called), then
          * allocating a page is not permitted.
         */
        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -262,7 +262,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
    return -ENOMEM;
 }
 
-static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
+static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
 {
    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
    struct page_info *page = iorp->page;
@@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
-                                    struct ioreq_vcpu *sv)
+static void ioreq_server_update_evtchn(struct ioreq_server *s,
+                                       struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));
 
@@ -314,8 +314,8 @@ static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
     }
 }
 
-static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
-                                     struct vcpu *v)
+static int ioreq_server_add_vcpu(struct ioreq_server *s,
+                                 struct vcpu *v)
 {
     struct ioreq_vcpu *sv;
     int rc;
@@ -350,7 +350,7 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
     list_add(&sv->list_entry, &s->ioreq_vcpu_list);
 
     if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
+        ioreq_server_update_evtchn(s, sv);
 
     spin_unlock(&s->lock);
     return 0;
@@ -366,8 +366,8 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
-                                         struct vcpu *v)
+static void ioreq_server_remove_vcpu(struct ioreq_server *s,
+                                     struct vcpu *v)
 {
     struct ioreq_vcpu *sv;
 
@@ -394,7 +394,7 @@ static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
+static void ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
     struct ioreq_vcpu *sv, *next;
 
@@ -420,28 +420,28 @@ static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
+static int ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;
 
-    rc = hvm_alloc_ioreq_mfn(s, false);
+    rc = ioreq_server_alloc_mfn(s, false);
 
     if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
+        rc = ioreq_server_alloc_mfn(s, true);
 
     if ( rc )
-        hvm_free_ioreq_mfn(s, false);
+        ioreq_server_free_mfn(s, false);
 
     return rc;
 }
 
-static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
+static void ioreq_server_free_pages(struct ioreq_server *s)
 {
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
+    ioreq_server_free_mfn(s, true);
+    ioreq_server_free_mfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
+static void ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;
 
@@ -449,8 +449,8 @@ static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
-                                            ioservid_t id)
+static int ioreq_server_alloc_rangesets(struct ioreq_server *s,
+                                        ioservid_t id)
 {
     unsigned int i;
     int rc;
@@ -482,12 +482,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
     return 0;
 
  fail:
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct ioreq_server *s)
+static void ioreq_server_enable(struct ioreq_server *s)
 {
     struct ioreq_vcpu *sv;
 
@@ -503,13 +503,13 @@ static void hvm_ioreq_server_enable(struct ioreq_server *s)
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
                           list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+        ioreq_server_update_evtchn(s, sv);
 
   done:
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct ioreq_server *s)
+static void ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);
 
@@ -524,9 +524,9 @@ static void hvm_ioreq_server_disable(struct ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_init(struct ioreq_server *s,
-                                 struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+static int ioreq_server_init(struct ioreq_server *s,
+                             struct domain *d, int bufioreq_handling,
+                             ioservid_t id)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
@@ -544,7 +544,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
     s->ioreq.gfn = INVALID_GFN;
     s->bufioreq.gfn = INVALID_GFN;
 
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
+    rc = ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
 
@@ -552,7 +552,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
 
     for_each_vcpu ( d, v )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
+        rc = ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail_add;
     }
@@ -560,23 +560,23 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
     return 0;
 
  fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
+    ioreq_server_remove_all_vcpus(s);
     arch_ioreq_server_unmap_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     put_domain(s->emulator);
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct ioreq_server *s)
+static void ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
+    ioreq_server_remove_all_vcpus(s);
 
     /*
      * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
+     *       ioreq_server_free_pages() in that order.
      *       This is because the former will do nothing if the pages
      *       are not mapped, leaving the page to be freed by the latter.
      *       However if the pages are mapped then the former will set
@@ -584,15 +584,15 @@ static void hvm_ioreq_server_deinit(struct ioreq_server *s)
      *       nothing.
      */
     arch_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
+    ioreq_server_free_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     put_domain(s->emulator);
 }
 
-static int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                                   ioservid_t *id)
+static int ioreq_server_create(struct domain *d, int bufioreq_handling,
+                               ioservid_t *id)
 {
     struct ioreq_server *s;
     unsigned int i;
@@ -620,11 +620,11 @@ static int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
 
     /*
      * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
+     * ioreq_server_init() since the target domain is paused.
      */
     set_ioreq_server(d, i, s);
 
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    rc = ioreq_server_init(s, d, bufioreq_handling, i);
     if ( rc )
     {
         set_ioreq_server(d, i, NULL);
@@ -647,7 +647,7 @@ static int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     return rc;
 }
 
-static int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
+static int ioreq_server_destroy(struct domain *d, ioservid_t id)
 {
     struct ioreq_server *s;
     int rc;
@@ -668,13 +668,13 @@ static int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     arch_ioreq_server_destroy(s);
 
-    hvm_ioreq_server_disable(s);
+    ioreq_server_disable(s);
 
     /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
+     * It is safe to call ioreq_server_deinit() prior to
      * set_ioreq_server() since the target domain is paused.
      */
-    hvm_ioreq_server_deinit(s);
+    ioreq_server_deinit(s);
     set_ioreq_server(d, id, NULL);
 
    domain_unpause(d);
@@ -689,10 +689,10 @@ static int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     return rc;
 }
 
-static int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                                     unsigned long *ioreq_gfn,
-                                     unsigned long *bufioreq_gfn,
-                                     evtchn_port_t *bufioreq_port)
+static int ioreq_server_get_info(struct domain *d, ioservid_t id,
+                                 unsigned long *ioreq_gfn,
+                                 unsigned long *bufioreq_gfn,
+                                 evtchn_port_t *bufioreq_port)
 {
     struct ioreq_server *s;
     int rc;
@@ -736,8 +736,8 @@ static int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn)
+int ioreq_server_get_frame(struct domain *d, ioservid_t id,
+                           unsigned long idx, mfn_t *mfn)
 {
     struct ioreq_server *s;
     int rc;
@@ -756,7 +756,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;
 
-    rc = hvm_ioreq_server_alloc_pages(s);
+    rc = ioreq_server_alloc_pages(s);
     if ( rc )
         goto out;
 
@@ -787,9 +787,9 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     return rc;
 }
 
-static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                            uint32_t type, uint64_t start,
-                                            uint64_t end)
+static int ioreq_server_map_io_range(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint64_t start,
+                                     uint64_t end)
 {
     struct ioreq_server *s;
     struct rangeset *r;
@@ -839,9 +839,9 @@ static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                                uint32_t type, uint64_t start,
-                                                uint64_t end)
+static int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id,
+                                       uint32_t type, uint64_t start,
+                                       uint64_t end)
 {
     struct ioreq_server *s;
     struct rangeset *r;
@@ -899,8 +899,8 @@ static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
  * Support for the emulation of read operations can be added when an ioreq
  * server has such requirement in the future.
  */
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags)
+int ioreq_server_map_mem_type(struct domain *d, ioservid_t id,
+                              uint32_t type, uint32_t flags)
 {
     struct ioreq_server *s;
     int rc;
@@ -934,8 +934,8 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-static int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                                      bool enabled)
+static int ioreq_server_set_state(struct domain *d, ioservid_t id,
+                                  bool enabled)
 {
     struct ioreq_server *s;
     int rc;
@@ -955,9 +955,9 @@ static int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     domain_pause(d);
 
     if ( enabled )
-        hvm_ioreq_server_enable(s);
+        ioreq_server_enable(s);
     else
-        hvm_ioreq_server_disable(s);
+        ioreq_server_disable(s);
 
     domain_unpause(d);
 
@@ -968,7 +968,7 @@ static int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
+int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -978,7 +978,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
+        rc = ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
     }
@@ -995,7 +995,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         if ( !s )
             continue;
 
-        hvm_ioreq_server_remove_vcpu(s, v);
+        ioreq_server_remove_vcpu(s, v);
     }
 
     spin_unlock_recursive(&d->ioreq_server.lock);
@@ -1003,7 +1003,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     return rc;
 }
 
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
+void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -1011,12 +1011,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
-        hvm_ioreq_server_remove_vcpu(s, v);
+        ioreq_server_remove_vcpu(s, v);
 
     spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
-void hvm_destroy_all_ioreq_servers(struct domain *d)
+void ioreq_server_destroy_all(struct domain *d)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -1030,13 +1030,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        hvm_ioreq_server_disable(s);
+        ioreq_server_disable(s);
 
         /*
-         * It is safe to call hvm_ioreq_server_deinit() prior to
+         * It is safe to call ioreq_server_deinit() prior to
          * set_ioreq_server() since the target domain is being destroyed.
          */
-        hvm_ioreq_server_deinit(s);
+        ioreq_server_deinit(s);
         set_ioreq_server(d, id, NULL);
 
         xfree(s);
@@ -1045,8 +1045,8 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
-struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                             ioreq_t *p)
+struct ioreq_server *ioreq_server_select(struct domain *d,
+                                         ioreq_t *p)
 {
     struct ioreq_server *s;
     uint8_t type;
@@ -1101,7 +1101,7 @@ struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
     return NULL;
 }
 
-static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
+static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p)
 {
     struct domain *d = current->domain;
     struct ioreq_page *iorp;
@@ -1194,8 +1194,8 @@ static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
     return IOREQ_STATUS_HANDLED;
 }
 
-int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered)
+int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
+               bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
@@ -1204,7 +1204,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
     ASSERT(s);
 
     if ( buffered )
-        return hvm_send_buffered_ioreq(s, proto_p);
+        return ioreq_send_buffered(s, proto_p);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return IOREQ_STATUS_RETRY;
@@ -1254,7 +1254,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
     return IOREQ_STATUS_UNHANDLED;
 }
 
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
+unsigned int ioreq_broadcast(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
     struct ioreq_server *s;
@@ -1265,14 +1265,14 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
+        if ( ioreq_send(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
             failed++;
     }
 
     return failed;
 }
 
-void hvm_ioreq_init(struct domain *d)
+void ioreq_domain_init(struct domain *d)
 {
     spin_lock_init(&d->ioreq_server.lock);
 
@@ -1296,8 +1296,8 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
         if ( data->pad[0] || data->pad[1] || data->pad[2] )
             break;
 
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+        rc = ioreq_server_create(d, data->handle_bufioreq,
+                                 &data->id);
         break;
     }
 
@@ -1313,12 +1313,12 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
         if ( data->flags & ~valid_flags )
             break;
 
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : (unsigned long *)&data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : (unsigned long *)&data->bufioreq_gfn,
-                                       &data->bufioreq_port);
+        rc = ioreq_server_get_info(d, data->id,
+                                   (data->flags & XEN_DMOP_no_gfns) ?
+                                   NULL : (unsigned long *)&data->ioreq_gfn,
+                                   (data->flags & XEN_DMOP_no_gfns) ?
+                                   NULL : (unsigned long *)&data->bufioreq_gfn,
+                                   &data->bufioreq_port);
         break;
     }
 
@@ -1331,8 +1331,8 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
         if ( data->pad )
             break;
 
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
+        rc = ioreq_server_map_io_range(d, data->id, data->type,
+                                       data->start, data->end);
         break;
     }
 
@@ -1345,8 +1345,8 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
         if ( data->pad )
             break;
 
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
+        rc = ioreq_server_unmap_io_range(d, data->id, data->type,
+                                         data->start, data->end);
         break;
     }
 
@@ -1359,7 +1359,7 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
         if ( data->pad )
             break;
 
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        rc = ioreq_server_set_state(d, data->id, !!data->enabled);
         break;
     }
 
@@ -1372,7 +1372,7 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
         if ( data->pad )
             break;
 
-        rc = hvm_destroy_ioreq_server(d, data->id);
+        rc = ioreq_server_destroy(d, data->id);
         break;
     }
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7e560b5..66828d9 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1112,7 +1112,7 @@ static int acquire_ioreq_server(struct domain *d,
     {
         mfn_t mfn;
 
-        rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+        rc = ioreq_server_get_frame(d, id, frame + i, &mfn);
         if ( rc )
             return rc;
 
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index dffed60..ec7e98d 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -81,26 +81,26 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
 
-bool hvm_io_pending(struct vcpu *v);
-bool handle_hvm_io_completion(struct vcpu *v);
+bool vcpu_ioreq_pending(struct vcpu *v);
+bool vcpu_ioreq_handle_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn);
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags);
+int ioreq_server_get_frame(struct domain *d, ioservid_t id,
+                           unsigned long idx, mfn_t *mfn);
+int ioreq_server_map_mem_type(struct domain *d, ioservid_t id,
+                              uint32_t type, uint32_t flags);
 
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
-void hvm_destroy_all_ioreq_servers(struct domain *d);
+int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v);
+void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v);
+void ioreq_server_destroy_all(struct domain *d);
 
-struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                             ioreq_t *p);
-int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered);
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
+struct ioreq_server *ioreq_server_select(struct domain *d,
+                                         ioreq_t *p);
+int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
+               bool buffered);
+unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
 
-void hvm_ioreq_init(struct domain *d);
+void ioreq_domain_init(struct domain *d);
 
 int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
 
-- 
2.7.4

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Julien Grall, Stefano Stabellini,
    Julien Grall
Subject: [PATCH V4 13/24] xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
Date: Tue, 12 Jan 2021 23:52:21 +0200
Message-Id: <1610488352-18494-14-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

The cmpxchg() in ioreq_send_buffered() operates on memory shared with
the emulator domain (and with the target domain if the legacy interface
is used).

In order to be on the safe side we need to switch to guest_cmpxchg64()
to prevent a domain from DoSing Xen on Arm.

As there is no plan to support the legacy interface on Arm, a page will
be mapped in a single domain at a time, so we can pass s->emulator to
guest_cmpxchg64() safely.

Thankfully the only user of the legacy interface so far is x86, and
there is no concern regarding the atomic operations there.

Please note that the legacy interface *must* not be used on Arm without
revisiting the code.
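For context, a rough sketch of the shared state this compare-exchange
updates, as Xen views it (abridged from xen/include/public/hvm/ioreq.h;
the comments are ours). Both ring cursors live in a single 64-bit word
precisely so that producer and consumer can adjust them with one atomic
operation:

    #include <stdint.h>

    /* Buffered ioreq page shared between Xen and the device model. */
    struct buffered_iopage {
        union bufioreq_pointers {
            struct {
                uint32_t read_pointer;   /* consumer (emulator) cursor */
                uint32_t write_pointer;  /* producer (Xen) cursor */
            };
            uint64_t full;               /* both cursors, one atomic word */
        } ptrs;
        /* ... IOREQ_BUFFER_SLOT_NUM request slots follow ... */
    };

On Arm a plain cmpxchg() on this word is built on exclusives, which a
misbehaving peer mapping the same page could, in principle, cause to
retry indefinitely; roughly speaking, guest_cmpxchg64() bounds that
retry and penalises the sharing domain instead of stalling Xen.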
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Stefano Stabellini
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Paul Durrant
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - move earlier to avoid breaking arm32 compilation
   - add an explanation to commit description and hvm_allow_set_param()
   - pass s->emulator

Changes V2 -> V3:
   - update patch description

Changes V3 -> V4:
   - add Stefano's A-b
   - drop comment from arm/hvm.c
---
 xen/common/ioreq.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index d233a49..d5f4dd3 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -29,6 +29,7 @@
 #include
 #include
 
+#include
 #include
 
 #include
@@ -1185,7 +1186,7 @@ static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p)
 
         new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
         new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        cmpxchg(&pg->ptrs.full, old.full, new.full);
+        guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full);
     }
 
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
-- 
2.7.4

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
    Oleksandr Tyshchenko
Subject: [PATCH V4 14/24] arm/ioreq: Introduce arch specific bits for IOREQ/DM features
Date: Tue, 12 Jan 2021 23:52:22 +0200
Message-Id: <1610488352-18494-15-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch adds basic IOREQ/DM support on Arm. The subsequent patches
will improve functionality and add remaining bits.
The IOREQ/DM features are supposed to be built with the IOREQ_SERVER
option enabled, which is disabled by default on Arm for now.

Please note, the "PIO handling" TODO is expected to be left unaddressed
for the current series. It is not a big issue for now while Xen doesn't
have support for vPCI on Arm. On Arm64 PIO accesses are only used for
PCI IO BARs and we would probably want to expose them to the emulator
as PIO accesses to make a DM completely arch-agnostic. So "PIO handling"
should be implemented when we add support for vPCI.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
[On Arm only] Tested-by: Wei Chen
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
 - was split into:
   - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
   - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
 - update patch description
 - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions:
   - arch_hvm_destroy_ioreq_server()
   - arch_handle_hvm_io_completion()
 - update arch files to include xen/ioreq.h
 - remove HVMOP plumbing
 - rewrite a logic to handle properly the case when hvm_send_ioreq() returns IO_RETRY
 - add a logic to handle properly the handle_hvm_io_completion() return value
 - rename handle_mmio() to ioreq_handle_complete_mmio()
 - move paging_mark_pfn_dirty() to asm-arm/paging.h
 - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h
 - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER
 - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h
 - use gdprintk in try_fwd_ioserv(), remove unneeded prints
 - update list of #include-s
 - move has_vpci() to asm-arm/domain.h
 - add a comment (TODO) to the yet unimplemented handle_pio()
 - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs
   from the arch files, they were already moved to the common code
 - remove set_foreign_p2m_entry() changes, they will be properly implemented
   in the follow-up patch
 - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig
 - remove x86's realmode and other unneeded stubs from xen/ioreq.h
 - clarify ioreq_t p.df usage in try_fwd_ioserv()
 - set ioreq_t p.count to 1 in try_fwd_ioserv()

Changes V1 -> V2:
 - was split into:
   - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
   - xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
 - update the author of a patch
 - update patch description
 - move a loop in leave_hypervisor_to_guest() to a separate patch
 - set IOREQ_SERVER disabled by default
 - remove already clarified /* XXX */
 - replace BUG() by ASSERT_UNREACHABLE() in handle_pio()
 - remove default case for handling the return value of try_handle_mmio()
 - remove struct hvm_domain, enum hvm_io_completion, struct hvm_vcpu_io,
   struct hvm_vcpu from asm-arm/domain.h, these are common materials now
 - update everything according to the recent changes (IOREQ related function
   names don't contain "hvm" prefixes/infixes anymore, IOREQ related fields
   are part of common struct vcpu/domain now, etc)

Changes V2 -> V3:
 - update the patch now that the "legacy interface" is x86 specific
 - add dummy arch hooks
 - remove dummy paging_mark_pfn_dirty()
 - don't include in common ioreq.c
 - don't include in arch ioreq.h
 - remove #define ioreq_params(d, i)

Changes V3 -> V4:
 - rebase
 - update patch according to the renaming IO_ -> VIO_ (io_ -> vio_)
   and misc changes to arch hooks
 - update patch according to the IOREQ related dm-op handling changes
 - don't include from arch header
 - make all arch hooks out-of-line
 - add a comment above IOREQ_STATUS_* #define-s
---
 xen/arch/arm/Makefile           |   2 +
 xen/arch/arm/dm.c               | 122 +++++++++++++++++++++++
 xen/arch/arm/domain.c           |   9 ++
 xen/arch/arm/io.c               |  12 ++-
 xen/arch/arm/ioreq.c            | 213 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c            |  13 +++
 xen/include/asm-arm/domain.h    |   3 +
 xen/include/asm-arm/hvm/ioreq.h |  72 ++++++++++++++
 xen/include/asm-arm/mmio.h      |   1 +
 9 files changed, 446 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/dm.c
 create mode 100644 xen/arch/arm/ioreq.c
 create mode 100644 xen/include/asm-arm/hvm/ioreq.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 512ffdd..16e6523 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -13,6 +13,7 @@ obj-y += cpuerrata.o
 obj-y += cpufeature.o
 obj-y += decode.o
 obj-y += device.o
+obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += domain_build.init.o
 obj-y += domctl.o
@@ -27,6 +28,7 @@ obj-y += guest_atomics.o
 obj-y += guest_walk.o
 obj-y += hvm.o
 obj-y += io.o
+obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
 obj-y += irq.o
 obj-y += kernel.init.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
new file mode 100644
index 0000000..e6dedf4
--- /dev/null
+++ b/xen/arch/arm/dm.c
@@ -0,0 +1,122 @@
+/*
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+static int dm_op(const struct dmop_args *op_args)
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    bool const_op = true;
+    long rc;
+    size_t offset;
+
+    static const uint8_t op_size[] = {
+        [XEN_DMOP_create_ioreq_server]              = sizeof(struct xen_dm_op_create_ioreq_server),
+        [XEN_DMOP_get_ioreq_server_info]            = sizeof(struct xen_dm_op_get_ioreq_server_info),
+        [XEN_DMOP_map_io_range_to_ioreq_server]     = sizeof(struct xen_dm_op_ioreq_server_range),
+        [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
+        [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
+        [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
+    };
+
+    rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
+    if ( rc )
+        return rc;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    offset = offsetof(struct xen_dm_op, u);
+
+    rc = -EFAULT;
+    if ( op_args->buf[0].size < offset )
+        goto out;
+
+    if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) )
+        goto out;
+
+    if ( op.op >= ARRAY_SIZE(op_size) )
+    {
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size));
+
+    if ( op_args->buf[0].size < offset + op_size[op.op] )
+        goto out;
+
+    if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset,
+                                op_size[op.op]) )
+        goto out;
+
+    rc = -EINVAL;
+    if ( op.pad )
+        goto out;
+
+    rc = ioreq_server_dm_op(&op, d, &const_op);
+
+    if ( (!rc || rc == -ERESTART) &&
+         !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
+                                           (void *)&op.u, op_size[op.op]) )
+        rc = -EFAULT;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct dmop_args args;
+    int rc;
+
+    if ( nr_bufs > ARRAY_SIZE(args.buf) )
+        return -E2BIG;
+
+    args.domid = domid;
+    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
+
+    if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) )
+        return -EFAULT;
+
+    rc = dm_op(&args);
+
+    if ( rc == -ERESTART )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 18cafcd..8f55aba 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -696,6 +697,10 @@ int arch_domain_create(struct domain *d,

     ASSERT(config != NULL);

+#ifdef CONFIG_IOREQ_SERVER
+    ioreq_domain_init(d);
+#endif
+
     /* p2m_init relies on some value initialized by the IOMMU subsystem */
     if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 )
         goto fail;
@@ -1014,6 +1019,10 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;

+#ifdef CONFIG_IOREQ_SERVER
+        ioreq_server_destroy_all(d);
+#endif
+
     PROGRESS(xen):
         ret = relinquish_memory(d, &d->xenpage_list);
         if ( ret )
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index ae7ef96..9814481 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -16,6 +16,7 @@
  * GNU General Public License for more details.
  */

+#include
 #include
 #include
 #include
@@ -23,6 +24,7 @@
 #include
 #include
 #include
+#include

 #include "decode.h"

@@ -123,7 +125,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,

     handler = find_mmio_handler(v->domain, info.gpa);
     if ( !handler )
-        return IO_UNHANDLED;
+    {
+        int rc;
+
+        rc = try_fwd_ioserv(regs, v, &info);
+        if ( rc == IO_HANDLED )
+            return handle_ioserv(regs, v);
+
+        return rc;
+    }

     /* All the instructions used on emulated MMIO region should be valid */
     if ( !dabt.valid )
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
new file mode 100644
index 0000000..3c4a24d
--- /dev/null
+++ b/xen/arch/arm/ioreq.c
@@ -0,0 +1,213 @@
+/*
+ * arm/ioreq.c: hardware virtual machine I/O emulation
+ *
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include
+#include
+
+#include
+
+#include
+
+enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
+{
+    const union hsr hsr = { .bits = regs->hsr };
+    const struct hsr_dabt dabt = hsr.dabt;
+    /* Code is similar to handle_read */
+    uint8_t size = (1 << dabt.size) * 8;
+    register_t r = v->io.req.data;
+
+    /* We are done with the IO */
+    v->io.req.state = STATE_IOREQ_NONE;
+
+    if ( dabt.write )
+        return IO_HANDLED;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    set_user_reg(regs, dabt.reg, r);
+
+    return IO_HANDLED;
+}
+
+enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                             struct vcpu *v, mmio_info_t *info)
+{
+    struct vcpu_io *vio = &v->io;
+    ioreq_t p = {
+        .type = IOREQ_TYPE_COPY,
+        .addr = info->gpa,
+        .size = 1 << info->dabt.size,
+        .count = 1,
+        .dir = !info->dabt.write,
+        /*
+         * On x86, df is used by 'rep' instruction to tell the direction
+         * to iterate (forward or backward).
+         * On Arm, all the accesses to MMIO region will do a single
+         * memory access. So for now, we can safely always set to 0.
+         */
+        .df = 0,
+        .data = get_user_reg(regs, info->dabt.reg),
+        .state = STATE_IOREQ_READY,
+    };
+    struct ioreq_server *s = NULL;
+    enum io_state rc;
+
+    switch ( vio->req.state )
+    {
+    case STATE_IOREQ_NONE:
+        break;
+
+    case STATE_IORESP_READY:
+        return IO_HANDLED;
+
+    default:
+        gdprintk(XENLOG_ERR, "wrong state %u\n", vio->req.state);
+        return IO_ABORT;
+    }
+
+    s = ioreq_server_select(v->domain, &p);
+    if ( !s )
+        return IO_UNHANDLED;
+
+    if ( !info->dabt.valid )
+        return IO_ABORT;
+
+    vio->req = p;
+
+    rc = ioreq_send(s, &p, 0);
+    if ( rc != IO_RETRY || v->domain->is_shutting_down )
+        vio->req.state = STATE_IOREQ_NONE;
+    else if ( !ioreq_needs_completion(&vio->req) )
+        rc = IO_HANDLED;
+    else
+        vio->completion = VIO_mmio_completion;
+
+    return rc;
+}
+
+bool arch_ioreq_complete_mmio(void)
+{
+    struct vcpu *v = current;
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const union hsr hsr = { .bits = regs->hsr };
+    paddr_t addr = v->io.req.addr;
+
+    if ( try_handle_mmio(regs, hsr, addr) == IO_HANDLED )
+    {
+        advance_pc(regs, hsr);
+        return true;
+    }
+
+    return false;
+}
+
+bool arch_vcpu_ioreq_completion(enum vio_completion completion)
+{
+    ASSERT_UNREACHABLE();
+    return true;
+}
+
+/*
+ * The "legacy" mechanism of mapping magic pages for the IOREQ servers
+ * is x86 specific, so the following hooks don't need to be implemented on Arm:
+ * - arch_ioreq_server_map_pages
+ * - arch_ioreq_server_unmap_pages
+ * - arch_ioreq_server_enable
+ * - arch_ioreq_server_disable
+ */
+int arch_ioreq_server_map_pages(struct ioreq_server *s)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
+{
+}
+
+void arch_ioreq_server_enable(struct ioreq_server *s)
+{
+}
+
+void arch_ioreq_server_disable(struct ioreq_server *s)
+{
+}
+
+void arch_ioreq_server_destroy(struct ioreq_server *s)
+{
+}
+
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct ioreq_server *s,
+                                   uint32_t flags)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_ioreq_server_map_mem_type_completed(struct domain *d,
+                                              struct ioreq_server *s,
+                                              uint32_t flags)
+{
+}
+
+bool arch_ioreq_server_destroy_all(struct domain *d)
+{
+    return true;
+}
+
+bool arch_ioreq_server_get_type_addr(const struct domain *d,
+                                     const ioreq_t *p,
+                                     uint8_t *type,
+                                     uint64_t *addr)
+{
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return false;
+
+    *type = (p->type == IOREQ_TYPE_PIO) ?
+             XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+    *addr = p->addr;
+
+    return true;
+}
+
+void arch_ioreq_domain_init(struct domain *d)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 22bd1bd..036b13f 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1385,6 +1386,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
 #ifdef CONFIG_HYPFS
     HYPERCALL(hypfs_op, 5),
 #endif
+#ifdef CONFIG_IOREQ_SERVER
+    HYPERCALL(dm_op, 3),
+#endif
 };

 #ifndef NDEBUG
@@ -1956,6 +1960,9 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
     case IO_HANDLED:
         advance_pc(regs, hsr);
         return;
+    case IO_RETRY:
+        /* finish later */
+        return;
     case IO_UNHANDLED:
         /* IO unhandled, try another way to handle it. */
        break;
@@ -2254,6 +2261,12 @@ static void check_for_vcpu_work(void)
 {
     struct vcpu *v = current;

+#ifdef CONFIG_IOREQ_SERVER
+    local_irq_enable();
+    vcpu_ioreq_handle_completion(v);
+    local_irq_disable();
+#endif
+
     if ( likely(!v->arch.need_flush_to_ram) )
         return;

diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6819a3b..c235e5b 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include

 struct hvm_domain
@@ -262,6 +263,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {}

 #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)

+#define has_vpci(d) ({ (void)(d); false; })
+
 #endif /* __ASM_DOMAIN_H__ */

 /*
diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/ioreq.h
new file mode 100644
index 0000000..19e1247
--- /dev/null
+++ b/xen/include/asm-arm/hvm/ioreq.h
@@ -0,0 +1,72 @@
+/*
+ * hvm.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_HVM_IOREQ_H__
+#define __ASM_ARM_HVM_IOREQ_H__
+
+#ifdef CONFIG_IOREQ_SERVER
+enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v);
+enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                             struct vcpu *v, mmio_info_t *info);
+#else
+static inline enum io_state handle_ioserv(struct cpu_user_regs *regs,
+                                          struct vcpu *v)
+{
+    return IO_UNHANDLED;
+}
+
+static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                                           struct vcpu *v, mmio_info_t *info)
+{
+    return IO_UNHANDLED;
+}
+#endif
+
+bool ioreq_complete_mmio(void);
+
+static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
+{
+    /*
+     * TODO: For Arm64, the main user will be PCI. So this should be
+     * implemented when we add support for vPCI.
+     */
+    ASSERT_UNREACHABLE();
+    return true;
+}
+
+static inline void msix_write_completion(struct vcpu *v)
+{
+}
+
+/* This correlation must not be altered */
+#define IOREQ_STATUS_HANDLED   IO_HANDLED
+#define IOREQ_STATUS_UNHANDLED IO_UNHANDLED
+#define IOREQ_STATUS_RETRY     IO_RETRY
+
+#endif /* __ASM_ARM_HVM_IOREQ_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index 8dbfb27..7ab873c 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -37,6 +37,7 @@ enum io_state
     IO_ABORT,       /* The IO was handled by the helper and led to an abort. */
     IO_HANDLED,     /* The IO was successfully handled by the helper. */
     IO_UNHANDLED,   /* The IO was not handled by the helper. */
+    IO_RETRY,       /* Retry the emulation for some reason */
 };

 typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info,
--
2.7.4
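As an aside, the sign-extension logic in handle_ioserv() above can be
exercised in isolation. Below is a small, self-contained sketch with a
hypothetical helper name; it assumes a 64-bit register_t and that the read
handler has already zeroed the bits outside the access size, exactly as the
comment in the patch requires:

    #include <stdint.h>
    #include <stdio.h>

    /* Mirror of the patch's "r |= (~0UL) << size" path, parameterised on
     * the access width in bits. */
    static uint64_t sign_extend(uint64_t r, unsigned int bits)
    {
        if ( bits < 64 && (r & (1ULL << (bits - 1))) )
            r |= ~0ULL << bits;
        return r;
    }

    int main(void)
    {
        /* An 8-bit MMIO read returning 0x80 becomes 0xffffffffffffff80. */
        printf("%#llx\n", (unsigned long long)sign_extend(0x80, 8));
        return 0;
    }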
From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Julien Grall
Subject: [PATCH V4 15/24] xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
Date: Tue, 12 Jan 2021 23:52:23 +0200
Message-Id: <1610488352-18494-16-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds proper handling of the return value of
vcpu_ioreq_handle_completion(), which involves using a loop in
leave_hypervisor_to_guest().

The reason to use an unbounded loop here is the fact that a vCPU
shouldn't continue until the I/O has completed.

The IOREQ code is using wait_on_xen_event_channel(). Yet, this can
still "exit" early if an event has been received. But this doesn't
mean the I/O has completed (it can be just a spurious wake-up). So
we need to check if the I/O has completed and wait again if it
hasn't (we will block the vCPU again until an event is received).
This loop makes sure that all the vCPU works are done before we
return to the guest.

The call chain below:
check_for_vcpu_work -> vcpu_ioreq_handle_completion -> wait_for_io ->
wait_on_xen_event_channel

The worst that can happen here is that the vCPU never runs again
(the I/O never completes). But, in Xen's case, if the I/O never
completes then it most likely means that something went horribly
wrong with the Device Emulator, and it is most likely not safe to
continue. So letting the vCPU spin forever if the I/O never completes
is a safer action than letting it continue and leaving the guest in
an unclear state, and is the best we can do for now.

Please note, using this loop we will not spin forever on a pCPU,
preventing any other vCPUs from being scheduled.
At every loop iteration we will call check_for_pcpu_work(), which
processes pending softirqs. In case of failure, the guest will crash
and the vCPU will be unscheduled. In the normal case, if rescheduling
is necessary (it might be requested by a timer or by a caller in
check_for_vcpu_work(), where wait_for_io() is a preemption point),
the vCPU will be rescheduled to give place to someone else.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
[On Arm only] Tested-by: Wei Chen
Reviewed-by: Stefano Stabellini
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
 - new patch, changes were derived from (+ new explanation):
   arm/ioreq: Introduce arch specific bits for IOREQ/DM features

Changes V2 -> V3:
 - update patch description

Changes V3 -> V4:
 - update patch description and comment in code
---
 xen/arch/arm/traps.c | 38 +++++++++++++++++++++++++++++++++-----
 1 file changed, 33 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 036b13f..4a83e1e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2257,18 +2257,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;

 #ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
     local_irq_enable();
-    vcpu_ioreq_handle_completion(v);
+    handled = vcpu_ioreq_handle_completion(v);
     local_irq_disable();
+
+    if ( !handled )
+        return true;
 #endif

     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;

     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2279,6 +2284,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }

 /*
@@ -2291,8 +2298,29 @@ void leave_hypervisor_to_guest(void)
 {
     local_irq_disable();

-    check_for_vcpu_work();
-    check_for_pcpu_work();
+    /*
+     * The reason to use an unbounded loop here is the fact that vCPU
+     * shouldn't continue until the I/O has completed.
+     *
+     * The worse that can happen here if the vCPU will never run again
+     * (the I/O will never complete). But, in Xen case, if the I/O never
+     * completes then it most likely means that something went horribly
+     * wrong with the Device Emulator. And it is most likely not safe
+     * to continue. So letting the vCPU to spin forever if the I/O never
+     * completes is a safer action than letting it continue and leaving
+     * the guest in unclear state and is the best what we can do for now.
+     *
+     * Please note, using this loop we will not spin forever on a pCPU,
+     * preventing any other vCPUs from being scheduled. At every loop
+     * we will call check_for_pcpu_work() that will process pending
+     * softirqs. In case of failure, the guest will crash and the vCPU
+     * will be unscheduled. In normal case, if the rescheduling is necessary
+     * (might be set by a timer or by a caller in check_for_vcpu_work(),
+ */ + do { + check_for_pcpu_work(); + } while ( check_for_vcpu_work() ); =20 vgic_sync_to_lrs(); =20 --=20 2.7.4 From nobody Thu Apr 25 00:25:37 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1610488770; cv=none; d=zohomail.com; s=zohoarc; b=gOFqqWvOQDvaxI9dQzISSTawfAKWxpa4hs3hsvqgmLAAyQW+HTE6EXACU4j8fYAnvOoqD1yVyU+C8mEmclIEetr1ofSo/50Rs8DPYVgbgMpggl+Tylx9wByX49OjnJR+J6cdYtWFJcwlsSN0p8+5lRzlJ/xMlG65s4Kc2dLWX6E= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1610488770; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=yvuQXcgBTSPbANOLGJhBtlryDzUAQMb8D/9PARVMV4Q=; b=OTo7lmue6sE6Hm2fT0mC1ZxJzmAc5KcZv1rpgYYiHihHV2ApbV0y2A2RSJF67m9hUrymAylMupjUnMkybjbO26njVeHp/JWhwByl2sVJKBN/u0tGo/+PGJ4QaPdxfMFqhuuEF4BKcViI7CCjmfEoc5Cxl5U2r4DA5CabE9RrFiM= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1610488770696810.3267890802907; Tue, 12 Jan 2021 13:59:30 -0800 (PST) Received: from list by lists.xenproject.org with outflank-mailman.66097.117364 (Exim 4.92) (envelope-from ) id 1kzRh3-0004Ir-Hw; Tue, 12 Jan 2021 21:59:13 +0000 Received: by outflank-mailman (output) from mailman id 66097.117364; Tue, 12 Jan 2021 21:59:13 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kzRh2-0004Hb-Lr; Tue, 12 Jan 2021 21:59:12 +0000 Received: by outflank-mailman (input) for mailman id 66097; Tue, 12 Jan 2021 21:59:10 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kzRce-0002PK-Ia for xen-devel@lists.xenproject.org; Tue, 12 Jan 2021 21:54:40 +0000 Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id d38512fd-ee03-4a0f-8db0-7011a21b1bba; Tue, 12 Jan 2021 21:53:11 +0000 (UTC) Received: by mail-wm1-x330.google.com with SMTP id y23so3504487wmi.1 for ; Tue, 12 Jan 2021 13:53:11 -0800 (PST) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id 138sm6574053wma.41.2021.01.12.13.53.09 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 12 Jan 2021 13:53:09 -0800 (PST) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: d38512fd-ee03-4a0f-8db0-7011a21b1bba 
From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Wei Liu, Roger Pau Monné, Julien Grall
Subject: [PATCH V4 16/24] xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
Date: Tue, 12 Jan 2021 23:52:24 +0200
Message-Id: <1610488352-18494-17-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch implements reference counting of foreign entries in
set_foreign_p2m_entry() on Arm. This is a mandatory action if we want
to run an emulator (IOREQ server) in a domain other than dom0, as we
can't trust it to do the right thing if it is not running in dom0. So
we need to grab a reference on the page to avoid it disappearing.

It is valid to always pass "p2m_map_foreign_rw" type to
guest_physmap_add_entry() since the current and foreign domains would
always be different. A case when they are equal would be rejected by
rcu_lock_remote_domain_by_id(). Besides the similar comment in the
code, put a respective ASSERT() to catch incorrect usage in future.

It was tested with the IOREQ feature to confirm that all the pages
given to this function belong to a domain, so we can use the same
approach as for XENMAPSPACE_gmfn_foreign handling in
xenmem_add_to_physmap_one().

This involves adding an extra parameter for the foreign domain to
set_foreign_p2m_entry() and a helper to indicate whether the arch
supports the reference counting of foreign entries; the restriction
for the hardware domain in the common code can be skipped for it.
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
[On Arm only] Tested-by: Wei Chen
Acked-by: Stefano Stabellini
Reviewed-by: Jan Beulich
Reviewed-by: Julien Grall
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
 - new patch, was split from:
   "[RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features"
 - rewrite a logic to handle properly reference in set_foreign_p2m_entry()
   instead of treating foreign entries as p2m_ram_rw

Changes V1 -> V2:
 - rebase according to the recent changes to acquire_resource()
 - update patch description
 - introduce arch_refcounts_p2m()
 - add an explanation why p2m_map_foreign_rw is valid
 - move set_foreign_p2m_entry() to p2m-common.h
 - add const to new parameter

Changes V2 -> V3:
 - update patch description
 - rename arch_refcounts_p2m() to arch_acquire_resource_check()
 - move comment to x86's arch_acquire_resource_check()
 - return rc in Arm's set_foreign_p2m_entry()
 - put a respective ASSERT() into Arm's set_foreign_p2m_entry()

Changes V3 -> V4:
 - update arch_acquire_resource_check() implementation on x86
   and common code which uses it, pass struct domain to the function
 - put ASSERT() to x86/Arm set_foreign_p2m_entry()
 - use arch_acquire_resource_check() in p2m_add_foreign()
   instead of open-coding it
---
 xen/arch/arm/p2m.c           | 26 ++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c        |  9 ++++++---
 xen/common/memory.c          |  9 ++-------
 xen/include/asm-arm/p2m.h    | 19 +++++++++----------
 xen/include/asm-x86/p2m.h    | 19 ++++++++++++++++---
 xen/include/xen/p2m-common.h |  4 ++++
 6 files changed, 63 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4eeb867..d41c4fa 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1380,6 +1380,32 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
     return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }

+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    struct page_info *page = mfn_to_page(mfn);
+    int rc;
+
+    ASSERT(arch_acquire_resource_check(d));
+
+    if ( !get_page(page, fd) )
+        return -EINVAL;
+
+    /*
+     * It is valid to always use p2m_map_foreign_rw here as if this gets
+     * called then d != fd. A case when d == fd would be rejected by
+     * rcu_lock_remote_domain_by_id() earlier. Put a respective ASSERT()
+     * to catch incorrect usage in future.
+     */
+    ASSERT(d != fd);
+
+    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
+    if ( rc )
+        put_page(page);
+
+    return rc;
+}
+
 static struct page_info *p2m_allocate_root(void)
 {
     struct page_info *page;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 71fda06..cbeea85 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1323,8 +1323,11 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
 }

 /* Set foreign mfn in the given guest's p2m table. */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
 {
+    ASSERT(arch_acquire_resource_check(d));
+
     return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign,
                                p2m_get_hostp2m(d)->default_access);
 }
@@ -2579,7 +2582,7 @@ static int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * hvm fixme: until support is added to p2m teardown code to cleanup any
      * foreign entries, limit this to hardware domain only.
      */
-    if ( !is_hardware_domain(tdom) )
+    if ( !arch_acquire_resource_check(tdom) )
         return -EPERM;

     if ( foreigndom == DOMID_XEN )
@@ -2635,7 +2638,7 @@ static int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * will update the m2p table which will result in mfn -> gpfn of dom0
      * and not fgfn of domU.
      */
-    rc = set_foreign_p2m_entry(tdom, gpfn, mfn);
+    rc = set_foreign_p2m_entry(tdom, fdom, gpfn, mfn);
     if ( rc )
         gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. "
                  "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n",
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 66828d9..d625a9b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1138,12 +1138,7 @@ static int acquire_resource(
     xen_pfn_t mfn_list[32];
     int rc;

-    /*
-     * FIXME: Until foreign pages inserted into the P2M are properly
-     *        reference counted, it is unsafe to allow mapping of
-     *        resource pages unless the caller is the hardware domain.
-     */
-    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
+    if ( !arch_acquire_resource_check(currd) )
         return -EACCES;

     if ( copy_from_guest(&xmar, arg, 1) )
@@ -1211,7 +1206,7 @@ static int acquire_resource(

     for ( i = 0; !rc && i < xmar.nr_frames; i++ )
     {
-        rc = set_foreign_p2m_entry(currd, gfn_list[i],
+        rc = set_foreign_p2m_entry(currd, d, gfn_list[i],
                                    _mfn(mfn_list[i]));
         /* rc should be -EIO for any iteration other than the first */
         if ( rc && i )
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 28ca9a8..4f8b3b0 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -161,6 +161,15 @@ typedef enum {
 #endif
 #include

+static inline bool arch_acquire_resource_check(struct domain *d)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is supported on Arm.
+     */
+    return true;
+}
+
 static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 {
@@ -392,16 +401,6 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
     return gfn_add(gfn, 1UL << order);
 }

-static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
-                                        mfn_t mfn)
-{
-    /*
-     * NOTE: If this is implemented then proper reference counting of
-     *       foreign entries will need to be implemented.
-     */
-    return -EOPNOTSUPP;
-}
-
 /*
  * A vCPU has cache enabled only when the MMU is enabled and data cache
  * is enabled.
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 7df2878..1d64c12 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -382,6 +382,22 @@ struct p2m_domain {
 #endif
 #include

+static inline bool arch_acquire_resource_check(struct domain *d)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is not supported for translated domains on x86.
+     *
+     * FIXME: Until foreign pages inserted into the P2M are properly
+     *        reference counted, it is unsafe to allow mapping of
+     *        resource pages unless the caller is the hardware domain.
+     */
+    if ( paging_mode_translate(d) && !is_hardware_domain(d) )
+        return false;
+
+    return true;
+}
+
 /*
  * Updates vCPU's n2pm to match its np2m_base in VMCx12 and returns that np2m.
  */
@@ -647,9 +663,6 @@ int p2m_finish_type_change(struct domain *d,
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);

-/* Set foreign entry in the p2m table (for priv-mapping) */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
-
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 58031a6..b4bc709 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -3,6 +3,10 @@

 #include

+/* Set foreign entry in the p2m table */
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn);
+
 /* Remove a page from a domain's p2m table */
 int __must_check guest_physmap_remove_page(struct domain *d, gfn_t gfn,
                                            mfn_t mfn,
--
2.7.4
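The take-a-reference-then-map discipline added above follows a classic
pattern: pin the foreign page before creating the mapping, and drop the pin
if the mapping fails (or, later, when the entry is removed). A minimal,
self-contained illustration with toy types -- not the Xen page/p2m
structures:

    #include <stdbool.h>
    #include <stdio.h>

    struct page { int refcount; };

    static bool get_page(struct page *pg)
    {
        if ( pg->refcount <= 0 )   /* page already being freed: refuse */
            return false;
        pg->refcount++;
        return true;
    }

    static void put_page(struct page *pg)
    {
        pg->refcount--;
    }

    static int map_foreign(struct page *pg, bool mapping_succeeds)
    {
        if ( !get_page(pg) )       /* pin the foreign page first... */
            return -1;

        if ( !mapping_succeeds )   /* ...and drop the pin on failure */
        {
            put_page(pg);
            return -1;
        }

        return 0;  /* the reference is released when the entry goes away */
    }

    int main(void)
    {
        struct page pg = { .refcount = 1 };
        int rc = map_foreign(&pg, true);

        printf("rc=%d refcount=%d\n", rc, pg.refcount); /* rc=0 refcount=2 */
        return 0;
    }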
From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Paul Durrant, Julien Grall
Subject: [PATCH V4 17/24] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Tue, 12 Jan 2021 23:52:25 +0200
Message-Id: <1610488352-18494-18-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch introduces a helper whose main purpose is to check
if a domain is using IOREQ server(s).

On Arm the current benefit is to avoid calling
vcpu_ioreq_handle_completion() (which implies iterating over all
possible IOREQ servers anyway) on every return in
leave_hypervisor_to_guest() if there are no active servers for
the particular domain.

Also this helper will be used by one of the subsequent patches on Arm.
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
[On Arm only] Tested-by: Wei Chen
Reviewed-by: Paul Durrant
Reviewed-by: Stefano Stabellini
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
 - new patch

Changes V1 -> V2:
 - update patch description
 - guard helper with CONFIG_IOREQ_SERVER
 - remove "hvm" prefix
 - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
 - put suitable ASSERT()s
 - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
 - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()

Changes V2 -> V3:
 - update patch description
 - remove ASSERT()s from the helper, add a comment
 - use #ifdef CONFIG_IOREQ_SERVER inside function body
 - use new ASSERT() construction in set_ioreq_server()

Changes V3 -> V4:
 - update patch description
 - drop per-domain variable "nr_servers"
 - reimplement a helper to count the non-NULL entries
 - make the helper out-of-line
---
 xen/arch/arm/traps.c    | 15 +++++++++------
 xen/common/ioreq.c      | 16 ++++++++++++++++
 xen/include/xen/ioreq.h |  2 ++
 3 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 4a83e1e..35094d8 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2262,14 +2262,17 @@ static bool check_for_vcpu_work(void)
     struct vcpu *v = current;

 #ifdef CONFIG_IOREQ_SERVER
-    bool handled;
+    if ( domain_has_ioreq_server(v->domain) )
+    {
+        bool handled;

-    local_irq_enable();
-    handled = vcpu_ioreq_handle_completion(v);
-    local_irq_disable();
+        local_irq_enable();
+        handled = vcpu_ioreq_handle_completion(v);
+        local_irq_disable();

-    if ( !handled )
-        return true;
+        if ( !handled )
+            return true;
+    }
 #endif

     if ( likely(!v->arch.need_flush_to_ram) )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index d5f4dd3..59f4990 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -80,6 +80,22 @@ static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
     return &p->vcpu_ioreq[v->vcpu_id];
 }

+/*
+ * This should only be used when d == current->domain or when they're
+ * distinct and d is paused. Otherwise the result is stale before
+ * the caller can inspect it.
+ */
+bool domain_has_ioreq_server(const struct domain *d)
+{
+    const struct ioreq_server *s;
+    unsigned int id;
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+        return true;
+
+    return false;
+}
+
 static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
                                            struct ioreq_server **srvp)
 {
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index ec7e98d..f0908af 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -81,6 +81,8 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)

+bool domain_has_ioreq_server(const struct domain *d);
+
 bool vcpu_ioreq_pending(struct vcpu *v);
 bool vcpu_ioreq_handle_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
--
2.7.4
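The helper just added is essentially an "is any slot occupied?" scan with an
early return; counting is unnecessary. A toy stand-alone version of the idiom
(hypothetical types, not the Xen structures), including the staleness caveat
from the comment above:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define MAX_SERVERS 8

    struct server { int id; };

    static bool has_server(struct server *table[], size_t n)
    {
        for ( size_t i = 0; i < n; i++ )
            if ( table[i] )   /* first non-NULL slot settles the question */
                return true;
        return false;
    }

    int main(void)
    {
        struct server s = { .id = 1 };
        struct server *table[MAX_SERVERS] = { NULL, &s };

        /*
         * Unless the owner is paused (or is the caller itself), the answer
         * could be stale by the time it is acted upon.
         */
        printf("%d\n", has_server(table, MAX_SERVERS)); /* prints 1 */
        return 0;
    }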
From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Ian Jackson, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V4 18/24] xen/dm: Introduce xendevicemodel_set_irq_level DM op
Date: Tue, 12 Jan 2021 23:52:26 +0200
Message-Id: <1610488352-18494-19-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch adds the ability for the device emulator to notify the
other end (some entity running in the guest) using an SPI, and
implements the Arm specific bits for it.

The proposed interface allows the emulator to set the logical level
of one of a domain's IRQ lines.

We can't reuse the existing DM op (xen_dm_op_set_isa_irq_level) to
inject an interrupt as the "isa_irq" field is only 8-bit and able
to cover IRQ 0 - 255, whereas we need a wider range (0 - 1020).
Please note, for an edge-triggered interrupt (which is what the virtio-mmio
emulation uses) we only trigger the interrupt on Arm if the level is
asserted (rising edge) and do nothing if the level is deasserted (falling
edge), so the call could have been named "trigger_irq" (without the level
parameter). But, in order to model the line closely (and to be able to
support level-triggered interrupts), we need to know whether the line is
low or high, so the proposed interface has been chosen.

However, it is worth mentioning that in the case of a level-triggered
interrupt, we should keep injecting the interrupt to the guest until the
line is deasserted (this is not covered by the current patch).

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

[On Arm only]
Tested-by: Wei Chen
Acked-by: Stefano Stabellini
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - check incoming parameters in arch_dm_op()
   - add explicit padding to struct xen_dm_op_set_irq_level

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - check that padding is always 0
   - mention that interface is Arm only and only SPIs are supported
     for now
   - allow to set the logical level of a line for non-allocated
     interrupts only
   - add xen_dm_op_set_irq_level_t

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - update patch description
   - update patch according to the IOREQ related dm-op handling changes
---
 tools/include/xendevicemodel.h               |  4 +++
 tools/libs/devicemodel/core.c                | 18 ++++++++++
 tools/libs/devicemodel/libxendevicemodel.map |  1 +
 xen/arch/arm/dm.c                            | 54 +++++++++++++++++++++++-
 xen/include/public/hvm/dm_op.h               | 16 +++++++++
 5 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/tools/include/xendevicemodel.h b/tools/include/xendevicemodel.h
index e877f5c..c06b3c8 100644
--- a/tools/include/xendevicemodel.h
+++ b/tools/include/xendevicemodel.h
@@ -209,6 +209,10 @@ int xendevicemodel_set_isa_irq_level(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t irq,
     unsigned int level);
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int irq,
+    unsigned int level);
+
 /**
  * This function maps a PCI INTx line to a an IRQ line.
  *
diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 4d40639..30bd79f 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -430,6 +430,24 @@ int xendevicemodel_set_isa_irq_level(
     return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, uint32_t irq,
+    unsigned int level)
+{
+    struct xen_dm_op op;
+    struct xen_dm_op_set_irq_level *data;
+
+    memset(&op, 0, sizeof(op));
+
+    op.op = XEN_DMOP_set_irq_level;
+    data = &op.u.set_irq_level;
+
+    data->irq = irq;
+    data->level = level;
+
+    return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_set_pci_link_route(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t link, uint8_t irq)
 {
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index 561c62d..a0c3012 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -32,6 +32,7 @@ VERS_1.2 {
     global:
         xendevicemodel_relocate_memory;
         xendevicemodel_pin_memory_cacheattr;
+        xendevicemodel_set_irq_level;
 } VERS_1.1;
 
 VERS_1.3 {
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index e6dedf4..804830a 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -20,6 +20,8 @@
 #include
 #include
 
+#include
+
 static int dm_op(const struct dmop_args *op_args)
 {
     struct domain *d;
@@ -35,6 +37,7 @@ static int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
         [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
         [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
+        [XEN_DMOP_set_irq_level]                    = sizeof(struct xen_dm_op_set_irq_level),
     };
 
     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
@@ -73,7 +76,56 @@ static int dm_op(const struct dmop_args *op_args)
     if ( op.pad )
         goto out;
 
-    rc = ioreq_server_dm_op(&op, d, &const_op);
+    switch ( op.op )
+    {
+    case XEN_DMOP_set_irq_level:
+    {
+        const struct xen_dm_op_set_irq_level *data =
+            &op.u.set_irq_level;
+        unsigned int i;
+
+        /* Only SPIs are supported */
+        if ( (data->irq < NR_LOCAL_IRQS) || (data->irq >= vgic_num_irqs(d)) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        if ( data->level != 0 && data->level != 1 )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        /* Check that padding is always 0 */
+        for ( i = 0; i < sizeof(data->pad); i++ )
+        {
+            if ( data->pad[i] )
+            {
+                rc = -EINVAL;
+                break;
+            }
+        }
+
+        /*
+         * Allow to set the logical level of a line for non-allocated
+         * interrupts only.
+         */
+        if ( test_bit(data->irq, d->arch.vgic.allocated_irqs) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        vgic_inject_irq(d, NULL, data->irq, data->level);
+        rc = 0;
+        break;
+    }
+
+    default:
+        rc = ioreq_server_dm_op(&op, d, &const_op);
+        break;
+    }
 
     if ( (!rc || rc == -ERESTART) &&
          !const_op &&
          copy_to_guest_offset(op_args->buf[0].h, offset,
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 66cae1a..1f70d58 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -434,6 +434,21 @@ struct xen_dm_op_pin_memory_cacheattr {
 };
 typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;
 
+/*
+ * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
+ *                         IRQ lines (currently Arm only).
+ * Only SPIs are supported.
+ */
+#define XEN_DMOP_set_irq_level 19
+
+struct xen_dm_op_set_irq_level {
+    uint32_t irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+    uint8_t pad[3];
+};
+typedef struct xen_dm_op_set_irq_level xen_dm_op_set_irq_level_t;
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -447,6 +462,7 @@ struct xen_dm_op {
         xen_dm_op_track_dirty_vram_t track_dirty_vram;
         xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
         xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
+        xen_dm_op_set_irq_level_t set_irq_level;
         xen_dm_op_set_pci_link_route_t set_pci_link_route;
         xen_dm_op_modified_memory_t modified_memory;
         xen_dm_op_set_mem_type_t set_mem_type;
-- 
2.7.4
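
[Editor's note: as an illustration of how an out-of-Xen device emulator
might drive the new op. This is a sketch, not part of the patch; the domid
value and error handling are placeholders, and GUEST_VIRTIO_MMIO_SPI comes
from a later patch in this series.]

    /* Illustrative emulator-side call: pulse the edge-triggered
     * virtio-mmio SPI by asserting the line (rising edge). */
    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);

    if ( xendevicemodel_set_irq_level(dmod, domid, GUEST_VIRTIO_MMIO_SPI,
                                      1 /* asserted */) < 0 )
        fprintf(stderr, "set_irq_level failed\n");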

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
 Volodymyr Babchuk, Julien Grall
Subject: [PATCH V4 19/24] xen/arm: io: Abstract sign-extension
Date: Tue, 12 Jan 2021 23:52:27 +0200
Message-Id: <1610488352-18494-20-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

In order to avoid code duplication (both handle_read() and handle_ioserv()
contain the same code for the sign-extension), move this code into a common
helper to be used by both.
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Stefano Stabellini
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes here, but in new patch:
     "xen/arm: io: Harden sign extension check"
---
 xen/arch/arm/io.c           | 18 ++----------------
 xen/arch/arm/ioreq.c        | 17 +----------------
 xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
 3 files changed, 27 insertions(+), 32 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 9814481..307c521 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "decode.h"
@@ -40,26 +41,11 @@ static enum io_state handle_read(const struct mmio_handler *handler,
      * setting r).
      */
     register_t r = 0;
-    uint8_t size = (1 << dabt.size) * 8;
 
     if ( !handler->ops->read(v, info, &r, handler->priv) )
         return IO_ABORT;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index 3c4a24d..40b9e59 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     const union hsr hsr = { .bits = regs->hsr };
     const struct hsr_dabt dabt = hsr.dabt;
     /* Code is similar to handle_read */
-    uint8_t size = (1 << dabt.size) * 8;
     register_t r = v->io.req.data;
 
     /* We are done with the IO */
@@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     if ( dabt.write )
         return IO_HANDLED;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c378..e301c44 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
            (unsigned long)abort_guest_exit_end == regs->pc;
 }
 
+/* Check whether the sign extension is required and perform it */
+static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
+{
+    uint8_t size = (1 << dabt.size) * 8;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    return r;
+}
+
 #endif /* __ASM_ARM_TRAPS__ */
 /*
  * Local variables:
-- 
2.7.4
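
[Editor's note: to see the helper's arithmetic in isolation, a standalone
sketch (plain C, outside Xen) of sign-extending an 8-bit read into a 64-bit
register value.]

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long r = 0x80;  /* byte read with the sign bit set */
        uint8_t size = 8;        /* (1 << dabt.size) * 8 for a byte access */

        if ( r & (1UL << (size - 1)) )
            r |= (~0UL) << size; /* 0x80 -> 0xffffffffffffff80 on 64-bit */

        printf("%#lx\n", r);
        return 0;
    }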

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
 Volodymyr Babchuk, Julien Grall
Subject: [PATCH V4 20/24] xen/arm: io: Harden sign extension check
Date: Tue, 12 Jan 2021 23:52:28 +0200
Message-Id: <1610488352-18494-21-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

In an ideal world we would never get undefined behaviour when propagating
the sign bit, since that bit can only be set for an access size smaller
than the register size (i.e. byte/half-word for aarch32,
byte/half-word/word for aarch64).

In the real world we need to care about a *possible* hardware bug, such as
advertising sign extension for a full 64-bit (or 32-bit) access on Arm64
(resp. Arm32).

So harden the code a bit more to prevent undefined behaviour when
propagating the sign bit in case of buggy hardware.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Stefano Stabellini
Reviewed-by: Volodymyr Babchuk
Tested-by: Wei Chen
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V3 -> V4:
   - new patch
---
 xen/include/asm-arm/traps.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index e301c44..992d537 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -93,7 +93,8 @@ static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
      * Note that we expect the read handler to have zeroed the bits
      * outside the requested access size.
      */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    if ( dabt.sign && (size < sizeof(register_t) * 8) &&
+         (r & (1UL << (size - 1))) )
     {
         /*
          * We are relying on register_t using the same as
-- 
2.7.4
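
[Editor's note: the extra check matters because, in C, shifting a value by
at least its bit width is undefined behaviour. A minimal sketch of the case
being guarded against, with a hypothetical buggy-hardware input.]

    /* If buggy hardware advertised a sign-extended full-width access,
     * size would be 64 on arm64 and the unguarded code would evaluate
     * (~0UL) << 64 -- a shift by the operand's full width, which is
     * undefined behaviour. The new (size < sizeof(register_t) * 8)
     * test short-circuits before that shift can be evaluated. */
    if ( dabt.sign && (size < sizeof(register_t) * 8) &&
         (r & (1UL << (size - 1))) )
        r |= (~0UL) << size;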

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Jan Beulich, Andrew Cooper, Roger Pau Monné,
 Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini,
 Paul Durrant, Julien Grall
Subject: [PATCH V4 21/24] xen/ioreq: Make x86's send_invalidate_req() common
Date: Tue, 12 Jan 2021 23:52:29 +0200
Message-Id: <1610488352-18494-22-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As IOREQ is a common feature now, and we also need to invalidate the
qemu/demu mapcache on Arm when the required condition occurs, this patch
moves this function to the common code (and renames it to
ioreq_signal_mapcache_invalidate).

This patch also moves the per-domain qemu_mapcache_invalidate variable out
of the arch sub-struct (and drops the "qemu" prefix). We don't put this
variable inside the #ifdef CONFIG_IOREQ_SERVER at the end of struct domain,
but in the hole next to the group of 5 bools further up, which is more
efficient.

The subsequent patch will add mapcache invalidation handling on Arm.
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Acked-by: Jan Beulich
Reviewed-by: Paul Durrant
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - move send_invalidate_req() to the common code
   - update patch subject/description
   - move qemu_mapcache_invalidate out of the arch sub-struct, update checks
   - remove #if defined(CONFIG_ARM64) from the common code

Changes V1 -> V2:
   - was split into:
     - xen/ioreq: Make x86's send_invalidate_req() common
     - xen/arm: Add mapcache invalidation handling
   - update patch description/subject
   - move Arm bits to a separate patch
   - don't alter the common code, the flag is set by arch code
   - rename send_invalidate_req() to send_invalidate_ioreq()
   - guard qemu_mapcache_invalidate with CONFIG_IOREQ_SERVER
   - use bool instead of bool_t
   - remove blank line between head comment and #include-s

Changes V2 -> V3:
   - update patch description
   - drop "qemu" prefix from the variable name
   - rename send_invalidate_req() to ioreq_signal_mapcache_invalidate()

Changes V3 -> V4:
   - change variable location in struct domain
---
 xen/arch/x86/hvm/hypercall.c     |  9 +++++----
 xen/arch/x86/hvm/io.c            | 14 --------------
 xen/common/ioreq.c               | 14 ++++++++++++++
 xen/include/asm-x86/hvm/domain.h |  1 -
 xen/include/asm-x86/hvm/io.h     |  1 -
 xen/include/xen/ioreq.h          |  1 +
 xen/include/xen/sched.h          |  5 +++++
 7 files changed, 25 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index ac573c8..6d41c56 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -20,6 +20,7 @@
  */
 #include
 #include
+#include
 #include
 
 #include
@@ -47,7 +48,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = compat_memory_op(cmd, arg);
 
     if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
+        curr->domain->mapcache_invalidate = true;
 
     return rc;
 }
@@ -326,9 +327,9 @@ int hvm_hypercall(struct cpu_user_regs *regs)
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
 
-    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
-         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
-        send_invalidate_req();
+    if ( unlikely(currd->mapcache_invalidate) &&
+         test_and_clear_bool(currd->mapcache_invalidate) )
+        ioreq_signal_mapcache_invalidate();
 
     return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 66a37ee..046a8eb 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -64,20 +64,6 @@ void send_timeoffset_req(unsigned long timeoff)
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
-/* Ask ioemu mapcache to invalidate mappings. */
-void send_invalidate_req(void)
-{
-    ioreq_t p = {
-        .type = IOREQ_TYPE_INVALIDATE,
-        .size = 4,
-        .dir = IOREQ_WRITE,
-        .data = ~0UL, /* flush all */
-    };
-
-    if ( ioreq_broadcast(&p, false) != 0 )
-        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
-}
-
 bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 59f4990..050891f 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,6 +35,20 @@
 #include
 #include
 
+/* Ask ioemu mapcache to invalidate mappings. */
+void ioreq_signal_mapcache_invalidate(void)
+{
+    ioreq_t p = {
+        .type = IOREQ_TYPE_INVALIDATE,
+        .size = 4,
+        .dir = IOREQ_WRITE,
+        .data = ~0UL, /* flush all */
+    };
+
+    if ( ioreq_broadcast(&p, false) != 0 )
+        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index b8be1ad..cf959f6 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -122,7 +122,6 @@ struct hvm_domain {
 
     struct viridian_domain *viridian;
 
-    bool_t qemu_mapcache_invalidate;
     bool_t is_s3_suspended;
 
     /*
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index fb64294..3da0136 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -97,7 +97,6 @@ bool relocate_portio_handler(
     unsigned int size);
 
 void send_timeoffset_req(unsigned long timeoff);
-void send_invalidate_req(void);
 bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                   struct npfec);
 bool handle_pio(uint16_t port, unsigned int size, int dir);
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index f0908af..dc47ec7 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -101,6 +101,7 @@ struct ioreq_server *ioreq_server_select(struct domain *d,
 int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p, bool buffered);
 unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
+void ioreq_signal_mapcache_invalidate(void);
 
 void ioreq_domain_init(struct domain *d);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 7aea2bb..5139b44 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -444,6 +444,11 @@ struct domain
      * unpaused for the first time by the systemcontroller.
      */
     bool             creation_finished;
+    /*
+     * Indicates that mapcache invalidation request should be sent to
+     * the device emulator.
+     */
+    bool             mapcache_invalidate;
 
     /* Which guest this guest has privileges on */
     struct domain   *target;
-- 
2.7.4
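
[Editor's note: for reference, a sketch of how a device emulator reacts to
this broadcast. The handler shape and the mapcache_flush_all() helper are
illustrative emulator-side names, not Xen interfaces; only the ioreq fields
come from the patch.]

    /* Illustrative emulator-side dispatch of an incoming ioreq. */
    static void handle_ioreq(const ioreq_t *req)
    {
        switch ( req->type )
        {
        case IOREQ_TYPE_INVALIDATE:
            /* req->data is ~0UL: drop every cached guest mapping. */
            mapcache_flush_all();   /* hypothetical helper */
            break;
        default:
            /* MMIO/PIO emulation ... */
            break;
        }
    }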

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
 Volodymyr Babchuk, Julien Grall
Subject: [PATCH V4 22/24] xen/arm: Add mapcache invalidation handling
Date: Tue, 12 Jan 2021 23:52:30 +0200
Message-Id: <1610488352-18494-23-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

We need to send a mapcache invalidation request to qemu/demu every time a
page gets removed from a guest.

At the moment, the Arm code doesn't explicitly remove the existing mapping
before inserting the new mapping. Instead, this is done implicitly by
__p2m_set_entry(). So we need to recognize the case when the old entry is a
RAM page *and* the new MFN is different, in order to set the corresponding
flag. The most suitable place to do this is p2m_free_entry(), where we can
find the correct leaf type. The invalidation request will be sent in
do_trap_hypercall() later on.

Taking into account the following, do_trap_hypercall() is the best place to
send the invalidation request:
- The only way a guest can modify its P2M on Arm is via a hypercall
- When sending the invalidation request, the vCPU will be blocked until
  all the IOREQ servers have acknowledged the invalidation

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Stefano Stabellini
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is on review:
https://patchwork.kernel.org/patch/11803383/

This patch is on par with the x86 code (whether it is buggy or not).
If there is a need to improve/harden something, this can be done on
a follow-up.
***

Changes V1 -> V2:
   - new patch, some changes were derived from (+ new explanation):
     xen/ioreq: Make x86's invalidate qemu mapcache handling common
   - put setting of the flag into __p2m_set_entry()
   - clarify the conditions when the flag should be set
   - use domain_has_ioreq_server()
   - update do_trap_hypercall() by adding local variable

Changes V2 -> V3:
   - update patch description
   - move check to p2m_free_entry()
   - add a comment
   - use "curr" instead of "v" in do_trap_hypercall()

Changes V3 -> V4:
   - update patch description
   - re-order check in p2m_free_entry() to call domain_has_ioreq_server()
     only if p2m->domain == current->domain
   - add a comment in do_trap_hypercall()
---
 xen/arch/arm/p2m.c   | 25 +++++++++++++++++--------
 xen/arch/arm/traps.c | 20 +++++++++++++++++---
 2 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d41c4fa..26acb95d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1,6 +1,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -749,17 +750,25 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     if ( !p2m_is_valid(entry) )
         return;
 
-    /* Nothing to do but updating the stats if the entry is a super-page. */
-    if ( p2m_is_superpage(entry, level) )
+    if ( p2m_is_superpage(entry, level) || (level == 3) )
     {
-        p2m->stats.mappings[level]--;
-        return;
-    }
+#ifdef CONFIG_IOREQ_SERVER
+        /*
+         * If this gets called (non-recursively) then either the entry
+         * was replaced by an entry with a different base (valid case) or
+         * the shattering of a superpage failed (error case).
+         * So, at worst, a spurious mapcache invalidation might be sent.
+         */
+        if ( (p2m->domain == current->domain) &&
+             domain_has_ioreq_server(p2m->domain) &&
+             p2m_is_ram(entry.p2m.type) )
+            p2m->domain->mapcache_invalidate = true;
+#endif
 
-    if ( level == 3 )
-    {
         p2m->stats.mappings[level]--;
-        p2m_put_l3_page(entry);
+        /* Nothing to do if the entry is a super-page. */
+        if ( level == 3 )
+            p2m_put_l3_page(entry);
         return;
     }
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 35094d8..1070d1b 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1443,6 +1443,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
                               const union hsr hsr)
 {
     arm_hypercall_fn_t call = NULL;
+    struct vcpu *curr = current;
 
     BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
 
@@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
         return;
     }
 
-    current->hcall_preempted = false;
+    curr->hcall_preempted = false;
 
     perfc_incra(hypercalls, *nr);
     call = arm_hypercall_table[*nr].fn;
@@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
     HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
 
 #ifndef NDEBUG
-    if ( !current->hcall_preempted )
+    if ( !curr->hcall_preempted )
     {
         /* Deliberately corrupt parameter regs used by this hypercall. */
         switch ( arm_hypercall_table[*nr].nr_args ) {
@@ -1489,8 +1490,21 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
 #endif
 
     /* Ensure the hypercall trap instruction is re-executed. */
-    if ( current->hcall_preempted )
+    if ( curr->hcall_preempted )
         regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
+
+#ifdef CONFIG_IOREQ_SERVER
+    /*
+     * Taking into account the following, do_trap_hypercall() is the best
+     * place to send the invalidation request:
+     * - The only way a guest can modify its P2M on Arm is via a hypercall
+     * - When sending the invalidation request, the vCPU will be blocked
+     *   until all the IOREQ servers have acknowledged the invalidation
+     */
+    if ( unlikely(curr->domain->mapcache_invalidate) &&
+         test_and_clear_bool(curr->domain->mapcache_invalidate) )
+        ioreq_signal_mapcache_invalidate();
+#endif
 }
 
 void arch_hypercall_tasklet_result(struct vcpu *v, long res)
-- 
2.7.4

From nobody Thu Apr 25 00:25:37 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Ian Jackson, Wei Liu, Anthony PERARD, Stefano Stabellini,
 Julien Grall, Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V4 23/24] libxl: Introduce basic virtio-mmio support on Arm
Date: Tue, 12 Jan 2021 23:52:31 +0200
Message-Id: <1610488352-18494-24-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch creates a specific device node in the guest device-tree, with an
allocated MMIO range and SPI interrupt, if the 'virtio' property is present
in the domain config.
Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

[On Arm only]
Tested-by: Wei Chen
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes
---
 tools/libs/light/libxl_arm.c     | 58 ++++++++++++++++++++++++++++++++--
 tools/libs/light/libxl_types.idl |  1 +
 tools/xl/xl_parse.c              |  1 +
 xen/include/public/arch-arm.h    |  5 ++++
 4 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 66e8a06..588ee5a 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -26,8 +26,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq;
+    bool vuart_enabled = false, virtio_enabled = false;
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +39,17 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
+    /*
+     * XXX: Handle virtio properly
+     * A proper solution would be the toolstack to allocate the interrupts
+     * used by each virtio backend and let the backend know which one is used
+     */
+    if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
+        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+        virtio_enabled = true;
+    }
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +69,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }
 
+        /* The same check as for vpl011 */
+        if (virtio_enabled && irq == virtio_irq) {
+            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;
 
@@ -658,6 +675,39 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    /* Placeholder for virtio@ + a 64-bit number + \0 */
+    char buf[24];
+
+    snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base);
+    res = fdt_begin_node(fdt, buf);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, GUEST_VIRTIO_MMIO_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -961,6 +1011,9 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
+        if (libxl_defbool_val(info->arch_arm.virtio))
+            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
@@ -1178,6 +1231,7 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
 {
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
+    libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 0532473..839df86 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -640,6 +640,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
+                               ("virtio", libxl_defbool),
                                ("vuart", libxl_vuart_type),
                               ])),
     # Alternate p2m is not bound to any architecture or guest type, as it is
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 4ebf396..2a3364b 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2581,6 +2581,7 @@ skip_usbdev:
     }
 
     xlu_cfg_get_defbool(config, "dm_restrict", &b_info->dm_restrict, 0);
+    xlu_cfg_get_defbool(config, "virtio", &b_info->arch_arm.virtio, 0);
 
     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c365b1b..be7595f 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -464,6 +464,11 @@ typedef uint64_t xen_callback_t;
 #define PSCI_cpu_on     2
 #define PSCI_migrate    3
 
+/* VirtIO MMIO definitions */
+#define GUEST_VIRTIO_MMIO_BASE  xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE  xen_mk_ullong(0x200)
+#define GUEST_VIRTIO_MMIO_SPI   33
+
 #endif
 
 #ifndef __ASSEMBLY__
-- 
2.7.4
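
[Editor's note: for illustration, a minimal fragment of an xl domain config
exercising the new property (the rest of the config is elided; a matching
backend, e.g. the virtio-disk emulator from the next patch, must be set up
separately).]

    # Ask libxl to put a virtio-mmio node (base 0x02000000, SPI 33)
    # into the guest device-tree.
    virtio = 1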
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Ian Jackson, Wei Liu, Anthony PERARD, Julien Grall, Stefano Stabellini
Subject: [PATCH V4 24/24] [RFC] libxl: Add support for virtio-disk configuration
Date: Tue, 12 Jan 2021 23:52:32 +0200
Message-Id: <1610488352-18494-25-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
From: Oleksandr Tyshchenko

This patch adds basic support for configuring and assisting virtio-disk
backend (emulator) which is intended to run outside of Qemu and could be
run in any domain.

Xenstore was chosen as a communication interface for the emulator running
in a non-toolstack domain to be able to get configuration either by reading
Xenstore directly or by receiving command line parameters (an updated
'xl devd' running in the same domain would read Xenstore beforehand and
call the backend executable with the required arguments).

An example of domain configuration (two disks are assigned to the guest,
the second of which is in readonly mode):

vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

Where the per-disk Xenstore entries are:
- filename and readonly flag (configured via "vdisk" property)
- base and irq (allocated dynamically)

Besides handling the 'visible' params described in the configuration file,
the patch also allocates virtio-mmio specific ones for each device and
writes them into Xenstore. The virtio-mmio params (irq and base) are unique
per guest domain; they are allocated at domain creation time and passed
through to the emulator. Each VirtIO device has at least one pair of these
params.

TODO:
1. An extra "virtio" property could be removed.
2. Update documentation.

Signed-off-by: Oleksandr Tyshchenko
[On Arm only]
Tested-by: Wei Chen

---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - rebase according to the new argument for DEFINE_DEVICE_TYPE_STRUCT

Please note, there is a real concern about VirtIO interrupt allocation.

[Just copy here what Stefano said in the RFC thread]

So, if we end up allocating let's say 6 virtio interrupts for a domain,
the chance of a clash with a physical interrupt of a passthrough device is
real.

I am not entirely sure how to solve it, but these are a few ideas:
- choosing virtio interrupts that are less likely to conflict (maybe > 1000)
- make the virtio irq (optionally) configurable so that a user could
  override the default irq and specify one that doesn't conflict
- implementing support for virq != pirq (even the xl interface doesn't
  allow to specify the virq number for passthrough devices, see "irqs")

Also there is one suggestion from Wei Chen regarding a parameter for the
domain config file which I haven't addressed yet.

[Just copy here what Wei said in the V2 thread]

Can we keep using the same 'disk' parameter for virtio-disk, but add an
option like "model=virtio-disk"?
For example:
disk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3,model=virtio-disk' ]
Just like what Xen has done for x86 virtio-net.
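To make the Xenstore contract above concrete, a standalone backend could
fetch the per-disk params along these lines. This is a minimal sketch only:
the "base_path" layout is hypothetical (the real paths come from libxl's
generic device machinery), and error handling is mostly elided:

#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

/* Read one per-disk key; caller frees the returned buffer. */
static char *read_disk_param(struct xs_handle *xsh, const char *base_path,
                             unsigned int idx, const char *key)
{
    char path[256];
    unsigned int len;

    snprintf(path, sizeof(path), "%s/%u/%s", base_path, idx, key);
    return xs_read(xsh, XBT_NULL, path, &len);
}

int main(void)
{
    /* Hypothetical frontend path for devid 0 of domain 1 */
    const char *base_path = "/local/domain/1/device/virtio_disk/0";
    struct xs_handle *xsh = xs_open(0);
    char *filename, *base, *irq;

    if (!xsh)
        return 1;

    filename = read_disk_param(xsh, base_path, 0, "filename");
    base     = read_disk_param(xsh, base_path, 0, "base");
    irq      = read_disk_param(xsh, base_path, 0, "irq");

    if (filename && base && irq)
        printf("disk 0: %s @ base %s, irq %s\n", filename, base, irq);

    free(filename); free(base); free(irq);
    xs_close(xsh);
    return 0;
}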
---
 tools/libs/light/Makefile                 |   1 +
 tools/libs/light/libxl_arm.c              |  56 ++++++++++++---
 tools/libs/light/libxl_create.c           |   1 +
 tools/libs/light/libxl_internal.h         |   1 +
 tools/libs/light/libxl_types.idl          |  15 ++++
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_virtio_disk.c      | 109 ++++++++++++++++++++++++++++
 tools/xl/Makefile                         |   2 +-
 tools/xl/xl.h                             |   3 +
 tools/xl/xl_cmdtable.c                    |  15 ++++
 tools/xl/xl_parse.c                       | 115 ++++++++++++++++++++++++++++++
 tools/xl/xl_virtio_disk.c                 |  46 ++++++++++++
 12 files changed, 354 insertions(+), 11 deletions(-)
 create mode 100644 tools/libs/light/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 68f6fa3..ccc91b9 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -115,6 +115,7 @@ SRCS-y += libxl_genid.c
 SRCS-y += _libxl_types.c
 SRCS-y += libxl_flask.c
 SRCS-y += _libxl_types_internal.c
+SRCS-y += libxl_virtio_disk.c
 
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 588ee5a..9eb3022 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,12 @@
 #include
 #include
 
+#ifndef container_of
+#define container_of(ptr, type, member) ({          \
+    typeof( ((type *)0)->member ) *__mptr = (ptr);  \
+    (type *)( (char *)__mptr - offsetof(type,member) );})
+#endif
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -39,14 +45,32 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
-    /*
-     * XXX: Handle virtio properly
-     * A proper solution would be for the toolstack to allocate the interrupts
-     * used by each virtio backend and let the backend know which one is used.
-     */
     if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
-        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        uint64_t virtio_base;
+        libxl_device_virtio_disk *virtio_disk;
+
+        virtio_base = GUEST_VIRTIO_MMIO_BASE;
         virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+
+        if (!d_config->num_virtio_disks) {
+            LOG(ERROR, "Virtio is enabled, but no Virtio devices present\n");
+            return ERROR_FAIL;
+        }
+        virtio_disk = &d_config->virtio_disks[0];
+
+        for (i = 0; i < virtio_disk->num_disks; i++) {
+            virtio_disk->disks[i].base = virtio_base;
+            virtio_disk->disks[i].irq = virtio_irq;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params: IRQ %u BASE 0x%"PRIx64,
+                virtio_irq, virtio_base);
+
+            virtio_irq ++;
+            virtio_base += GUEST_VIRTIO_MMIO_SIZE;
+        }
+        virtio_irq --;
+
+        nr_spis += (virtio_irq - 32) + 1;
         virtio_enabled = true;
     }
 
@@ -70,8 +94,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         }
 
         /* The same check as for vpl011 */
-        if (virtio_enabled && irq == virtio_irq) {
-            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+        if (virtio_enabled &&
+            (irq >= GUEST_VIRTIO_MMIO_SPI && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio IRQ range\n", irq);
             return ERROR_FAIL;
         }
 
@@ -1011,8 +1036,19 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
-        if (libxl_defbool_val(info->arch_arm.virtio))
-            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+        if (libxl_defbool_val(info->arch_arm.virtio)) {
+            libxl_domain_config *d_config =
+                container_of(info, libxl_domain_config, b_info);
+            libxl_device_virtio_disk *virtio_disk = &d_config->virtio_disks[0];
+            unsigned int i;
+
+            for (i = 0; i < virtio_disk->num_disks; i++) {
+                uint64_t base = virtio_disk->disks[i].base;
+                uint32_t irq = virtio_disk->disks[i].irq;
+
+                FDT( make_virtio_mmio_node(gc, fdt, base, irq) );
+            }
+        }
 
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 86f4a83..1734fcd 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1821,6 +1821,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
+    &libxl__virtio_disk_devtype,
     NULL
 };
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index c79523b..5edef85 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3999,6 +3999,7 @@ extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
 extern const libxl__device_type libxl__vsnd_devtype;
+extern const libxl__device_type libxl__virtio_disk_devtype;
 
 extern const libxl__device_type *device_type_tbl[];
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 839df86..2c40bc2 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -936,6 +936,20 @@ libxl_device_vsnd = Struct("device_vsnd", [
     ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms"))
     ])
 
+libxl_virtio_disk_param = Struct("virtio_disk_param", [
+    ("filename", string),
+    ("readonly", bool),
+    ("irq", uint32),
+    ("base", uint64),
+    ])
+
+libxl_device_virtio_disk = Struct("device_virtio_disk", [
+    ("backend_domid", libxl_domid),
+    ("backend_domname", string),
+    ("devid", libxl_devid),
+    ("disks", Array(libxl_virtio_disk_param, "num_disks")),
+    ])
+
 libxl_domain_config = Struct("domain_config", [
     ("c_info", libxl_domain_create_info),
     ("b_info", libxl_domain_build_info),
@@ -952,6 +966,7 @@ libxl_domain_config = Struct("domain_config", [
     ("pvcallsifs", Array(libxl_device_pvcallsif, "num_pvcallsifs")),
     ("vdispls", Array(libxl_device_vdispl, "num_vdispls")),
     ("vsnds", Array(libxl_device_vsnd, "num_vsnds")),
+    ("virtio_disks", Array(libxl_device_virtio_disk, "num_virtio_disks")),
     # a channel manifests as a console with a name,
     # see docs/misc/channels.txt
     ("channels", Array(libxl_device_channel, "num_channels")),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/tools/libs/light/libxl_virtio_disk.c b/tools/libs/light/libxl_virtio_disk.c
new file mode 100644
index 0000000..be769ad
--- /dev/null
+++ b/tools/libs/light/libxl_virtio_disk.c
@@ -0,0 +1,109 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_internal.h"
+
+static int libxl__device_virtio_disk_setdefault(libxl__gc *gc, uint32_t domid,
+                                                libxl_device_virtio_disk *virtio_disk,
+                                                bool hotplug)
+{
+    return libxl__resolve_domid(gc, virtio_disk->backend_domname,
+                                &virtio_disk->backend_domid);
+}
+
+static int libxl__virtio_disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
+                                            libxl_devid devid,
+                                            libxl_device_virtio_disk *virtio_disk)
+{
+    const char *be_path;
+    int rc;
+
+    virtio_disk->devid = devid;
+    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
+                                  GCSPRINTF("%s/backend", libxl_path),
+                                  &be_path);
+    if (rc) return rc;
+
+    rc = libxl__backendpath_parse_domid(gc, be_path, &virtio_disk->backend_domid);
+    if (rc) return rc;
+
+    return 0;
+}
+
+static void libxl__update_config_virtio_disk(libxl__gc *gc,
+                                             libxl_device_virtio_disk *dst,
+                                             libxl_device_virtio_disk *src)
+{
+    dst->devid = src->devid;
+}
+
+static int libxl_device_virtio_disk_compare(libxl_device_virtio_disk *d1,
+                                            libxl_device_virtio_disk *d2)
+{
+    return COMPARE_DEVID(d1, d2);
+}
+
+static void libxl__device_virtio_disk_add(libxl__egc *egc, uint32_t domid,
+                                          libxl_device_virtio_disk *virtio_disk,
+                                          libxl__ao_device *aodev)
+{
+    libxl__device_add_async(egc, domid, &libxl__virtio_disk_devtype, virtio_disk, aodev);
+}
+
+static int libxl__set_xenstore_virtio_disk(libxl__gc *gc, uint32_t domid,
+                                           libxl_device_virtio_disk *virtio_disk,
+                                           flexarray_t *back, flexarray_t *front,
+                                           flexarray_t *ro_front)
+{
+    int rc;
+    unsigned int i;
+
+    for (i = 0; i < virtio_disk->num_disks; i++) {
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/filename", i),
+                                   GCSPRINTF("%s", virtio_disk->disks[i].filename));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/readonly", i),
+                                   GCSPRINTF("%d", virtio_disk->disks[i].readonly));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/base", i),
+                                   GCSPRINTF("%lu", virtio_disk->disks[i].base));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/irq", i),
+                                   GCSPRINTF("%u", virtio_disk->disks[i].irq));
+        if (rc) return rc;
+    }
+
+    return 0;
+}
+
+static LIBXL_DEFINE_UPDATE_DEVID(virtio_disk)
+static LIBXL_DEFINE_DEVICE_FROM_TYPE(virtio_disk)
+static LIBXL_DEFINE_DEVICES_ADD(virtio_disk)
+
+DEFINE_DEVICE_TYPE_STRUCT(virtio_disk, VIRTIO_DISK, virtio_disks,
+    .update_config = (device_update_config_fn_t) libxl__update_config_virtio_disk,
+    .from_xenstore = (device_from_xenstore_fn_t) libxl__virtio_disk_from_xenstore,
+    .set_xenstore_config = (device_set_xenstore_config_fn_t) libxl__set_xenstore_virtio_disk
+);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index bdf67c8..9d8f2aa 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -23,7 +23,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o
 XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o
 XL_OBJS += xl_info.o xl_console.o xl_misc.o
 XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o
-XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o
+XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o xl_virtio_disk.o
 
 $(XL_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
 $(XL_OBJS): CFLAGS += $(CFLAGS_XL)
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 06569c6..3d26f19 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -178,6 +178,9 @@ int main_vsnddetach(int argc, char **argv);
 int main_vkbattach(int argc, char **argv);
 int main_vkblist(int argc, char **argv);
 int main_vkbdetach(int argc, char **argv);
+int main_virtio_diskattach(int argc, char **argv);
+int main_virtio_disklist(int argc, char **argv);
+int main_virtio_diskdetach(int argc, char **argv);
 int main_usbctrl_attach(int argc, char **argv);
 int main_usbctrl_detach(int argc, char **argv);
 int main_usbdev_attach(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 6ab5e47..696b190 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -435,6 +435,21 @@ struct cmd_spec cmd_table[] = {
       "Destroy a domain's virtual sound device",
       " ",
     },
+    { "virtio-disk-attach",
+      &main_virtio_diskattach, 1, 1,
+      "Create a new virtio block device",
+      " TBD\n"
+    },
+    { "virtio-disk-list",
+      &main_virtio_disklist, 0, 0,
+      "List virtio block devices for a domain",
+      "",
+    },
+    { "virtio-disk-detach",
+      &main_virtio_diskdetach, 0, 1,
+      "Destroy a domain's virtio block device",
+      " ",
+    },
     { "uptime",
       &main_uptime, 0, 0,
       "Print uptime for all/some domains",
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 2a3364b..054a0c9 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1204,6 +1204,120 @@ out:
     if (rc) exit(EXIT_FAILURE);
 }
 
+#define MAX_VIRTIO_DISKS 4
+
+static int parse_virtio_disk_config(libxl_device_virtio_disk *virtio_disk, char *token)
+{
+    char *oparg;
+    libxl_string_list disks = NULL;
+    int i, rc;
+
+    if (MATCH_OPTION("backend", token, oparg)) {
+        virtio_disk->backend_domname = strdup(oparg);
+    } else if (MATCH_OPTION("disks", token, oparg)) {
+        split_string_into_string_list(oparg, ";", &disks);
+
+        virtio_disk->num_disks = libxl_string_list_length(&disks);
+        if (virtio_disk->num_disks > MAX_VIRTIO_DISKS) {
+            fprintf(stderr, "vdisk: currently only %d disks are supported",
+                    MAX_VIRTIO_DISKS);
+            return 1;
+        }
+        virtio_disk->disks = xcalloc(virtio_disk->num_disks,
+                                     sizeof(*virtio_disk->disks));
+
+        for(i = 0; i < virtio_disk->num_disks; i++) {
+            char *disk_opt;
+
+            rc = split_string_into_pair(disks[i], ":", &disk_opt,
+                                        &virtio_disk->disks[i].filename);
+            if (rc) {
+                fprintf(stderr, "vdisk: failed to split \"%s\" into pair\n",
+                        disks[i]);
+                goto out;
+            }
+
+            if (!strcmp(disk_opt, "ro"))
+                virtio_disk->disks[i].readonly = 1;
+            else if (!strcmp(disk_opt, "rw"))
+                virtio_disk->disks[i].readonly = 0;
+            else {
+                fprintf(stderr, "vdisk: failed to parse \"%s\" disk option\n",
+                        disk_opt);
+                rc = 1;
+            }
+            free(disk_opt);
+
+            if (rc) goto out;
+        }
+    } else {
+        fprintf(stderr, "Unknown string \"%s\" in vdisk spec\n", token);
+        rc = 1; goto out;
+    }
+
+    rc = 0;
+
+out:
+    libxl_string_list_dispose(&disks);
+    return rc;
+}
+
+static void parse_virtio_disk_list(const XLU_Config *config,
+                                   libxl_domain_config *d_config)
+{
+    XLU_ConfigList *virtio_disks;
+    const char *item;
+    char *buf = NULL;
+    int rc;
+
+    if (!xlu_cfg_get_list (config, "vdisk", &virtio_disks, 0, 0)) {
+        libxl_domain_build_info *b_info = &d_config->b_info;
+        int entry = 0;
+
+        /* XXX Remove an extra property */
+        libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);
+        if (!libxl_defbool_val(b_info->arch_arm.virtio)) {
+            fprintf(stderr, "Virtio device requires Virtio property to be set\n");
+            exit(EXIT_FAILURE);
+        }
+
+        while ((item = xlu_cfg_get_listitem(virtio_disks, entry)) != NULL) {
+            libxl_device_virtio_disk *virtio_disk;
+            char *p;
+
+            virtio_disk = ARRAY_EXTEND_INIT(d_config->virtio_disks,
+                                            d_config->num_virtio_disks,
+                                            libxl_device_virtio_disk_init);
+
+            buf = strdup(item);
+
+            p = strtok (buf, ",");
+            while (p != NULL)
+            {
+                while (*p == ' ') p++;
+
+                rc = parse_virtio_disk_config(virtio_disk, p);
+                if (rc) goto out;
+
+                p = strtok (NULL, ",");
+            }
+
+            entry++;
+
+            if (virtio_disk->num_disks == 0) {
+                fprintf(stderr, "At least one virtio disk should be specified\n");
+                rc = 1; goto out;
+            }
+        }
+    }
+
+    rc = 0;
+
+out:
+    free(buf);
+    if (rc) exit(EXIT_FAILURE);
+}
+
 void parse_config_data(const char *config_source,
                        const char *config_data,
                        int config_len,
@@ -2734,6 +2848,7 @@ skip_usbdev:
     }
 
     parse_vkb_list(config, d_config);
+    parse_virtio_disk_list(config, d_config);
 
     xlu_cfg_get_defbool(config, "xend_suspend_evtchn_compat",
                         &c_info->xend_suspend_evtchn_compat, 0);
diff --git a/tools/xl/xl_virtio_disk.c b/tools/xl/xl_virtio_disk.c
new file mode 100644
index 0000000..808a7da
--- /dev/null
+++ b/tools/xl/xl_virtio_disk.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include
+
+#include
+#include
+#include
+
+#include "xl.h"
+#include "xl_utils.h"
+#include "xl_parse.h"
+
+int main_virtio_diskattach(int argc, char **argv)
+{
+    return 0;
+}
+
+int main_virtio_disklist(int argc, char **argv)
+{
+    return 0;
+}
+
+int main_virtio_diskdetach(int argc, char **argv)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4
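Putting the two patches together, a minimal guest configuration exercising the
new code might look as follows. This is a sketch only, assuming the extra
"virtio" property is still required (see the TODO above) and using DomD as a
placeholder backend domain name:

    virtio = 1
    vdisk  = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

With such a config, libxl would allocate one MMIO region and one SPI per disk
starting at GUEST_VIRTIO_MMIO_BASE/GUEST_VIRTIO_MMIO_SPI, generate a
"virtio,mmio" device-tree node for each, and write the filename/readonly/base/irq
entries into Xenstore for the backend to consume.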