From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Wei Liu, Roger Pau Monné, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V1 01/16] x86/ioreq: Prepare IOREQ feature for making it common
Date: Thu, 10 Sep 2020 23:21:55 +0300
Message-Id: <1599769330-17656-2-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch prepares
the IOREQ support before moving it to the common code. This way the code
movement will be an almost verbatim copy. This support is going to be
used on Arm to be able to run device emulators outside of the Xen
hypervisor.
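The split introduced by this patch follows a common/arch dispatch pattern: the common completion handler deals with the generic cases and falls through to an arch hook for anything it does not recognise. A minimal standalone sketch of that shape (simplified, hypothetical names; not the actual Xen types or functions):

```c
#include <stdbool.h>

/* Simplified stand-in for enum hvm_io_completion (hypothetical values). */
enum io_completion { IO_NONE, IO_MMIO, IO_PIO, IO_REALMODE };

/* Arch-specific hook: only the arch layer knows about, e.g., realmode. */
static bool arch_handle_io_completion(enum io_completion c)
{
    /* Stubbed: a real implementation would run the realmode emulator. */
    return c == IO_REALMODE;
}

/* Common dispatcher: generic cases handled here, the rest delegated. */
static bool handle_io_completion(enum io_completion c)
{
    switch ( c )
    {
    case IO_NONE:
    case IO_MMIO:
    case IO_PIO:
        return true;  /* common handling (stubbed) */
    default:
        return arch_handle_io_completion(c);
    }
}
```

With this shape the dispatcher can later move to common code without #ifdefs; each architecture just supplies its own arch hook.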
Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
   - fold the check of p->type into hvm_get_ioreq_server_range_type()
     and make it return success/failure
   - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
     in arch/x86/hvm/ioreq.c
   - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()
---
 xen/arch/x86/hvm/ioreq.c        | 117 ++++++++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/ioreq.h |  16 ++++++
 2 files changed, 93 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df..d912655 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -170,6 +170,29 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
+bool arch_handle_hvm_io_completion(enum hvm_io_completion io_completion)
+{
+    switch ( io_completion )
+    {
+    case HVMIO_realmode_completion:
+    {
+        struct hvm_emulate_ctxt ctxt;
+
+        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+        vmx_realmode_emulate_one(&ctxt);
+        hvm_emulate_writeback(&ctxt);
+
+        break;
+    }
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+
+    return true;
+}
+
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
@@ -209,19 +232,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return handle_pio(vio->io_req.addr, vio->io_req.size,
                           vio->io_req.dir);
 
-    case HVMIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
     default:
-        ASSERT_UNREACHABLE();
-        break;
+        return arch_handle_hvm_io_completion(io_completion);
     }
 
     return true;
@@ -836,6 +848,12 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     return rc;
 }
 
+/* Called when target domain is paused */
+int arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
+{
+    return p2m_set_ioreq_server(s->target, 0, s);
+}
+
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
     struct hvm_ioreq_server *s;
@@ -855,7 +873,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     domain_pause(d);
 
-    p2m_set_ioreq_server(d, 0, s);
+    arch_hvm_destroy_ioreq_server(s);
 
     hvm_ioreq_server_disable(s);
 
@@ -1215,8 +1233,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
-        return;
+    arch_hvm_ioreq_destroy(d);
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
@@ -1239,19 +1256,15 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+int hvm_get_ioreq_server_range_type(struct domain *d,
+                                    ioreq_t *p,
+                                    uint8_t *type,
+                                    uint64_t *addr)
 {
-    struct hvm_ioreq_server *s;
-    uint32_t cf8;
-    uint8_t type;
-    uint64_t addr;
-    unsigned int id;
+    uint32_t cf8 = d->arch.hvm.pci_cf8;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
-        return NULL;
-
-    cf8 = d->arch.hvm.pci_cf8;
+        return -EINVAL;
 
     if ( p->type == IOREQ_TYPE_PIO &&
          (p->addr & ~3) == 0xcfc &&
@@ -1264,8 +1277,8 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
 
         /* PCI config data cycle */
-        type = XEN_DMOP_IO_RANGE_PCI;
-        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        *type = XEN_DMOP_IO_RANGE_PCI;
+        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
         if ( CF8_ADDR_HI(cf8) &&
              d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
@@ -1277,16 +1290,30 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
 
             if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
                  (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
-                addr |= CF8_ADDR_HI(cf8);
+                *addr |= CF8_ADDR_HI(cf8);
         }
     }
     else
     {
-        type = (p->type == IOREQ_TYPE_PIO) ?
-                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
-        addr = p->addr;
+        *type = (p->type == IOREQ_TYPE_PIO) ?
+                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+        *addr = p->addr;
     }
 
+    return 0;
+}
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p)
+{
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+    unsigned int id;
+
+    if ( hvm_get_ioreq_server_range_type(d, p, &type, &addr) )
+        return NULL;
+
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;
@@ -1351,7 +1378,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     pg = iorp->va;
 
     if ( !pg )
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_IO_UNHANDLED;
 
     /*
      * Return 0 for the cases we can't deal with:
@@ -1381,7 +1408,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
         break;
     default:
         gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_IO_UNHANDLED;
     }
 
     spin_lock(&s->bufioreq_lock);
@@ -1391,7 +1418,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     {
         /* The queue is full: send the iopacket through the normal path. */
         spin_unlock(&s->bufioreq_lock);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_IO_UNHANDLED;
     }
 
     pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
@@ -1422,7 +1449,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
     spin_unlock(&s->bufioreq_lock);
 
-    return X86EMUL_OKAY;
+    return IOREQ_IO_HANDLED;
 }
 
 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
@@ -1438,7 +1465,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
         return hvm_send_buffered_ioreq(s, proto_p);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return X86EMUL_RETRY;
+        return IOREQ_IO_RETRY;
 
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
@@ -1478,11 +1505,11 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
             notify_via_xen_event_channel(d, port);
 
             sv->pending = true;
-            return X86EMUL_RETRY;
+            return IOREQ_IO_RETRY;
         }
     }
 
-    return X86EMUL_UNHANDLEABLE;
+    return IOREQ_IO_UNHANDLED;
 }
 
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
@@ -1496,7 +1523,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_IO_UNHANDLED )
             failed++;
     }
 
@@ -1515,11 +1542,21 @@ static int hvm_access_cf8(
     return X86EMUL_UNHANDLEABLE;
 }
 
+void arch_hvm_ioreq_init(struct domain *d)
+{
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+}
+
+void arch_hvm_ioreq_destroy(struct domain *d)
+{
+
+}
+
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);
 
-    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+    arch_hvm_ioreq_init(d);
 }
 
 /*
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e9..151b92b 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -55,6 +55,22 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+int arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s);
+
+bool arch_handle_hvm_io_completion(enum hvm_io_completion io_completion);
+
+int hvm_get_ioreq_server_range_type(struct domain *d,
+                                    ioreq_t *p,
+                                    uint8_t *type,
+                                    uint64_t *addr);
+
+void arch_hvm_ioreq_init(struct domain *d);
+void arch_hvm_ioreq_destroy(struct domain *d);
+
+#define IOREQ_IO_HANDLED     X86EMUL_OKAY
+#define IOREQ_IO_UNHANDLED   X86EMUL_UNHANDLEABLE
+#define IOREQ_IO_RETRY       X86EMUL_RETRY
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.7.4
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Roger Pau Monné, Paul Durrant, Jun Nakajima, Kevin Tian, Tim Deegan, Julien Grall
Subject: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
Date: Thu, 10 Sep 2020 23:21:56 +0300
Message-Id: <1599769330-17656-3-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch moves the previously prepared IOREQ
support to the common code. The code movement is an almost verbatim copy,
with the headers re-ordered alphabetically. This support is going to be
used on Arm to be able to run device emulators outside of the Xen
hypervisor.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was split into three patches:
     - x86/ioreq: Prepare IOREQ feature for making it common
     - xen/ioreq: Make x86's IOREQ feature common
     - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
   - update MAINTAINERS file
   - do not use a separate subdir for the IOREQ stuff, move it to:
     - xen/common/ioreq.c
     - xen/include/xen/ioreq.h
   - update x86's files to include xen/ioreq.h
   - remove unneeded headers in arch/x86/hvm/ioreq.c
   - re-order the headers alphabetically in common/ioreq.c
   - update common/ioreq.c according to the newly introduced arch functions:
     arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()
---
 MAINTAINERS                     |    8 +-
 xen/arch/x86/Kconfig            |    1 +
 xen/arch/x86/hvm/dm.c           |    2 +-
 xen/arch/x86/hvm/emulate.c      |    2 +-
 xen/arch/x86/hvm/hvm.c          |    2 +-
 xen/arch/x86/hvm/io.c           |    2 +-
 xen/arch/x86/hvm/ioreq.c        | 1425 +-------------------------------------
 xen/arch/x86/hvm/stdvga.c       |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c     |    3 +-
 xen/arch/x86/mm.c               |    2 +-
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/common/Kconfig              |    3 +
 xen/common/Makefile             |    1 +
 xen/common/ioreq.c              | 1410 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |   35 +-
 xen/include/xen/ioreq.h         |   82 +++
 16 files changed, 1533 insertions(+), 1449 deletions(-)
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/xen/ioreq.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 33fe513..72ba472 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -333,6 +333,13 @@ X:	xen/drivers/passthrough/vtd/
 X:	xen/drivers/passthrough/device_tree.c
 F:	xen/include/xen/iommu.h
 
+I/O EMULATION (IOREQ)
+M:	Paul Durrant
+S:	Supported
+F:	xen/common/ioreq.c
+F:	xen/include/xen/ioreq.h
+F:	xen/include/public/hvm/ioreq.h
+
 KCONFIG
 M:	Doug Goldstein
 S:	Supported
@@ -549,7 +556,6 @@ F:	xen/arch/x86/hvm/ioreq.c
 F:	xen/include/asm-x86/hvm/emulate.h
 F:	xen/include/asm-x86/hvm/io.h
 F:	xen/include/asm-x86/hvm/ioreq.h
-F:	xen/include/public/hvm/ioreq.h
 
 X86 MEMORY MANAGEMENT
 M:	Jan Beulich
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index a636a4b..f5a9f87 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config PV_LINEAR_PT
 
 config HVM
 	def_bool !PV_SHIM_EXCLUSIVE
+	select IOREQ_SERVER
 	prompt "HVM support"
 	---help---
 	  Interfaces to support HVM domains. HVM domains require hardware
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 9930d68..5ce484a 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -17,12 +17,12 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
 #include
 #include
-#include
 #include
 
 #include
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 8b4e73a..39bdf8d 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -10,6 +10,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -20,7 +21,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a9d1685..498e0e0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -20,6 +20,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -64,7 +65,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 724ab44..14f8c89 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -19,6 +19,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -35,7 +36,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d912655..102b758 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -16,1086 +16,39 @@
  * this program; If not, see .
  */
 
-#include
-#include
-#include
-#include
-#include
-#include
-#include
 #include
-#include
-#include
-#include
+#include
 
-#include
-#include
-#include
-#include
-
-#include
-#include
-
-static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
-{
-    ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
-
-    d->arch.hvm.ioreq_server.server[id] = s;
-}
-
-#define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
-
-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
-{
-    if ( id >= MAX_NR_IOREQ_SERVERS )
-        return NULL;
-
-    return GET_IOREQ_SERVER(d, id);
-}
-
-/*
- * Iterate over all possible ioreq servers.
- *
- * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
- *       This is a semantic that previously existed when ioreq servers
- *       were held in a linked list.
- */
-#define FOR_EACH_IOREQ_SERVER(d, id, s) \
-    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
-        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
-            continue; \
-        else
-
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
-{
-    shared_iopage_t *p = s->ioreq.va;
-
-    ASSERT((v == current) || !vcpu_runnable(v));
-    ASSERT(p != NULL);
-
-    return &p->vcpu_ioreq[v->vcpu_id];
-}
-
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
-{
-    struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct hvm_ioreq_vcpu *sv;
-
-        list_for_each_entry ( sv,
-                              &s->ioreq_vcpu_list,
-                              list_entry )
-        {
-            if ( sv->vcpu == v && sv->pending )
-            {
-                if ( srvp )
-                    *srvp = s;
-                return sv;
-            }
-        }
-    }
-
-    return NULL;
-}
-
-bool hvm_io_pending(struct vcpu *v)
-{
-    return get_pending_vcpu(v, NULL);
-}
-
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
-{
-    unsigned int prev_state = STATE_IOREQ_NONE;
-    unsigned int state = p->state;
-    uint64_t data = ~0;
-
-    smp_rmb();
-
-    /*
-     * The only reason we should see this condition be false is when an
-     * emulator dying races with I/O being requested.
-     */
-    while ( likely(state != STATE_IOREQ_NONE) )
-    {
-        if ( unlikely(state < prev_state) )
-        {
-            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
-                     prev_state, state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        switch ( prev_state = state )
-        {
-        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
-            p->state = STATE_IOREQ_NONE;
-            data = p->data;
-            break;
-
-        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
-        case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(sv->ioreq_evtchn,
-                                      ({ state = p->state;
-                                         smp_rmb();
-                                         state != prev_state; }));
-            continue;
-
-        default:
-            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        break;
-    }
-
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
-        p->data = data;
-
-    sv->pending = false;
-
-    return true;
-}
-
-bool arch_handle_hvm_io_completion(enum hvm_io_completion io_completion)
-{
-    switch ( io_completion )
-    {
-    case HVMIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
-
-    default:
-        ASSERT_UNREACHABLE();
-        break;
-    }
-
-    return true;
-}
-
-bool handle_hvm_io_completion(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
-    enum hvm_io_completion io_completion;
-
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
-    sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
-        return false;
-
-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
-        STATE_IORESP_READY : STATE_IOREQ_NONE;
-
-    msix_write_completion(v);
-    vcpu_end_shutdown_deferral(v);
-
-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
-
-    switch ( io_completion )
-    {
-    case HVMIO_no_completion:
-        break;
-
-    case HVMIO_mmio_completion:
-        return handle_mmio();
-
-    case HVMIO_pio_completion:
-        return handle_pio(vio->io_req.addr, vio->io_req.size,
-                          vio->io_req.dir);
-
-    default:
-        return arch_handle_hvm_io_completion(io_completion);
-    }
-
-    return true;
-}
-
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
-
-    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
-    {
-        if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
-            return _gfn(d->arch.hvm.params[i]);
-    }
-
-    return INVALID_GFN;
-}
-
-static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    for ( i = 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ )
-    {
-        if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) )
-            return _gfn(d->arch.hvm.ioreq_gfn.base + i);
-    }
-
-    /*
-     * If we are out of 'normal' GFNs then we may still have a 'legacy'
-     * GFN available.
-     */
-    return hvm_alloc_legacy_ioreq_gfn(s);
-}
-
-static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
-                                      gfn_t gfn)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
-    {
-        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
-             break;
-    }
-    if ( i > HVM_PARAM_BUFIOREQ_PFN )
-        return false;
-
-    set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask);
-    return true;
-}
-
-static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
-{
-    struct domain *d = s->target;
-    unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
-
-    ASSERT(!gfn_eq(gfn, INVALID_GFN));
-
-    if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
-    {
-        ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8);
-        set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
-    }
-}
-
-static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
-
-    destroy_ring_for_helper(&iorp->va, iorp->page);
-    iorp->page = NULL;
-
-    hvm_free_ioreq_gfn(s, iorp->gfn);
-    iorp->gfn = INVALID_GFN;
-}
-
-static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    int rc;
-
-    if ( iorp->page )
-    {
-        /*
-         * If a page has already been allocated (which will happen on
-         * demand if hvm_get_ioreq_server_frame() is called), then
-         * mapping a guest frame is not permitted.
-         */
-        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
-
-    if ( d->is_dying )
-        return -EINVAL;
-
-    iorp->gfn = hvm_alloc_ioreq_gfn(s);
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return -ENOMEM;
-
-    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
-                                 &iorp->va);
-
-    if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
-
-    return rc;
-}
-
-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-
-    if ( iorp->page )
-    {
-        /*
-         * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
-         * allocating a page is not permitted.
-         */
-        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
-
-    page = alloc_domheap_page(s->target, MEMF_no_refcount);
-
-    if ( !page )
-        return -ENOMEM;
-
-    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(s->emulator);
-        return -ENODATA;
-    }
-
-    iorp->va = __map_domain_page_global(page);
-    if ( !iorp->va )
-        goto fail;
-
-    iorp->page = page;
-    clear_page(iorp->va);
-    return 0;
-
- fail:
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-
-    return -ENOMEM;
-}
-
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
-
-    if ( !page )
-        return;
-
-    iorp->page = NULL;
-
-    unmap_domain_page_global(iorp->va);
-    iorp->va = NULL;
-
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-}
-
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
-{
-    const struct hvm_ioreq_server *s;
-    unsigned int id;
-    bool found = false;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
-        {
-            found = true;
-            break;
-        }
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return found;
-}
-
-static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
-
-    if ( guest_physmap_remove_page(d, iorp->gfn,
-                                   page_to_mfn(iorp->page), 0) )
-        domain_crash(d);
-    clear_page(iorp->va);
-}
-
-static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    int rc;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return 0;
-
-    clear_page(iorp->va);
-
-    rc = guest_physmap_add_page(d, iorp->gfn,
-                                page_to_mfn(iorp->page), 0);
-    if ( rc == 0 )
-        paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
-
-    return rc;
-}
-
-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
-{
-    ASSERT(spin_is_locked(&s->lock));
-
-    if ( s->ioreq.va != NULL )
-    {
-        ioreq_t *p = get_ioreq(s, sv->vcpu);
-
-        p->vp_eport = sv->ioreq_evtchn;
-    }
-}
-
-#define HANDLE_BUFIOREQ(s) \
-    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
-
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
-                                     struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-    int rc;
-
-    sv = xzalloc(struct hvm_ioreq_vcpu);
-
-    rc = -ENOMEM;
-    if ( !sv )
-        goto fail1;
-
-    spin_lock(&s->lock);
-
-    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
-                                         s->emulator->domain_id, NULL);
-    if ( rc < 0 )
-        goto fail2;
-
-    sv->ioreq_evtchn = rc;
-
-    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-    {
-        rc = alloc_unbound_xen_event_channel(v->domain, 0,
-                                             s->emulator->domain_id, NULL);
-        if ( rc < 0 )
-            goto fail3;
-
-        s->bufioreq_evtchn = rc;
-    }
-
-    sv->vcpu = v;
-
-    list_add(&sv->list_entry, &s->ioreq_vcpu_list);
-
-    if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
-
-    spin_unlock(&s->lock);
-    return 0;
-
- fail3:
-    free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
- fail2:
-    spin_unlock(&s->lock);
-    xfree(sv);
-
- fail1:
-    return rc;
-}
-
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
-                                         struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-    {
-        if ( sv->vcpu != v )
-            continue;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-        break;
-    }
-
-    spin_unlock(&s->lock);
-}
-
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
-{
-    struct hvm_ioreq_vcpu *sv, *next;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry_safe ( sv,
-                               next,
-                               &s->ioreq_vcpu_list,
-                               list_entry )
-    {
-        struct vcpu *v = sv->vcpu;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-    }
-
-    spin_unlock(&s->lock);
-}
-
-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
-{
-    int rc;
-
-    rc = hvm_map_ioreq_gfn(s, false);
-
-    if ( !rc && HANDLE_BUFIOREQ(s) )
-        rc = hvm_map_ioreq_gfn(s, true);
-
-    if ( rc )
-        hvm_unmap_ioreq_gfn(s, false);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
-{
-    hvm_unmap_ioreq_gfn(s, true);
-    hvm_unmap_ioreq_gfn(s, false);
-}
-
-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
-{
-    int rc;
-
-    rc = hvm_alloc_ioreq_mfn(s, false);
-
-    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
-
-    if ( rc )
-        hvm_free_ioreq_mfn(s, false);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
-{
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
-}
-
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
-{
-    unsigned int i;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-        rangeset_destroy(s->range[i]);
-}
-
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
-                                            ioservid_t id)
-{
-    unsigned int i;
-    int rc;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-    {
-        char *name;
-
-        rc = asprintf(&name, "ioreq_server %d %s", id,
-                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
-                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
-                      (i == XEN_DMOP_IO_RANGE_PCI) ?
"pci" : - ""); - if ( rc ) - goto fail; - - s->range[i] =3D rangeset_new(s->target, name, - RANGESETF_prettyprint_hex); - - xfree(name); - - rc =3D -ENOMEM; - if ( !s->range[i] ) - goto fail; - - rangeset_limit(s->range[i], MAX_NR_IO_RANGES); - } - - return 0; - - fail: - hvm_ioreq_server_free_rangesets(s); - - return rc; -} - -static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) -{ - struct hvm_ioreq_vcpu *sv; - - spin_lock(&s->lock); - - if ( s->enabled ) - goto done; - - hvm_remove_ioreq_gfn(s, false); - hvm_remove_ioreq_gfn(s, true); - - s->enabled =3D true; - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - hvm_update_ioreq_evtchn(s, sv); - - done: - spin_unlock(&s->lock); -} - -static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) -{ - spin_lock(&s->lock); - - if ( !s->enabled ) - goto done; - - hvm_add_ioreq_gfn(s, true); - hvm_add_ioreq_gfn(s, false); - - s->enabled =3D false; - - done: - spin_unlock(&s->lock); -} - -static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, - struct domain *d, int bufioreq_handling, - ioservid_t id) -{ - struct domain *currd =3D current->domain; - struct vcpu *v; - int rc; - - s->target =3D d; - - get_knownalive_domain(currd); - s->emulator =3D currd; - - spin_lock_init(&s->lock); - INIT_LIST_HEAD(&s->ioreq_vcpu_list); - spin_lock_init(&s->bufioreq_lock); - - s->ioreq.gfn =3D INVALID_GFN; - s->bufioreq.gfn =3D INVALID_GFN; - - rc =3D hvm_ioreq_server_alloc_rangesets(s, id); - if ( rc ) - return rc; - - s->bufioreq_handling =3D bufioreq_handling; - - for_each_vcpu ( d, v ) - { - rc =3D hvm_ioreq_server_add_vcpu(s, v); - if ( rc ) - goto fail_add; - } - - return 0; - - fail_add: - hvm_ioreq_server_remove_all_vcpus(s); - hvm_ioreq_server_unmap_pages(s); - - hvm_ioreq_server_free_rangesets(s); - - put_domain(s->emulator); - return rc; -} - -static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) -{ - ASSERT(!s->enabled); - hvm_ioreq_server_remove_all_vcpus(s); - - /* - * NOTE: 
It is safe to call both hvm_ioreq_server_unmap_pages() and
- *       hvm_ioreq_server_free_pages() in that order.
- *       This is because the former will do nothing if the pages
- *       are not mapped, leaving the page to be freed by the latter.
- *       However if the pages are mapped then the former will set
- *       the page_info pointer to NULL, meaning the latter will do
- *       nothing.
- */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-}
-
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int i;
-    int rc;
-
-    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
-        return -EINVAL;
-
-    s = xzalloc(struct hvm_ioreq_server);
-    if ( !s )
-        return -ENOMEM;
-
-    domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
-    {
-        if ( !GET_IOREQ_SERVER(d, i) )
-            break;
-    }
-
-    rc = -ENOSPC;
-    if ( i >= MAX_NR_IOREQ_SERVERS )
-        goto fail;
-
-    /*
-     * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
-     */
-    set_ioreq_server(d, i, s);
-
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
-    if ( rc )
-    {
-        set_ioreq_server(d, i, NULL);
-        goto fail;
-    }
-
-    if ( id )
-        *id = i;
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    return 0;
-
- fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    xfree(s);
-    return rc;
-}
-
-/* Called when target domain is paused */
-int arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
-{
-    return p2m_set_ioreq_server(s->target, 0, s);
-}
-
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    domain_pause(d);
-
-    arch_hvm_destroy_ioreq_server(s);
-
-    hvm_ioreq_server_disable(s);
-
-    /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
-     * set_ioreq_server() since the target domain is paused.
- */ - hvm_ioreq_server_deinit(s); - set_ioreq_server(d, id, NULL); - - domain_unpause(d); - - xfree(s); - - rc =3D 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - if ( ioreq_gfn || bufioreq_gfn ) - { - rc =3D hvm_ioreq_server_map_pages(s); - if ( rc ) - goto out; - } - - if ( ioreq_gfn ) - *ioreq_gfn =3D gfn_x(s->ioreq.gfn); - - if ( HANDLE_BUFIOREQ(s) ) - { - if ( bufioreq_gfn ) - *bufioreq_gfn =3D gfn_x(s->bufioreq.gfn); - - if ( bufioreq_port ) - *bufioreq_port =3D s->bufioreq_evtchn; - } - - rc =3D 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn) -{ - struct hvm_ioreq_server *s; - int rc; - - ASSERT(is_hvm_domain(d)); - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - rc =3D hvm_ioreq_server_alloc_pages(s); - if ( rc ) - goto out; +#include +#include =20 - switch ( idx ) +bool arch_handle_hvm_io_completion(enum hvm_io_completion io_completion) +{ + switch ( io_completion ) { - case XENMEM_resource_ioreq_server_frame_bufioreq: - rc =3D -ENOENT; - if ( !HANDLE_BUFIOREQ(s) ) - goto out; - - *mfn =3D page_to_mfn(s->bufioreq.page); - rc =3D 0; - break; + case HVMIO_realmode_completion: + { + struct hvm_emulate_ctxt ctxt; =20 - case XENMEM_resource_ioreq_server_frame_ioreq(0): - *mfn =3D page_to_mfn(s->ioreq.page); - rc =3D 0; - 
break; + hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs()); + vmx_realmode_emulate_one(&ctxt); + hvm_emulate_writeback(&ctxt); =20 - default: - rc =3D -EINVAL; break; } =20 - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) -{ - struct hvm_ioreq_server *s; - struct rangeset *r; - int rc; - - if ( start > end ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - switch ( type ) - { - case XEN_DMOP_IO_RANGE_PORT: - case XEN_DMOP_IO_RANGE_MEMORY: - case XEN_DMOP_IO_RANGE_PCI: - r =3D s->range[type]; - break; - default: - r =3D NULL; + ASSERT_UNREACHABLE(); break; } =20 - rc =3D -EINVAL; - if ( !r ) - goto out; - - rc =3D -EEXIST; - if ( rangeset_overlaps_range(r, start, end) ) - goto out; - - rc =3D rangeset_add_range(r, start, end); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; + return true; } =20 -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) +/* Called when target domain is paused */ +int arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s) { - struct hvm_ioreq_server *s; - struct rangeset *r; - int rc; - - if ( start > end ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - switch ( type ) - { - case XEN_DMOP_IO_RANGE_PORT: - case XEN_DMOP_IO_RANGE_MEMORY: - case XEN_DMOP_IO_RANGE_PCI: - r =3D s->range[type]; - break; - - default: - r =3D NULL; - break; - } - - rc =3D -EINVAL; - if ( !r ) - goto out; - - rc =3D -ENOENT; - if ( 
!rangeset_contains_range(r, start, end) ) - goto out; - - rc =3D rangeset_remove_range(r, start, end); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; + return p2m_set_ioreq_server(s->target, 0, s); } =20 /* @@ -1146,116 +99,6 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d= , ioservid_t id, return rc; } =20 -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s =3D get_ioreq_server(d, id); - - rc =3D -ENOENT; - if ( !s ) - goto out; - - rc =3D -EPERM; - if ( s->emulator !=3D current->domain ) - goto out; - - domain_pause(d); - - if ( enabled ) - hvm_ioreq_server_enable(s); - else - hvm_ioreq_server_disable(s); - - domain_unpause(d); - - rc =3D 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - return rc; -} - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) -{ - struct hvm_ioreq_server *s; - unsigned int id; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - rc =3D hvm_ioreq_server_add_vcpu(s, v); - if ( rc ) - goto fail; - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return 0; - - fail: - while ( ++id !=3D MAX_NR_IOREQ_SERVERS ) - { - s =3D GET_IOREQ_SERVER(d, id); - - if ( !s ) - continue; - - hvm_ioreq_server_remove_vcpu(s, v); - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) -{ - struct hvm_ioreq_server *s; - unsigned int id; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - hvm_ioreq_server_remove_vcpu(s, v); - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); -} - -void hvm_destroy_all_ioreq_servers(struct domain *d) -{ - struct hvm_ioreq_server *s; - unsigned int id; - - arch_hvm_ioreq_destroy(d); - - 
spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - /* No need to domain_pause() as the domain is being torn down */ - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - hvm_ioreq_server_disable(s); - - /* - * It is safe to call hvm_ioreq_server_deinit() prior to - * set_ioreq_server() since the target domain is being destroyed. - */ - hvm_ioreq_server_deinit(s); - set_ioreq_server(d, id, NULL); - - xfree(s); - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); -} - int hvm_get_ioreq_server_range_type(struct domain *d, ioreq_t *p, uint8_t *type, @@ -1303,233 +146,6 @@ int hvm_get_ioreq_server_range_type(struct domain *d, return 0; } =20 -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) -{ - struct hvm_ioreq_server *s; - uint8_t type; - uint64_t addr; - unsigned int id; - - if ( hvm_get_ioreq_server_range_type(d, p, &type, &addr) ) - return NULL; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - struct rangeset *r; - - if ( !s->enabled ) - continue; - - r =3D s->range[type]; - - switch ( type ) - { - unsigned long start, end; - - case XEN_DMOP_IO_RANGE_PORT: - start =3D addr; - end =3D start + p->size - 1; - if ( rangeset_contains_range(r, start, end) ) - return s; - - break; - - case XEN_DMOP_IO_RANGE_MEMORY: - start =3D hvm_mmio_first_byte(p); - end =3D hvm_mmio_last_byte(p); - - if ( rangeset_contains_range(r, start, end) ) - return s; - - break; - - case XEN_DMOP_IO_RANGE_PCI: - if ( rangeset_contains_singleton(r, addr >> 32) ) - { - p->type =3D IOREQ_TYPE_PCI_CONFIG; - p->addr =3D addr; - return s; - } - - break; - } - } - - return NULL; -} - -static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) -{ - struct domain *d =3D current->domain; - struct hvm_ioreq_page *iorp; - buffered_iopage_t *pg; - buf_ioreq_t bp =3D { .data =3D p->data, - .addr =3D p->addr, - .type =3D p->type, - .dir =3D p->dir }; - /* Timeoffset sends 64b data, but no address. Use two consecutive slot= s. 
*/ - int qw =3D 0; - - /* Ensure buffered_iopage fits in a page */ - BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE); - - iorp =3D &s->bufioreq; - pg =3D iorp->va; - - if ( !pg ) - return IOREQ_IO_UNHANDLED; - - /* - * Return 0 for the cases we can't deal with: - * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB - * - we cannot buffer accesses to guest memory buffers, as the guest - * may expect the memory buffer to be synchronously accessed - * - the count field is usually used with data_is_ptr and since we do= n't - * support data_is_ptr we do not waste space for the count field ei= ther - */ - if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count !=3D 1) ) - return 0; - - switch ( p->size ) - { - case 1: - bp.size =3D 0; - break; - case 2: - bp.size =3D 1; - break; - case 4: - bp.size =3D 2; - break; - case 8: - bp.size =3D 3; - qw =3D 1; - break; - default: - gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); - return IOREQ_IO_UNHANDLED; - } - - spin_lock(&s->bufioreq_lock); - - if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=3D - (IOREQ_BUFFER_SLOT_NUM - qw) ) - { - /* The queue is full: send the iopacket through the normal path. */ - spin_unlock(&s->bufioreq_lock); - return IOREQ_IO_UNHANDLED; - } - - pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] =3D bp; - - if ( qw ) - { - bp.data =3D p->data >> 32; - pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = =3D bp; - } - - /* Make the ioreq_t visible /before/ write_pointer. */ - smp_wmb(); - pg->ptrs.write_pointer +=3D qw ? 2 : 1; - - /* Canonicalize read/write pointers to prevent their overflow. 
*/ - while ( (s->bufioreq_handling =3D=3D HVM_IOREQSRV_BUFIOREQ_ATOMIC) && - qw++ < IOREQ_BUFFER_SLOT_NUM && - pg->ptrs.read_pointer >=3D IOREQ_BUFFER_SLOT_NUM ) - { - union bufioreq_pointers old =3D pg->ptrs, new; - unsigned int n =3D old.read_pointer / IOREQ_BUFFER_SLOT_NUM; - - new.read_pointer =3D old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; - new.write_pointer =3D old.write_pointer - n * IOREQ_BUFFER_SLOT_NU= M; - cmpxchg(&pg->ptrs.full, old.full, new.full); - } - - notify_via_xen_event_channel(d, s->bufioreq_evtchn); - spin_unlock(&s->bufioreq_lock); - - return IOREQ_IO_HANDLED; -} - -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered) -{ - struct vcpu *curr =3D current; - struct domain *d =3D curr->domain; - struct hvm_ioreq_vcpu *sv; - - ASSERT(s); - - if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); - - if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) - return IOREQ_IO_RETRY; - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu =3D=3D curr ) - { - evtchn_port_t port =3D sv->ioreq_evtchn; - ioreq_t *p =3D get_ioreq(s, curr); - - if ( unlikely(p->state !=3D STATE_IOREQ_NONE) ) - { - gprintk(XENLOG_ERR, "device model set bad IO state %d\n", - p->state); - break; - } - - if ( unlikely(p->vp_eport !=3D port) ) - { - gprintk(XENLOG_ERR, "device model set bad event channel %d= \n", - p->vp_eport); - break; - } - - proto_p->state =3D STATE_IOREQ_NONE; - proto_p->vp_eport =3D port; - *p =3D *proto_p; - - prepare_wait_on_xen_event_channel(port); - - /* - * Following happens /after/ blocking and setting up ioreq - * contents. prepare_wait_on_xen_event_channel() is an implicit - * barrier. 
- */ - p->state =3D STATE_IOREQ_READY; - notify_via_xen_event_channel(d, port); - - sv->pending =3D true; - return IOREQ_IO_RETRY; - } - } - - return IOREQ_IO_UNHANDLED; -} - -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) -{ - struct domain *d =3D current->domain; - struct hvm_ioreq_server *s; - unsigned int id, failed =3D 0; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - if ( !s->enabled ) - continue; - - if ( hvm_send_ioreq(s, p, buffered) =3D=3D IOREQ_IO_UNHANDLED ) - failed++; - } - - return failed; -} - static int hvm_access_cf8( int dir, unsigned int port, unsigned int bytes, uint32_t *val) { @@ -1552,13 +168,6 @@ void arch_hvm_ioreq_destroy(struct domain *d) =20 } =20 -void hvm_ioreq_init(struct domain *d) -{ - spin_lock_init(&d->arch.hvm.ioreq_server.lock); - - arch_hvm_ioreq_init(d); -} - /* * Local variables: * mode: C diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index e267513..fd7cadb 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -27,10 +27,10 @@ * can have side effects. */ =20 +#include #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index 1e51689..50e4e6e 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -19,10 +19,11 @@ * */ =20 +#include + #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 638f6bf..776d2b6 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -100,6 +100,7 @@ */ =20 #include +#include #include #include #include @@ -141,7 +142,6 @@ #include #include #include -#include =20 #include #include diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/commo= n.c index 7c7204f..3893579 100644 --- a/xen/arch/x86/mm/shadow/common.c +++ b/xen/arch/x86/mm/shadow/common.c @@ -20,6 +20,7 @@ * along with this program; If not, see . 
*/ =20 +#include #include #include #include @@ -34,7 +35,6 @@ #include #include #include -#include #include #include "private.h" =20 diff --git a/xen/common/Kconfig b/xen/common/Kconfig index 15e3b79..fb6fb51 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -139,6 +139,9 @@ config HYPFS_CONFIG Disable this option in case you want to spare some memory or you want to hide the .config contents from dom0. =20 +config IOREQ_SERVER + bool + config KEXEC bool "kexec support" default y diff --git a/xen/common/Makefile b/xen/common/Makefile index 06881d0..8df2b6e 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -16,6 +16,7 @@ obj-$(CONFIG_GRANT_TABLE) +=3D grant_table.o obj-y +=3D guestcopy.o obj-bin-y +=3D gunzip.init.o obj-$(CONFIG_HYPFS) +=3D hypfs.o +obj-$(CONFIG_IOREQ_SERVER) +=3D ioreq.o obj-y +=3D irq.o obj-y +=3D kernel.o obj-y +=3D keyhandler.o diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c new file mode 100644 index 0000000..5017617 --- /dev/null +++ b/xen/common/ioreq.c @@ -0,0 +1,1410 @@ +/* + * common/ioreq.c: hardware virtual machine I/O emulation + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +static void set_ioreq_server(struct domain *d, unsigned int id, + struct hvm_ioreq_server *s) +{ + ASSERT(id < MAX_NR_IOREQ_SERVERS); + ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); + + d->arch.hvm.ioreq_server.server[id] =3D s; +} + +/* + * Iterate over all possible ioreq servers. + * + * NOTE: The iteration is backwards such that more recently created + * ioreq servers are favoured in hvm_select_ioreq_server(). + * This is a semantic that previously existed when ioreq servers + * were held in a linked list. + */ +#define FOR_EACH_IOREQ_SERVER(d, id, s) \ + for ( (id) =3D MAX_NR_IOREQ_SERVERS; (id) !=3D 0; ) \ + if ( !(s =3D GET_IOREQ_SERVER(d, --(id))) ) \ + continue; \ + else + +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) +{ + shared_iopage_t *p =3D s->ioreq.va; + + ASSERT((v =3D=3D current) || !vcpu_runnable(v)); + ASSERT(p !=3D NULL); + + return &p->vcpu_ioreq[v->vcpu_id]; +} + +static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, + struct hvm_ioreq_server **s= rvp) +{ + struct domain *d =3D v->domain; + struct hvm_ioreq_server *s; + unsigned int id; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct hvm_ioreq_vcpu *sv; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu =3D=3D v && sv->pending ) + { + if ( srvp ) + *srvp =3D s; + return sv; + } + } + } + + return NULL; +} + +bool hvm_io_pending(struct vcpu *v) +{ + return get_pending_vcpu(v, NULL); +} + +static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) +{ + unsigned int prev_state =3D STATE_IOREQ_NONE; + unsigned int state =3D p->state; + uint64_t data =3D ~0; + + smp_rmb(); + + /* + * The only reason we should see this condition be false is when an + * emulator dying races with I/O being requested. 
+     */
+    while ( likely(state != STATE_IOREQ_NONE) )
+    {
+        if ( unlikely(state < prev_state) )
+        {
+            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
+                     prev_state, state);
+            sv->pending = false;
+            domain_crash(sv->vcpu->domain);
+            return false; /* bail */
+        }
+
+        switch ( prev_state = state )
+        {
+        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
+            p->state = STATE_IOREQ_NONE;
+            data = p->data;
+            break;
+
+        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
+        case STATE_IOREQ_INPROCESS:
+            wait_on_xen_event_channel(sv->ioreq_evtchn,
+                                      ({ state = p->state;
+                                         smp_rmb();
+                                         state != prev_state; }));
+            continue;
+
+        default:
+            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
+            sv->pending = false;
+            domain_crash(sv->vcpu->domain);
+            return false; /* bail */
+        }
+
+        break;
+    }
+
+    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    if ( hvm_ioreq_needs_completion(p) )
+        p->data = data;
+
+    sv->pending = false;
+
+    return true;
+}
+
+bool handle_hvm_io_completion(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    struct hvm_ioreq_server *s;
+    struct hvm_ioreq_vcpu *sv;
+    enum hvm_io_completion io_completion;
+
+    if ( has_vpci(d) && vpci_process_pending(v) )
+    {
+        raise_softirq(SCHEDULE_SOFTIRQ);
+        return false;
+    }
+
+    sv = get_pending_vcpu(v, &s);
+    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
+        return false;
+
+    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+ STATE_IORESP_READY : STATE_IOREQ_NONE; + + msix_write_completion(v); + vcpu_end_shutdown_deferral(v); + + io_completion =3D vio->io_completion; + vio->io_completion =3D HVMIO_no_completion; + + switch ( io_completion ) + { + case HVMIO_no_completion: + break; + + case HVMIO_mmio_completion: + return handle_mmio(); + + case HVMIO_pio_completion: + return handle_pio(vio->io_req.addr, vio->io_req.size, + vio->io_req.dir); + + default: + return arch_handle_hvm_io_completion(io_completion); + } + + return true; +} + +static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s) +{ + struct domain *d =3D s->target; + unsigned int i; + + BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN !=3D HVM_PARAM_IOREQ_PFN + 1); + + for ( i =3D HVM_PARAM_IOREQ_PFN; i <=3D HVM_PARAM_BUFIOREQ_PFN; i++ ) + { + if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) ) + return _gfn(d->arch.hvm.params[i]); + } + + return INVALID_GFN; +} + +static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s) +{ + struct domain *d =3D s->target; + unsigned int i; + + for ( i =3D 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ ) + { + if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) ) + return _gfn(d->arch.hvm.ioreq_gfn.base + i); + } + + /* + * If we are out of 'normal' GFNs then we may still have a 'legacy' + * GFN available. 
+     */
+    return hvm_alloc_legacy_ioreq_gfn(s);
+}
+
+static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
+                                      gfn_t gfn)
+{
+    struct domain *d = s->target;
+    unsigned int i;
+
+    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
+    {
+        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+            break;
+    }
+    if ( i > HVM_PARAM_BUFIOREQ_PFN )
+        return false;
+
+    set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask);
+    return true;
+}
+
+static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
+{
+    struct domain *d = s->target;
+    unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
+
+    ASSERT(!gfn_eq(gfn, INVALID_GFN));
+
+    if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
+    {
+        ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8);
+        set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
+    }
+}
+
+static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+        return;
+
+    destroy_ring_for_helper(&iorp->va, iorp->page);
+    iorp->page = NULL;
+
+    hvm_free_ioreq_gfn(s, iorp->gfn);
+    iorp->gfn = INVALID_GFN;
+}
+
+static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *d = s->target;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    int rc;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a page has already been allocated (which will happen on
+         * demand if hvm_get_ioreq_server_frame() is called), then
+         * mapping a guest frame is not permitted.
+ */ + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return -EPERM; + + return 0; + } + + if ( d->is_dying ) + return -EINVAL; + + iorp->gfn =3D hvm_alloc_ioreq_gfn(s); + + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return -ENOMEM; + + rc =3D prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page, + &iorp->va); + + if ( rc ) + hvm_unmap_ioreq_gfn(s, buf); + + return rc; +} + +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + struct page_info *page; + + if ( iorp->page ) + { + /* + * If a guest frame has already been mapped (which may happen + * on demand if hvm_get_ioreq_server_info() is called), then + * allocating a page is not permitted. + */ + if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) + return -EPERM; + + return 0; + } + + page =3D alloc_domheap_page(s->target, MEMF_no_refcount); + + if ( !page ) + return -ENOMEM; + + if ( !get_page_and_type(page, s->target, PGT_writable_page) ) + { + /* + * The domain can't possibly know about this page yet, so failure + * here is a clear indication of something fishy going on. + */ + domain_crash(s->emulator); + return -ENODATA; + } + + iorp->va =3D __map_domain_page_global(page); + if ( !iorp->va ) + goto fail; + + iorp->page =3D page; + clear_page(iorp->va); + return 0; + + fail: + put_page_alloc_ref(page); + put_page_and_type(page); + + return -ENOMEM; +} + +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp =3D buf ? 
&s->bufioreq : &s->ioreq; + struct page_info *page =3D iorp->page; + + if ( !page ) + return; + + iorp->page =3D NULL; + + unmap_domain_page_global(iorp->va); + iorp->va =3D NULL; + + put_page_alloc_ref(page); + put_page_and_type(page); +} + +bool is_ioreq_server_page(struct domain *d, const struct page_info *page) +{ + const struct hvm_ioreq_server *s; + unsigned int id; + bool found =3D false; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( (s->ioreq.page =3D=3D page) || (s->bufioreq.page =3D=3D page)= ) + { + found =3D true; + break; + } + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return found; +} + +static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) + +{ + struct domain *d =3D s->target; + struct hvm_ioreq_page *iorp =3D buf ? &s->bufioreq : &s->ioreq; + + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return; + + if ( guest_physmap_remove_page(d, iorp->gfn, + page_to_mfn(iorp->page), 0) ) + domain_crash(d); + clear_page(iorp->va); +} + +static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +{ + struct domain *d =3D s->target; + struct hvm_ioreq_page *iorp =3D buf ? 
&s->bufioreq : &s->ioreq; + int rc; + + if ( gfn_eq(iorp->gfn, INVALID_GFN) ) + return 0; + + clear_page(iorp->va); + + rc =3D guest_physmap_add_page(d, iorp->gfn, + page_to_mfn(iorp->page), 0); + if ( rc =3D=3D 0 ) + paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn))); + + return rc; +} + +static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, + struct hvm_ioreq_vcpu *sv) +{ + ASSERT(spin_is_locked(&s->lock)); + + if ( s->ioreq.va !=3D NULL ) + { + ioreq_t *p =3D get_ioreq(s, sv->vcpu); + + p->vp_eport =3D sv->ioreq_evtchn; + } +} + +#define HANDLE_BUFIOREQ(s) \ + ((s)->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) + +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + int rc; + + sv =3D xzalloc(struct hvm_ioreq_vcpu); + + rc =3D -ENOMEM; + if ( !sv ) + goto fail1; + + spin_lock(&s->lock); + + rc =3D alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail2; + + sv->ioreq_evtchn =3D rc; + + if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + { + rc =3D alloc_unbound_xen_event_channel(v->domain, 0, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail3; + + s->bufioreq_evtchn =3D rc; + } + + sv->vcpu =3D v; + + list_add(&sv->list_entry, &s->ioreq_vcpu_list); + + if ( s->enabled ) + hvm_update_ioreq_evtchn(s, sv); + + spin_unlock(&s->lock); + return 0; + + fail3: + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + fail2: + spin_unlock(&s->lock); + xfree(sv); + + fail1: + return rc; +} + +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu !=3D v ) + continue; + + list_del(&sv->list_entry); + + if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, 
sv->ioreq_evtchn); + + xfree(sv); + break; + } + + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv, *next; + + spin_lock(&s->lock); + + list_for_each_entry_safe ( sv, + next, + &s->ioreq_vcpu_list, + list_entry ) + { + struct vcpu *v =3D sv->vcpu; + + list_del(&sv->list_entry); + + if ( v->vcpu_id =3D=3D 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + } + + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s) +{ + int rc; + + rc =3D hvm_map_ioreq_gfn(s, false); + + if ( !rc && HANDLE_BUFIOREQ(s) ) + rc =3D hvm_map_ioreq_gfn(s, true); + + if ( rc ) + hvm_unmap_ioreq_gfn(s, false); + + return rc; +} + +static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) +{ + hvm_unmap_ioreq_gfn(s, true); + hvm_unmap_ioreq_gfn(s, false); +} + +static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +{ + int rc; + + rc =3D hvm_alloc_ioreq_mfn(s, false); + + if ( !rc && (s->bufioreq_handling !=3D HVM_IOREQSRV_BUFIOREQ_OFF) ) + rc =3D hvm_alloc_ioreq_mfn(s, true); + + if ( rc ) + hvm_free_ioreq_mfn(s, false); + + return rc; +} + +static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +{ + hvm_free_ioreq_mfn(s, true); + hvm_free_ioreq_mfn(s, false); +} + +static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +{ + unsigned int i; + + for ( i =3D 0; i < NR_IO_RANGE_TYPES; i++ ) + rangeset_destroy(s->range[i]); +} + +static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, + ioservid_t id) +{ + unsigned int i; + int rc; + + for ( i =3D 0; i < NR_IO_RANGE_TYPES; i++ ) + { + char *name; + + rc =3D asprintf(&name, "ioreq_server %d %s", id, + (i =3D=3D XEN_DMOP_IO_RANGE_PORT) ? "port" : + (i =3D=3D XEN_DMOP_IO_RANGE_MEMORY) ? "memory" : + (i =3D=3D XEN_DMOP_IO_RANGE_PCI) ? 
"pci" : + ""); + if ( rc ) + goto fail; + + s->range[i] =3D rangeset_new(s->target, name, + RANGESETF_prettyprint_hex); + + xfree(name); + + rc =3D -ENOMEM; + if ( !s->range[i] ) + goto fail; + + rangeset_limit(s->range[i], MAX_NR_IO_RANGES); + } + + return 0; + + fail: + hvm_ioreq_server_free_rangesets(s); + + return rc; +} + +static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + if ( s->enabled ) + goto done; + + hvm_remove_ioreq_gfn(s, false); + hvm_remove_ioreq_gfn(s, true); + + s->enabled =3D true; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + hvm_update_ioreq_evtchn(s, sv); + + done: + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) +{ + spin_lock(&s->lock); + + if ( !s->enabled ) + goto done; + + hvm_add_ioreq_gfn(s, true); + hvm_add_ioreq_gfn(s, false); + + s->enabled =3D false; + + done: + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, + struct domain *d, int bufioreq_handling, + ioservid_t id) +{ + struct domain *currd =3D current->domain; + struct vcpu *v; + int rc; + + s->target =3D d; + + get_knownalive_domain(currd); + s->emulator =3D currd; + + spin_lock_init(&s->lock); + INIT_LIST_HEAD(&s->ioreq_vcpu_list); + spin_lock_init(&s->bufioreq_lock); + + s->ioreq.gfn =3D INVALID_GFN; + s->bufioreq.gfn =3D INVALID_GFN; + + rc =3D hvm_ioreq_server_alloc_rangesets(s, id); + if ( rc ) + return rc; + + s->bufioreq_handling =3D bufioreq_handling; + + for_each_vcpu ( d, v ) + { + rc =3D hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail_add; + } + + return 0; + + fail_add: + hvm_ioreq_server_remove_all_vcpus(s); + hvm_ioreq_server_unmap_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); + return rc; +} + +static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) +{ + ASSERT(!s->enabled); + hvm_ioreq_server_remove_all_vcpus(s); + + /* + * NOTE: 
It is safe to call both hvm_ioreq_server_unmap_pages() and
+     *       hvm_ioreq_server_free_pages() in that order.
+     *       This is because the former will do nothing if the pages
+     *       are not mapped, leaving the page to be freed by the latter.
+     *       However if the pages are mapped then the former will set
+     *       the page_info pointer to NULL, meaning the latter will do
+     *       nothing.
+     */
+    hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
+
+    hvm_ioreq_server_free_rangesets(s);
+
+    put_domain(s->emulator);
+}
+
+int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
+                            ioservid_t *id)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int i;
+    int rc;
+
+    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
+        return -EINVAL;
+
+    s = xzalloc(struct hvm_ioreq_server);
+    if ( !s )
+        return -ENOMEM;
+
+    domain_pause(d);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
+    {
+        if ( !GET_IOREQ_SERVER(d, i) )
+            break;
+    }
+
+    rc = -ENOSPC;
+    if ( i >= MAX_NR_IOREQ_SERVERS )
+        goto fail;
+
+    /*
+     * It is safe to call set_ioreq_server() prior to
+     * hvm_ioreq_server_init() since the target domain is paused.
+ */ + set_ioreq_server(d, i, s); + + rc =3D hvm_ioreq_server_init(s, d, bufioreq_handling, i); + if ( rc ) + { + set_ioreq_server(d, i, NULL); + goto fail; + } + + if ( id ) + *id =3D i; + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + return 0; + + fail: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + xfree(s); + return rc; +} + +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + domain_pause(d); + + arch_hvm_destroy_ioreq_server(s); + + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is paused. + */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + domain_unpause(d); + + xfree(s); + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + if ( ioreq_gfn || bufioreq_gfn ) + { + rc =3D hvm_ioreq_server_map_pages(s); + if ( rc ) + goto out; + } + + if ( ioreq_gfn ) + *ioreq_gfn =3D gfn_x(s->ioreq.gfn); + + if ( HANDLE_BUFIOREQ(s) ) + { + if ( bufioreq_gfn ) + *bufioreq_gfn =3D gfn_x(s->bufioreq.gfn); + + if ( bufioreq_port ) + *bufioreq_port =3D s->bufioreq_evtchn; + } + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int 
hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn) +{ + struct hvm_ioreq_server *s; + int rc; + + ASSERT(is_hvm_domain(d)); + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + rc =3D hvm_ioreq_server_alloc_pages(s); + if ( rc ) + goto out; + + switch ( idx ) + { + case XENMEM_resource_ioreq_server_frame_bufioreq: + rc =3D -ENOENT; + if ( !HANDLE_BUFIOREQ(s) ) + goto out; + + *mfn =3D page_to_mfn(s->bufioreq.page); + rc =3D 0; + break; + + case XENMEM_resource_ioreq_server_frame_ioreq(0): + *mfn =3D page_to_mfn(s->ioreq.page); + rc =3D 0; + break; + + default: + rc =3D -EINVAL; + break; + } + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r =3D s->range[type]; + break; + + default: + r =3D NULL; + break; + } + + rc =3D -EINVAL; + if ( !r ) + goto out; + + rc =3D -EEXIST; + if ( rangeset_overlaps_range(r, start, end) ) + goto out; + + rc =3D rangeset_add_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + 
+ spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r =3D s->range[type]; + break; + + default: + r =3D NULL; + break; + } + + rc =3D -EINVAL; + if ( !r ) + goto out; + + rc =3D -ENOENT; + if ( !rangeset_contains_range(r, start, end) ) + goto out; + + rc =3D rangeset_remove_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s =3D get_ioreq_server(d, id); + + rc =3D -ENOENT; + if ( !s ) + goto out; + + rc =3D -EPERM; + if ( s->emulator !=3D current->domain ) + goto out; + + domain_pause(d); + + if ( enabled ) + hvm_ioreq_server_enable(s); + else + hvm_ioreq_server_disable(s); + + domain_unpause(d); + + rc =3D 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + return rc; +} + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + rc =3D hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail; + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return 0; + + fail: + while ( ++id !=3D MAX_NR_IOREQ_SERVERS ) + { + s =3D GET_IOREQ_SERVER(d, id); + + if ( !s ) + continue; + + hvm_ioreq_server_remove_vcpu(s, v); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + 
spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + hvm_ioreq_server_remove_vcpu(s, v); + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +void hvm_destroy_all_ioreq_servers(struct domain *d) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + arch_hvm_ioreq_destroy(d); + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + /* No need to domain_pause() as the domain is being torn down */ + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is being destroyed. + */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + xfree(s); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p) +{ + struct hvm_ioreq_server *s; + uint8_t type; + uint64_t addr; + unsigned int id; + + if ( hvm_get_ioreq_server_range_type(d, p, &type, &addr) ) + return NULL; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct rangeset *r; + + if ( !s->enabled ) + continue; + + r =3D s->range[type]; + + switch ( type ) + { + unsigned long start, end; + + case XEN_DMOP_IO_RANGE_PORT: + start =3D addr; + end =3D start + p->size - 1; + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_MEMORY: + start =3D hvm_mmio_first_byte(p); + end =3D hvm_mmio_last_byte(p); + + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_PCI: + if ( rangeset_contains_singleton(r, addr >> 32) ) + { + p->type =3D IOREQ_TYPE_PCI_CONFIG; + p->addr =3D addr; + return s; + } + + break; + } + } + + return NULL; +} + +static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) +{ + struct domain *d =3D current->domain; + struct hvm_ioreq_page *iorp; + buffered_iopage_t *pg; + buf_ioreq_t bp =3D { .data =3D p->data, + .addr =3D p->addr, + 
.type = p->type,
+                       .dir = p->dir };
+    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
+    int qw = 0;
+
+    /* Ensure buffered_iopage fits in a page */
+    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
+
+    iorp = &s->bufioreq;
+    pg = iorp->va;
+
+    if ( !pg )
+        return IOREQ_IO_UNHANDLED;
+
+    /*
+     * Return 0 for the cases we can't deal with:
+     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
+     *  - we cannot buffer accesses to guest memory buffers, as the guest
+     *    may expect the memory buffer to be synchronously accessed
+     *  - the count field is usually used with data_is_ptr and since we don't
+     *    support data_is_ptr we do not waste space for the count field either
+     */
+    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
+        return 0;
+
+    switch ( p->size )
+    {
+    case 1:
+        bp.size = 0;
+        break;
+    case 2:
+        bp.size = 1;
+        break;
+    case 4:
+        bp.size = 2;
+        break;
+    case 8:
+        bp.size = 3;
+        qw = 1;
+        break;
+    default:
+        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
+        return IOREQ_IO_UNHANDLED;
+    }
+
+    spin_lock(&s->bufioreq_lock);
+
+    if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=
+         (IOREQ_BUFFER_SLOT_NUM - qw) )
+    {
+        /* The queue is full: send the iopacket through the normal path. */
+        spin_unlock(&s->bufioreq_lock);
+        return IOREQ_IO_UNHANDLED;
+    }
+
+    pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
+
+    if ( qw )
+    {
+        bp.data = p->data >> 32;
+        pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
+    }
+
+    /* Make the ioreq_t visible /before/ write_pointer. */
+    smp_wmb();
+    pg->ptrs.write_pointer += qw ? 2 : 1;
+
+    /* Canonicalize read/write pointers to prevent their overflow.
*/
+    while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) &&
+            qw++ < IOREQ_BUFFER_SLOT_NUM &&
+            pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
+    {
+        union bufioreq_pointers old = pg->ptrs, new;
+        unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;
+
+        new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
+        new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
+        cmpxchg(&pg->ptrs.full, old.full, new.full);
+    }
+
+    notify_via_xen_event_channel(d, s->bufioreq_evtchn);
+    spin_unlock(&s->bufioreq_lock);
+
+    return IOREQ_IO_HANDLED;
+}
+
+int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+                   bool buffered)
+{
+    struct vcpu *curr = current;
+    struct domain *d = curr->domain;
+    struct hvm_ioreq_vcpu *sv;
+
+    ASSERT(s);
+
+    if ( buffered )
+        return hvm_send_buffered_ioreq(s, proto_p);
+
+    if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
+        return IOREQ_IO_RETRY;
+
+    list_for_each_entry ( sv,
+                          &s->ioreq_vcpu_list,
+                          list_entry )
+    {
+        if ( sv->vcpu == curr )
+        {
+            evtchn_port_t port = sv->ioreq_evtchn;
+            ioreq_t *p = get_ioreq(s, curr);
+
+            if ( unlikely(p->state != STATE_IOREQ_NONE) )
+            {
+                gprintk(XENLOG_ERR, "device model set bad IO state %d\n",
+                        p->state);
+                break;
+            }
+
+            if ( unlikely(p->vp_eport != port) )
+            {
+                gprintk(XENLOG_ERR, "device model set bad event channel %d\n",
+                        p->vp_eport);
+                break;
+            }
+
+            proto_p->state = STATE_IOREQ_NONE;
+            proto_p->vp_eport = port;
+            *p = *proto_p;
+
+            prepare_wait_on_xen_event_channel(port);
+
+            /*
+             * Following happens /after/ blocking and setting up ioreq
+             * contents. prepare_wait_on_xen_event_channel() is an implicit
+             * barrier.
+ */ + p->state =3D STATE_IOREQ_READY; + notify_via_xen_event_channel(d, port); + + sv->pending =3D true; + return IOREQ_IO_RETRY; + } + } + + return IOREQ_IO_UNHANDLED; +} + +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +{ + struct domain *d =3D current->domain; + struct hvm_ioreq_server *s; + unsigned int id, failed =3D 0; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( !s->enabled ) + continue; + + if ( hvm_send_ioreq(s, p, buffered) =3D=3D IOREQ_IO_UNHANDLED ) + failed++; + } + + return failed; +} + +void hvm_ioreq_init(struct domain *d) +{ + spin_lock_init(&d->arch.hvm.ioreq_server.lock); + + arch_hvm_ioreq_init(d); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/iore= q.h index 151b92b..dec1e71 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -19,41 +19,12 @@ #ifndef __ASM_X86_HVM_IOREQ_H__ #define __ASM_X86_HVM_IOREQ_H__ =20 -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); -bool is_ioreq_server_page(struct domain *d, const struct page_info *page); +#include +#include +#include =20 -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, uint32_t type, 
uint32_t flags); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); -void hvm_destroy_all_ioreq_servers(struct domain *d); - -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered); -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); - -void hvm_ioreq_init(struct domain *d); =20 int arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s); =20 diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h new file mode 100644 index 0000000..f846034 --- /dev/null +++ b/xen/include/xen/ioreq.h @@ -0,0 +1,82 @@ +/* + * ioreq.h: Hardware virtual machine assist interface definitions. + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . 
+ */ + +#ifndef __IOREQ_H__ +#define __IOREQ_H__ + +#include + +#include + +#define GET_IOREQ_SERVER(d, id) \ + (d)->arch.hvm.ioreq_server.server[id] + +static inline struct hvm_ioreq_server *get_ioreq_server(const struct domai= n *d, + unsigned int id) +{ + if ( id >=3D MAX_NR_IOREQ_SERVERS ) + return NULL; + + return GET_IOREQ_SERVER(d, id); +} + +bool hvm_io_pending(struct vcpu *v); +bool handle_hvm_io_completion(struct vcpu *v); +bool is_ioreq_server_page(struct domain *d, const struct page_info *page); + +int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id); +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port); +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn); +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled); + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); +void hvm_destroy_all_ioreq_servers(struct domain *d); + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p); +int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, + bool buffered); +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); + +void hvm_ioreq_init(struct domain *d); + +#endif /* __IOREQ_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.7.4 From nobody Tue Apr 23 11:30:41 2024 Delivered-To: importer@patchew.org Received-SPF: pass 
(zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender)
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Jun Nakajima, Kevin Tian, Jan Beulich, Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V1 03/16] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
Date: Thu, 10 Sep 2020 23:21:57 +0300
Message-Id: <1599769330-17656-4-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
List-Id: Xen developer discussion
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Oleksandr Tyshchenko

The IOREQ is a common feature now and this helper will be used
on Arm as is. Move it to include/xen/ioreq.h

Although PIO handling on Arm is not introduced with the current series
(it will be implemented when we add support for vPCI), technically
the PIOs exist on Arm (however they are accessed the same way as MMIO)
and it would be better not to diverge now.
Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
---
 xen/arch/x86/hvm/vmx/realmode.c | 1 +
 xen/include/asm-x86/hvm/vcpu.h  | 7 -------
 xen/include/xen/ioreq.h         | 7 +++++++
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index bdbd9cb..292a7c3 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -10,6 +10,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 5ccd075..6c1feda 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,13 +91,6 @@ struct hvm_vcpu_io {
     const struct g2m_ioport *g2m_ioport;
 };
 
-static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
-{
-    return ioreq->state == STATE_IOREQ_READY &&
-           !ioreq->data_is_ptr &&
-           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
-}
-
 struct nestedvcpu {
     bool_t nv_guestmode; /* vcpu in guestmode?
*/ void *nv_vvmcx; /* l1 guest virtual VMCB/VMCS */ diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index f846034..2240a73 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -35,6 +35,13 @@ static inline struct hvm_ioreq_server *get_ioreq_server(= const struct domain *d, return GET_IOREQ_SERVER(d, id); } =20 +static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq) +{ + return ioreq->state =3D=3D STATE_IOREQ_READY && + !ioreq->data_is_ptr && + (ioreq->type !=3D IOREQ_TYPE_PIO || ioreq->dir !=3D IOREQ_WRITE= ); +} + bool hvm_io_pending(struct vcpu *v); bool handle_hvm_io_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page); --=20 2.7.4 From nobody Tue Apr 23 11:30:41 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1599769378; cv=none; d=zohomail.com; s=zohoarc; b=kI2O1IkwPeasTY6enpqjWCwqV/Rw6kn4z9MF+Zt2GPdZQ14M8mC/riJWd37BY2QHfly3W5Jt9TFWR7K2X7E5MLRe64Rl/OTGaG14yOzBVO2W6/2V7tPHzvvK3/IlJyzs3/Ww4drWrM4wXwVusXivY6bl0o18rwcOvaALtgo1/HU= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1599769378; h=Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:Message-ID:References:Sender:Subject:To; bh=Y/dYcio/hiAAprYUa9lh090XJMa0/CH+ZJW7bHpIZ34=; b=MmI03NOZ0wr+bUIhNI/6jQ50PKD5l3qdMRy7t2tvB2RIbgJ8DA55Z/mY1ACRKZajVQNEKrNstam/f7IZyzdMS1iPI8UoTodOXSLhxlYExavu/SMS963w6pdXy2C12R+v9Hbb0Nci4mQc1CukE4m0oxPAACtHnPqMfXE1DBwg2K4= 
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , Wei Liu , Roger Pau Monné , Julien Grall , Stefano Stabellini , Julien Grall
Subject: [PATCH V1 04/16] xen/ioreq: Provide alias for the handle_mmio()
Date: Thu, 10 Sep 2020 23:21:58 +0300
Message-Id: <1599769330-17656-5-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

The IOREQ is a common feature now and Arm will have its own implementation.
But the name of the function is rather generic and can be confusing on Arm (which already has a try_handle_mmio()). In order not to rename the function globally (it is used for a varying set of purposes on x86) and still get a non-confusing variant on Arm, provide an alias, ioreq_handle_complete_mmio(), to be used in common and Arm code.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch
---
---
 xen/common/ioreq.c              | 2 +-
 xen/include/asm-x86/hvm/ioreq.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 5017617..ce12751 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -189,7 +189,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
         break;

     case HVMIO_mmio_completion:
-        return handle_mmio();
+        return ioreq_handle_complete_mmio();

     case HVMIO_pio_completion:
         return handle_pio(vio->io_req.addr, vio->io_req.size,
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index dec1e71..43afdee 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -42,6 +42,8 @@ void arch_hvm_ioreq_destroy(struct domain *d);
 #define IOREQ_IO_UNHANDLED X86EMUL_UNHANDLEABLE
 #define IOREQ_IO_RETRY     X86EMUL_RETRY

+#define ioreq_handle_complete_mmio handle_mmio
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */

 /*
--
2.7.4
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , Wei Liu , Roger Pau Monné , Julien Grall , Stefano Stabellini , Julien Grall
Subject: [PATCH V1 05/16] xen/ioreq: Make
x86's hvm_mmio_first(last)_byte() common
Date: Thu, 10 Sep 2020 23:21:59 +0300
Message-Id: <1599769330-17656-6-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

The IOREQ is a common feature now and these helpers will be used on Arm as is. Move them to include/xen/ioreq.h.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch
---
---
 xen/arch/x86/hvm/intercept.c |  1 +
 xen/include/asm-x86/hvm/io.h | 16 ----------------
 xen/include/xen/ioreq.h      | 16 ++++++++++++++++
 3 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index cd4c4c1..891e497 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -17,6 +17,7 @@
  * this program; If not, see .
  */

+#include
 #include
 #include
 #include
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 558426b..fb64294 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -40,22 +40,6 @@ struct hvm_mmio_ops {
     hvm_mmio_write_t write;
 };

-static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p)
-{
-    return unlikely(p->df) ?
-           p->addr - (p->count - 1ul) * p->size :
-           p->addr;
-}
-
-static inline paddr_t hvm_mmio_last_byte(const ioreq_t *p)
-{
-    unsigned long size = p->size;
-
-    return unlikely(p->df) ?
-           p->addr + size - 1:
-           p->addr + (p->count * size) - 1;
-}
-
 typedef int (*portio_action_t)(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val);

diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 2240a73..9521170 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -35,6 +35,22 @@ static inline struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
     return GET_IOREQ_SERVER(d, id);
 }

+static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p)
+{
+    return unlikely(p->df) ?
+           p->addr - (p->count - 1ul) * p->size :
+           p->addr;
+}
+
+static inline paddr_t hvm_mmio_last_byte(const ioreq_t *p)
+{
+    unsigned long size = p->size;
+
+    return unlikely(p->df) ?
+           p->addr + size - 1:
+           p->addr + (p->count * size) - 1;
+}
+
 static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
 {
     return ioreq->state == STATE_IOREQ_READY &&
--
2.7.4
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko , Jan Beulich , Andrew Cooper , Wei Liu , Roger Pau Monné , Paul Durrant , Julien Grall , Stefano Stabellini , Julien Grall
Subject: [PATCH V1 06/16] xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
Date: Thu, 10 Sep 2020 23:22:00 +0300
Message-Id: <1599769330-17656-7-git-send-email-olekstysh@gmail.com>
From: Oleksandr Tyshchenko

The IOREQ is a common feature now and these structs will be used on Arm as is. Move them to xen/ioreq.h.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch
---
---
 xen/include/asm-x86/hvm/domain.h | 34 ----------------------------------
 xen/include/xen/ioreq.h          | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 9d247ba..765f35c 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -30,40 +30,6 @@

 #include

-struct hvm_ioreq_page {
-    gfn_t gfn;
-    struct page_info *page;
-    void *va;
-};
-
-struct hvm_ioreq_vcpu {
-    struct list_head list_entry;
-    struct vcpu *vcpu;
-    evtchn_port_t ioreq_evtchn;
-    bool pending;
-};
-
-#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES  256
-
-struct hvm_ioreq_server {
-    struct domain *target, *emulator;
-
-    /* Lock to serialize toolstack modifications */
-    spinlock_t lock;
-
-    struct hvm_ioreq_page ioreq;
-    struct list_head ioreq_vcpu_list;
-    struct hvm_ioreq_page bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t bufioreq_lock;
-    evtchn_port_t bufioreq_evtchn;
-    struct rangeset *range[NR_IO_RANGE_TYPES];
-    bool enabled;
-    uint8_t bufioreq_handling;
-};
-
 #ifdef CONFIG_MEM_SHARING
 struct mem_sharing_domain
 {
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 9521170..102f7e8 100644
---
a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -23,6 +23,40 @@

 #include

+struct hvm_ioreq_page {
+    gfn_t gfn;
+    struct page_info *page;
+    void *va;
+};
+
+struct hvm_ioreq_vcpu {
+    struct list_head list_entry;
+    struct vcpu *vcpu;
+    evtchn_port_t ioreq_evtchn;
+    bool pending;
+};
+
+#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
+#define MAX_NR_IO_RANGES  256
+
+struct hvm_ioreq_server {
+    struct domain *target, *emulator;
+
+    /* Lock to serialize toolstack modifications */
+    spinlock_t lock;
+
+    struct hvm_ioreq_page ioreq;
+    struct list_head ioreq_vcpu_list;
+    struct hvm_ioreq_page bufioreq;
+
+    /* Lock to serialize access to buffered ioreq ring */
+    spinlock_t bufioreq_lock;
+    evtchn_port_t bufioreq_evtchn;
+    struct rangeset *range[NR_IO_RANGE_TYPES];
+    bool enabled;
+    uint8_t bufioreq_handling;
+};
+
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]

--
2.7.4
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko , Jan Beulich , Andrew Cooper , Wei Liu , Roger Pau Monné , George Dunlap , Ian Jackson , Julien Grall , Stefano Stabellini , Daniel De Graaf , Julien Grall
Subject: [PATCH V1 07/16] xen/dm: Make x86's DM feature common
Date: Thu, 10 Sep 2020 23:22:01 +0300
Message-Id: <1599769330-17656-8-git-send-email-olekstysh@gmail.com>
From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch splits device model support into common and arch-specific parts. Also update the XSM code a bit to let the DM op be used on Arm. This support will be used on Arm to run a device emulator outside of the Xen hypervisor.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - update XSM, related changes were pulled from:
     [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features
---
---
 xen/arch/x86/hvm/dm.c       | 287 +++----------------------------------
 xen/common/Makefile         |   1 +
 xen/common/dm.c             | 287 ++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypercall.h |  12 ++
 xen/include/xsm/dummy.h     |   4 +-
 xen/include/xsm/xsm.h       |   6 +-
 xen/xsm/dummy.c             |   2 +-
 xen/xsm/flask/hooks.c       |   5 +-
 8 files changed, 327 insertions(+), 277 deletions(-)
 create mode 100644 xen/common/dm.c

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 5ce484a..6ae535e 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -29,13 +29,6 @@

 #include

-struct dmop_args {
-    domid_t domid;
-    unsigned int nr_bufs;
-    /* Reserve enough buf elements for all current hypercalls.
 */
-    struct xen_dm_op_buf buf[2];
-};
-
 static bool _raw_copy_from_guest_buf_offset(void *dst,
                                             const struct dmop_args *args,
                                             unsigned int buf_idx,
@@ -338,148 +331,20 @@ static int inject_event(struct domain *d,
     return 0;
 }

-static int dm_op(const struct dmop_args *op_args)
+int arch_dm_op(struct xen_dm_op *op, struct domain *d,
+               const struct dmop_args *op_args, bool *const_op)
 {
-    struct domain *d;
-    struct xen_dm_op op;
-    bool const_op = true;
     long rc;
-    size_t offset;
-
-    static const uint8_t op_size[] = {
-        [XEN_DMOP_create_ioreq_server]              = sizeof(struct xen_dm_op_create_ioreq_server),
-        [XEN_DMOP_get_ioreq_server_info]            = sizeof(struct xen_dm_op_get_ioreq_server_info),
-        [XEN_DMOP_map_io_range_to_ioreq_server]     = sizeof(struct xen_dm_op_ioreq_server_range),
-        [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
-        [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
-        [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
-        [XEN_DMOP_track_dirty_vram]                 = sizeof(struct xen_dm_op_track_dirty_vram),
-        [XEN_DMOP_set_pci_intx_level]               = sizeof(struct xen_dm_op_set_pci_intx_level),
-        [XEN_DMOP_set_isa_irq_level]                = sizeof(struct xen_dm_op_set_isa_irq_level),
-        [XEN_DMOP_set_pci_link_route]               = sizeof(struct xen_dm_op_set_pci_link_route),
-        [XEN_DMOP_modified_memory]                  = sizeof(struct xen_dm_op_modified_memory),
-        [XEN_DMOP_set_mem_type]                     = sizeof(struct xen_dm_op_set_mem_type),
-        [XEN_DMOP_inject_event]                     = sizeof(struct xen_dm_op_inject_event),
-        [XEN_DMOP_inject_msi]                       = sizeof(struct xen_dm_op_inject_msi),
-        [XEN_DMOP_map_mem_type_to_ioreq_server]     = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server),
-        [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
-        [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
-        [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
-    };
-
-    rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
-    if ( rc )
-        return rc;
-
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_dm_op(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    offset = offsetof(struct xen_dm_op, u);
-
-    rc = -EFAULT;
-    if ( op_args->buf[0].size < offset )
-        goto out;
-
-    if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) )
-        goto out;
-
-    if ( op.op >= ARRAY_SIZE(op_size) )
-    {
-        rc = -EOPNOTSUPP;
-        goto out;
-    }
-
-    op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size));
-
-    if ( op_args->buf[0].size < offset + op_size[op.op] )
-        goto out;
-
-    if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset,
-                                op_size[op.op]) )
-        goto out;
-
-    rc = -EINVAL;
-    if ( op.pad )
-        goto out;
-
-    switch ( op.op )
-    {
-    case XEN_DMOP_create_ioreq_server:
-    {
-        struct xen_dm_op_create_ioreq_server *data =
-            &op.u.create_ioreq_server;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->pad[0] || data->pad[1] || data->pad[2] )
-            break;
-
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
-        break;
-    }

-    case XEN_DMOP_get_ioreq_server_info:
+    switch ( op->op )
     {
-        struct xen_dm_op_get_ioreq_server_info *data =
-            &op.u.get_ioreq_server_info;
-        const uint16_t valid_flags = XEN_DMOP_no_gfns;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->flags & ~valid_flags )
-            break;
-
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->bufioreq_gfn,
-                                       &data->bufioreq_port);
-        break;
-    }
-
-    case XEN_DMOP_map_io_range_to_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.map_io_range_to_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
-        break;
-    }
-
-    case XEN_DMOP_unmap_io_range_from_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.unmap_io_range_from_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
-        break;
-    }
-
     case XEN_DMOP_map_mem_type_to_ioreq_server:
     {
         struct xen_dm_op_map_mem_type_to_ioreq_server *data =
-            &op.u.map_mem_type_to_ioreq_server;
+            &op->u.map_mem_type_to_ioreq_server;
         unsigned long first_gfn = data->opaque;

-        const_op = false;
+        *const_op = false;

         rc = -EOPNOTSUPP;
         if ( !hap_enabled(d) )
@@ -523,36 +388,10 @@ static int dm_op(const struct dmop_args *op_args)
         break;
     }

-    case XEN_DMOP_set_ioreq_server_state:
-    {
-        const struct xen_dm_op_set_ioreq_server_state *data =
-            &op.u.set_ioreq_server_state;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
-        break;
-    }
-
-    case XEN_DMOP_destroy_ioreq_server:
-    {
-        const struct xen_dm_op_destroy_ioreq_server *data =
-            &op.u.destroy_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_destroy_ioreq_server(d, data->id);
-        break;
-    }
-
     case XEN_DMOP_track_dirty_vram:
     {
         const struct xen_dm_op_track_dirty_vram *data =
-            &op.u.track_dirty_vram;
+            &op->u.track_dirty_vram;

         rc = -EINVAL;
         if ( data->pad )
@@ -568,7 +407,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_pci_intx_level:
     {
         const struct xen_dm_op_set_pci_intx_level *data =
-            &op.u.set_pci_intx_level;
+            &op->u.set_pci_intx_level;

         rc = set_pci_intx_level(d, data->domain, data->bus,
                                 data->device, data->intx,
@@ -579,7 +418,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_isa_irq_level:
     {
         const struct xen_dm_op_set_isa_irq_level *data =
-            &op.u.set_isa_irq_level;
+            &op->u.set_isa_irq_level;

         rc = set_isa_irq_level(d, data->isa_irq, data->level);
         break;
@@ -588,7 +427,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_pci_link_route:
     {
         const struct xen_dm_op_set_pci_link_route *data =
-            &op.u.set_pci_link_route;
+            &op->u.set_pci_link_route;

         rc = hvm_set_pci_link_route(d, data->link, data->isa_irq);
         break;
@@ -597,19 +436,19 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_modified_memory:
     {
         struct xen_dm_op_modified_memory *data =
-            &op.u.modified_memory;
+            &op->u.modified_memory;

         rc = modified_memory(d, op_args, data);
-        const_op = !rc;
+        *const_op = !rc;
         break;
     }

     case XEN_DMOP_set_mem_type:
     {
         struct xen_dm_op_set_mem_type *data =
-            &op.u.set_mem_type;
+            &op->u.set_mem_type;

-        const_op = false;
+        *const_op = false;

         rc = -EINVAL;
         if ( data->pad )
@@ -622,7 +461,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_inject_event:
     {
         const struct xen_dm_op_inject_event *data =
-            &op.u.inject_event;
+            &op->u.inject_event;

         rc = -EINVAL;
         if ( data->pad0 || data->pad1 )
@@ -635,7 +474,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_inject_msi:
     {
         const struct xen_dm_op_inject_msi *data =
-            &op.u.inject_msi;
+            &op->u.inject_msi;

         rc = -EINVAL;
         if ( data->pad )
@@ -648,7 +487,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_remote_shutdown:
     {
         const struct xen_dm_op_remote_shutdown *data =
-            &op.u.remote_shutdown;
+            &op->u.remote_shutdown;

         domain_shutdown(d, data->reason);
         rc = 0;
@@ -657,7 +496,7 @@ static int dm_op(const struct dmop_args *op_args)

     case XEN_DMOP_relocate_memory:
     {
-        struct xen_dm_op_relocate_memory *data = &op.u.relocate_memory;
+        struct xen_dm_op_relocate_memory *data = &op->u.relocate_memory;
         struct xen_add_to_physmap xatp = {
             .domid = op_args->domid,
             .size = data->size,
@@ -680,7 +519,7 @@ static int dm_op(const struct dmop_args *op_args)
             data->size -= rc;
             data->src_gfn += rc;
             data->dst_gfn += rc;
-            const_op = false;
+            *const_op = false;
             rc = -ERESTART;
         }
         break;
@@ -689,7 +528,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_pin_memory_cacheattr:
     {
         const struct xen_dm_op_pin_memory_cacheattr *data =
-            &op.u.pin_memory_cacheattr;
+            &op->u.pin_memory_cacheattr;

         if ( data->pad )
         {
@@ -707,94 +546,6 @@ static int dm_op(const struct dmop_args *op_args)
         break;
     }

-    if ( (!rc || rc == -ERESTART) &&
-         !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
-                                           (void *)&op.u, op_size[op.op]) )
-        rc = -EFAULT;
-
- out:
-    rcu_unlock_domain(d);
-
-    return rc;
-}
-
-CHECK_dm_op_create_ioreq_server;
-CHECK_dm_op_get_ioreq_server_info;
-CHECK_dm_op_ioreq_server_range;
-CHECK_dm_op_set_ioreq_server_state;
-CHECK_dm_op_destroy_ioreq_server;
-CHECK_dm_op_track_dirty_vram;
-CHECK_dm_op_set_pci_intx_level;
-CHECK_dm_op_set_isa_irq_level;
-CHECK_dm_op_set_pci_link_route;
-CHECK_dm_op_modified_memory;
-CHECK_dm_op_set_mem_type;
-CHECK_dm_op_inject_event;
-CHECK_dm_op_inject_msi;
-CHECK_dm_op_remote_shutdown;
-CHECK_dm_op_relocate_memory;
-CHECK_dm_op_pin_memory_cacheattr;
-
-int compat_dm_op(domid_t domid,
-                 unsigned int nr_bufs,
-                 XEN_GUEST_HANDLE_PARAM(void) bufs)
-{
-    struct dmop_args args;
-    unsigned int i;
-    int rc;
-
-    if ( nr_bufs > ARRAY_SIZE(args.buf) )
-        return -E2BIG;
-
-    args.domid = domid;
-    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
-
-    for ( i = 0; i < args.nr_bufs; i++ )
-    {
-        struct compat_dm_op_buf cmp;
-
-        if ( copy_from_guest_offset(&cmp, bufs, i, 1) )
-            return -EFAULT;
-
-#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
-        guest_from_compat_handle((_d_)->h, (_s_)->h)
-
XLAT_dm_op_buf(&args.buf[i], &cmp); - -#undef XLAT_dm_op_buf_HNDL_h - } - - rc =3D dm_op(&args); - - if ( rc =3D=3D -ERESTART ) - rc =3D hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", - domid, nr_bufs, bufs); - - return rc; -} - -long do_dm_op(domid_t domid, - unsigned int nr_bufs, - XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs) -{ - struct dmop_args args; - int rc; - - if ( nr_bufs > ARRAY_SIZE(args.buf) ) - return -E2BIG; - - args.domid =3D domid; - args.nr_bufs =3D array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); - - if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) ) - return -EFAULT; - - rc =3D dm_op(&args); - - if ( rc =3D=3D -ERESTART ) - rc =3D hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", - domid, nr_bufs, bufs); - return rc; } =20 diff --git a/xen/common/Makefile b/xen/common/Makefile index 8df2b6e..5cf7208 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -6,6 +6,7 @@ obj-$(CONFIG_CORE_PARKING) +=3D core_parking.o obj-y +=3D cpu.o obj-$(CONFIG_DEBUG_TRACE) +=3D debugtrace.o obj-$(CONFIG_HAS_DEVICE_TREE) +=3D device_tree.o +obj-$(CONFIG_IOREQ_SERVER) +=3D dm.o obj-y +=3D domctl.o obj-y +=3D domain.o obj-y +=3D event_2l.o diff --git a/xen/common/dm.c b/xen/common/dm.c new file mode 100644 index 0000000..060731d --- /dev/null +++ b/xen/common/dm.c @@ -0,0 +1,287 @@ +/* + * Copyright (c) 2016 Citrix Systems Inc. + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. 
+ * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#include +#include +#include +#include + +static int dm_op(const struct dmop_args *op_args) +{ + struct domain *d; + struct xen_dm_op op; + long rc; + bool const_op = true; + const size_t offset = offsetof(struct xen_dm_op, u); + + static const uint8_t op_size[] = { + [XEN_DMOP_create_ioreq_server] = sizeof(struct xen_dm_op_create_ioreq_server), + [XEN_DMOP_get_ioreq_server_info] = sizeof(struct xen_dm_op_get_ioreq_server_info), + [XEN_DMOP_map_io_range_to_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range), + [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range), + [XEN_DMOP_set_ioreq_server_state] = sizeof(struct xen_dm_op_set_ioreq_server_state), + [XEN_DMOP_destroy_ioreq_server] = sizeof(struct xen_dm_op_destroy_ioreq_server), + [XEN_DMOP_track_dirty_vram] = sizeof(struct xen_dm_op_track_dirty_vram), + [XEN_DMOP_set_pci_intx_level] = sizeof(struct xen_dm_op_set_pci_intx_level), + [XEN_DMOP_set_isa_irq_level] = sizeof(struct xen_dm_op_set_isa_irq_level), + [XEN_DMOP_set_pci_link_route] = sizeof(struct xen_dm_op_set_pci_link_route), + [XEN_DMOP_modified_memory] = sizeof(struct xen_dm_op_modified_memory), + [XEN_DMOP_set_mem_type] = sizeof(struct xen_dm_op_set_mem_type), + [XEN_DMOP_inject_event] = sizeof(struct xen_dm_op_inject_event), + [XEN_DMOP_inject_msi] = sizeof(struct xen_dm_op_inject_msi), + [XEN_DMOP_map_mem_type_to_ioreq_server] = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server), + [XEN_DMOP_remote_shutdown] = sizeof(struct xen_dm_op_remote_shutdown), + [XEN_DMOP_relocate_memory] = sizeof(struct xen_dm_op_relocate_memory), + [XEN_DMOP_pin_memory_cacheattr] = sizeof(struct xen_dm_op_pin_memory_cacheattr), + }; + + rc = rcu_lock_remote_domain_by_id(op_args->domid, &d); + if ( rc ) + return rc; + + if ( 
!is_hvm_domain(d) ) + goto out; + + rc =3D xsm_dm_op(XSM_DM_PRIV, d); + if ( rc ) + goto out; + + rc =3D -EFAULT; + if ( op_args->buf[0].size < offset ) + goto out; + + if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset)= ) + goto out; + + if ( op.op >=3D ARRAY_SIZE(op_size) ) + { + rc =3D -EOPNOTSUPP; + goto out; + } + + op.op =3D array_index_nospec(op.op, ARRAY_SIZE(op_size)); + + if ( op_args->buf[0].size < offset + op_size[op.op] ) + goto out; + + if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset, + op_size[op.op]) ) + goto out; + + rc =3D -EINVAL; + if ( op.pad ) + goto out; + + switch ( op.op ) + { + case XEN_DMOP_create_ioreq_server: + { + struct xen_dm_op_create_ioreq_server *data =3D + &op.u.create_ioreq_server; + + const_op =3D false; + + rc =3D -EINVAL; + if ( data->pad[0] || data->pad[1] || data->pad[2] ) + break; + + rc =3D hvm_create_ioreq_server(d, data->handle_bufioreq, + &data->id); + break; + } + + case XEN_DMOP_get_ioreq_server_info: + { + struct xen_dm_op_get_ioreq_server_info *data =3D + &op.u.get_ioreq_server_info; + const uint16_t valid_flags =3D XEN_DMOP_no_gfns; + + const_op =3D false; + + rc =3D -EINVAL; + if ( data->flags & ~valid_flags ) + break; + + rc =3D hvm_get_ioreq_server_info(d, data->id, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->iore= q_gfn, + (data->flags & XEN_DMOP_no_gfns) ? 
+ NULL : (unsigned long *)&data->bufi= oreq_gfn, + &data->bufioreq_port); + break; + } + + case XEN_DMOP_map_io_range_to_ioreq_server: + { + const struct xen_dm_op_ioreq_server_range *data =3D + &op.u.map_io_range_to_ioreq_server; + + rc =3D -EINVAL; + if ( data->pad ) + break; + + rc =3D hvm_map_io_range_to_ioreq_server(d, data->id, data->type, + data->start, data->end); + break; + } + + case XEN_DMOP_unmap_io_range_from_ioreq_server: + { + const struct xen_dm_op_ioreq_server_range *data =3D + &op.u.unmap_io_range_from_ioreq_server; + + rc =3D -EINVAL; + if ( data->pad ) + break; + + rc =3D hvm_unmap_io_range_from_ioreq_server(d, data->id, data->typ= e, + data->start, data->end); + break; + } + + case XEN_DMOP_set_ioreq_server_state: + { + const struct xen_dm_op_set_ioreq_server_state *data =3D + &op.u.set_ioreq_server_state; + + rc =3D -EINVAL; + if ( data->pad ) + break; + + rc =3D hvm_set_ioreq_server_state(d, data->id, !!data->enabled); + break; + } + + case XEN_DMOP_destroy_ioreq_server: + { + const struct xen_dm_op_destroy_ioreq_server *data =3D + &op.u.destroy_ioreq_server; + + rc =3D -EINVAL; + if ( data->pad ) + break; + + rc =3D hvm_destroy_ioreq_server(d, data->id); + break; + } + + default: + rc =3D arch_dm_op(&op, d, op_args, &const_op); + } + + if ( (!rc || rc =3D=3D -ERESTART) && + !const_op && copy_to_guest_offset(op_args->buf[0].h, offset, + (void *)&op.u, op_size[op.op]) ) + rc =3D -EFAULT; + + out: + rcu_unlock_domain(d); + + return rc; +} + +#ifdef CONFIG_COMPAT +CHECK_dm_op_create_ioreq_server; +CHECK_dm_op_get_ioreq_server_info; +CHECK_dm_op_ioreq_server_range; +CHECK_dm_op_set_ioreq_server_state; +CHECK_dm_op_destroy_ioreq_server; +CHECK_dm_op_track_dirty_vram; +CHECK_dm_op_set_pci_intx_level; +CHECK_dm_op_set_isa_irq_level; +CHECK_dm_op_set_pci_link_route; +CHECK_dm_op_modified_memory; +CHECK_dm_op_set_mem_type; +CHECK_dm_op_inject_event; +CHECK_dm_op_inject_msi; +CHECK_dm_op_remote_shutdown; +CHECK_dm_op_relocate_memory; 
+CHECK_dm_op_pin_memory_cacheattr; + +int compat_dm_op(domid_t domid, + unsigned int nr_bufs, + XEN_GUEST_HANDLE_PARAM(void) bufs) +{ + struct dmop_args args; + unsigned int i; + int rc; + + if ( nr_bufs > ARRAY_SIZE(args.buf) ) + return -E2BIG; + + args.domid =3D domid; + args.nr_bufs =3D array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); + + for ( i =3D 0; i < args.nr_bufs; i++ ) + { + struct compat_dm_op_buf cmp; + + if ( copy_from_guest_offset(&cmp, bufs, i, 1) ) + return -EFAULT; + +#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \ + guest_from_compat_handle((_d_)->h, (_s_)->h) + + XLAT_dm_op_buf(&args.buf[i], &cmp); + +#undef XLAT_dm_op_buf_HNDL_h + } + + rc =3D dm_op(&args); + + if ( rc =3D=3D -ERESTART ) + rc =3D hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", + domid, nr_bufs, bufs); + + return rc; +} +#endif + +long do_dm_op(domid_t domid, + unsigned int nr_bufs, + XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs) +{ + struct dmop_args args; + int rc; + + if ( nr_bufs > ARRAY_SIZE(args.buf) ) + return -E2BIG; + + args.domid =3D domid; + args.nr_bufs =3D array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); + + if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) ) + return -EFAULT; + + rc =3D dm_op(&args); + + if ( rc =3D=3D -ERESTART ) + rc =3D hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", + domid, nr_bufs, bufs); + + return rc; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h index 655acc7..19f509f 100644 --- a/xen/include/xen/hypercall.h +++ b/xen/include/xen/hypercall.h @@ -150,6 +150,18 @@ do_dm_op( unsigned int nr_bufs, XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs); =20 +struct dmop_args { + domid_t domid; + unsigned int nr_bufs; + /* Reserve enough buf elements for all current hypercalls. 
*/ + struct xen_dm_op_buf buf[2]; +}; + +int arch_dm_op(struct xen_dm_op *op, + struct domain *d, + const struct dmop_args *op_args, + bool *const_op); + #ifdef CONFIG_HYPFS extern long do_hypfs_op( diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h index 5f6f842..c0813c0 100644 --- a/xen/include/xsm/dummy.h +++ b/xen/include/xsm/dummy.h @@ -723,14 +723,14 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG str= uct domain *d, unsigned int } } =20 +#endif /* CONFIG_X86 */ + static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d) { XSM_ASSERT_ACTION(XSM_DM_PRIV); return xsm_default_action(action, current->domain, d); } =20 -#endif /* CONFIG_X86 */ - #ifdef CONFIG_ARGO static XSM_INLINE int xsm_argo_enable(const struct domain *d) { diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h index a80bcf3..2a9b39d 100644 --- a/xen/include/xsm/xsm.h +++ b/xen/include/xsm/xsm.h @@ -177,8 +177,8 @@ struct xsm_operations { int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, ui= nt8_t allow); int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8= _t allow); int (*pmu_op) (struct domain *d, unsigned int op); - int (*dm_op) (struct domain *d); #endif + int (*dm_op) (struct domain *d); int (*xen_version) (uint32_t cmd); int (*domain_resource_map) (struct domain *d); #ifdef CONFIG_ARGO @@ -688,13 +688,13 @@ static inline int xsm_pmu_op (xsm_default_t def, stru= ct domain *d, unsigned int return xsm_ops->pmu_op(d, op); } =20 +#endif /* CONFIG_X86 */ + static inline int xsm_dm_op(xsm_default_t def, struct domain *d) { return xsm_ops->dm_op(d); } =20 -#endif /* CONFIG_X86 */ - static inline int xsm_xen_version (xsm_default_t def, uint32_t op) { return xsm_ops->xen_version(op); diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c index d4cce68..e3afd06 100644 --- a/xen/xsm/dummy.c +++ b/xen/xsm/dummy.c @@ -148,8 +148,8 @@ void __init xsm_fixup_ops (struct xsm_operations *ops) set_to_dummy_if_null(ops, ioport_permission); 
set_to_dummy_if_null(ops, ioport_mapping); set_to_dummy_if_null(ops, pmu_op); - set_to_dummy_if_null(ops, dm_op); #endif + set_to_dummy_if_null(ops, dm_op); set_to_dummy_if_null(ops, xen_version); set_to_dummy_if_null(ops, domain_resource_map); #ifdef CONFIG_ARGO diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c index a314bf8..645192a 100644 --- a/xen/xsm/flask/hooks.c +++ b/xen/xsm/flask/hooks.c @@ -1662,14 +1662,13 @@ static int flask_pmu_op (struct domain *d, unsigned int op) return -EPERM; } } +#endif /* CONFIG_X86 */ static int flask_dm_op(struct domain *d) { return current_has_perm(d, SECCLASS_HVM, HVM__DM); } -#endif /* CONFIG_X86 */ - static int flask_xen_version (uint32_t op) { u32 dsid = domain_sid(current->domain); @@ -1872,8 +1871,8 @@ static struct xsm_operations flask_ops = { .ioport_permission = flask_ioport_permission, .ioport_mapping = flask_ioport_mapping, .pmu_op = flask_pmu_op, - .dm_op = flask_dm_op, #endif + .dm_op = flask_dm_op, .xen_version = flask_xen_version, .domain_resource_map = flask_domain_resource_map, #ifdef CONFIG_ARGO -- 2.7.4 From nobody Tue Apr 23 11:30:41 2024
From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Jan Beulich , Andrew Cooper , Wei Liu , Roger Pau Monné , George Dunlap , Ian Jackson , Julien Grall , Stefano Stabellini , Volodymyr Babchuk , Julien Grall Subject: [PATCH V1 08/16] xen/mm: Make x86's XENMEM_resource_ioreq_server handling common Date: Thu, 10 Sep 2020 23:22:02 +0300 Message-Id: <1599769330-17656-9-git-send-email-olekstysh@gmail.com> In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> From: Oleksandr Tyshchenko As the x86 implementation of XENMEM_resource_ioreq_server can be re-used on Arm later on, this patch makes it common and removes arch_acquire_resource as unneeded. This support is going to be used on Arm to be able to run a device emulator outside of the Xen hypervisor. Signed-off-by: Julien Grall Signed-off-by: Oleksandr Tyshchenko --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - no changes --- --- xen/arch/x86/mm.c | 44 -------------------------------------------- xen/common/memory.c | 45 +++++++++++++++++++++++++++++++++++++++++++-- xen/include/asm-arm/mm.h | 8 -------- xen/include/asm-x86/mm.h | 4 ---- 4 files changed, 43 insertions(+), 58 deletions(-) diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 776d2b6..a5f6f12 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -4594,50 +4594,6 @@ int xenmem_add_to_physmap_one( return rc; } -int arch_acquire_resource(struct domain *d, unsigned int type, - unsigned int id, unsigned long frame, - unsigned int nr_frames, xen_pfn_t mfn_list[]) -{ - int rc; - - switch ( type ) - { -#ifdef CONFIG_HVM - case XENMEM_resource_ioreq_server: - { - ioservid_t ioservid = id; - unsigned int i; - - rc = -EINVAL; - if ( !is_hvm_domain(d) ) - break; - - if ( id != (unsigned int)ioservid ) - break; - - rc = 0; - for ( i = 0; i < nr_frames; i++ ) - { - mfn_t mfn; - - rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); - if ( rc ) - break; - - mfn_list[i] = mfn_x(mfn); - } - break; - } -#endif - - default: - rc = -EOPNOTSUPP; - break; - } - - return rc; -}
long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg) { int rc; diff --git a/xen/common/memory.c b/xen/common/memory.c index 714077c..e551fa6 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -30,6 +30,10 @@ #include #include =20 +#ifdef CONFIG_IOREQ_SERVER +#include +#endif + #ifdef CONFIG_X86 #include #endif @@ -1045,6 +1049,38 @@ static int acquire_grant_table(struct domain *d, uns= igned int id, return 0; } =20 +#ifdef CONFIG_IOREQ_SERVER +static int acquire_ioreq_server(struct domain *d, + unsigned int id, + unsigned long frame, + unsigned int nr_frames, + xen_pfn_t mfn_list[]) +{ + ioservid_t ioservid =3D id; + unsigned int i; + int rc; + + if ( !is_hvm_domain(d) ) + return -EINVAL; + + if ( id !=3D (unsigned int)ioservid ) + return -EINVAL; + + for ( i =3D 0; i < nr_frames; i++ ) + { + mfn_t mfn; + + rc =3D hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); + if ( rc ) + return rc; + + mfn_list[i] =3D mfn_x(mfn); + } + + return 0; +} +#endif + static int acquire_resource( XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg) { @@ -1095,9 +1131,14 @@ static int acquire_resource( mfn_list); break; =20 +#ifdef CONFIG_IOREQ_SERVER + case XENMEM_resource_ioreq_server: + rc =3D acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames, + mfn_list); + break; +#endif default: - rc =3D arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame, - xmar.nr_frames, mfn_list); + rc =3D -EOPNOTSUPP; break; } =20 diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h index f8ba49b..0b7de31 100644 --- a/xen/include/asm-arm/mm.h +++ b/xen/include/asm-arm/mm.h @@ -358,14 +358,6 @@ static inline void put_page_and_type(struct page_info = *page) =20 void clear_and_clean_page(struct page_info *page); =20 -static inline -int arch_acquire_resource(struct domain *d, unsigned int type, unsigned in= t id, - unsigned long frame, unsigned int nr_frames, - xen_pfn_t mfn_list[]) -{ - return -EOPNOTSUPP; -} - unsigned int 
arch_get_dma_bitsize(void); #endif /* __ARCH_ARM_MM__ */ diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h index 7e74996..2e111ad 100644 --- a/xen/include/asm-x86/mm.h +++ b/xen/include/asm-x86/mm.h @@ -649,8 +649,4 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn) return mfn <= (virt_to_mfn(eva - 1) + 1); } -int arch_acquire_resource(struct domain *d, unsigned int type, - unsigned int id, unsigned long frame, - unsigned int nr_frames, xen_pfn_t mfn_list[]); - #endif /* __ASM_X86_MM_H__ */ -- 2.7.4 From nobody Tue Apr 23 11:30:41 2024
From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Stefano Stabellini , Julien Grall , Volodymyr Babchuk , Julien Grall Subject: [PATCH V1 09/16] arm/ioreq: Introduce arch specific bits for IOREQ/DM features Date: Thu, 10 Sep 2020 23:22:03 +0300 Message-Id: <1599769330-17656-10-git-send-email-olekstysh@gmail.com> In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> From: Oleksandr Tyshchenko This patch adds basic IOREQ/DM support on Arm. The subsequent patches will improve functionality, add the remaining bits, and address several TODOs. Please note, the "PIO handling" TODO is expected to be left unaddressed for the current series.
It is not a big issue for now, while Xen doesn't have support for vPCI on Arm. On Arm64 PIOs are only used for PCI IO Bars, and we would probably want to expose them to the emulator as PIO accesses to make a DM completely arch-agnostic. So "PIO handling" should be implemented when we add support for vPCI. Please note, at the moment the build on Arm32 is broken (see cmpxchg usage in hvm_send_buffered_ioreq()) due to the lack of cmpxchg_64 support on Arm32. There is a patch on review to address this issue: https://patchwork.kernel.org/patch/11715559/ Signed-off-by: Julien Grall Signed-off-by: Oleksandr Tyshchenko --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - was split into: - arm/ioreq: Introduce arch specific bits for IOREQ/DM features - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm - update patch description - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions: - arch_hvm_destroy_ioreq_server() - arch_handle_hvm_io_completion() - update arch files to include xen/ioreq.h - remove HVMOP plumbing - rewrite the logic to properly handle the case when hvm_send_ioreq() returns IO_RETRY - add logic to properly handle the handle_hvm_io_completion() return value - rename handle_mmio() to ioreq_handle_complete_mmio() - move paging_mark_pfn_dirty() to asm-arm/paging.h - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h - use gdprintk in try_fwd_ioserv(), remove unneeded prints - update list of #include-s - move has_vpci() to asm-arm/domain.h - add a comment (TODO) to the not-yet-implemented handle_pio() - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs from the arch files, they were already moved to the common code - remove set_foreign_p2m_entry()
changes, they will be properly implemented in the follow-up patch - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig - remove x86's realmode and other unneeded stubs from xen/ioreq.h - clarify ioreq_t p.df usage in try_fwd_ioserv() - set ioreq_t p.count to 1 in try_fwd_ioserv() --- --- xen/arch/arm/Kconfig | 1 + xen/arch/arm/Makefile | 2 + xen/arch/arm/dm.c | 33 ++++++++++ xen/arch/arm/domain.c | 9 +++ xen/arch/arm/io.c | 11 +++- xen/arch/arm/ioreq.c | 142 ++++++++++++++++++++++++++++++++++++++++ xen/arch/arm/traps.c | 32 +++++++-- xen/include/asm-arm/domain.h | 46 +++++++++++++ xen/include/asm-arm/hvm/ioreq.h | 108 ++++++++++++++++++++++++++++++ xen/include/asm-arm/mmio.h | 1 + xen/include/asm-arm/paging.h | 4 ++ 11 files changed, 384 insertions(+), 5 deletions(-) create mode 100644 xen/arch/arm/dm.c create mode 100644 xen/arch/arm/ioreq.c create mode 100644 xen/include/asm-arm/hvm/ioreq.h diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index 2777388..8264cd6 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -21,6 +21,7 @@ config ARM select HAS_PASSTHROUGH select HAS_PDX select IOMMU_FORCE_PT_SHARE + select IOREQ_SERVER =20 config ARCH_DEFCONFIG string diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index 7e82b21..617fa3e 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -13,6 +13,7 @@ obj-y +=3D cpuerrata.o obj-y +=3D cpufeature.o obj-y +=3D decode.o obj-y +=3D device.o +obj-$(CONFIG_IOREQ_SERVER) +=3D dm.o obj-y +=3D domain.o obj-y +=3D domain_build.init.o obj-y +=3D domctl.o @@ -27,6 +28,7 @@ obj-y +=3D guest_atomics.o obj-y +=3D guest_walk.o obj-y +=3D hvm.o obj-y +=3D io.o +obj-$(CONFIG_IOREQ_SERVER) +=3D ioreq.o obj-y +=3D irq.o obj-y +=3D kernel.init.o obj-$(CONFIG_LIVEPATCH) +=3D livepatch.o diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c new file mode 100644 index 0000000..eb20344 --- /dev/null +++ b/xen/arch/arm/dm.c @@ -0,0 +1,33 @@ +/* + * Copyright (c) 2019 Arm ltd.
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . + */ + +#include + +int arch_dm_op(struct xen_dm_op *op, struct domain *d, + const struct dmop_args *op_args, bool *const_op) +{ + return -EOPNOTSUPP; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 3116932..043db3f 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -681,6 +682,10 @@ int arch_domain_create(struct domain *d, =20 ASSERT(config !=3D NULL); =20 +#ifdef CONFIG_IOREQ_SERVER + hvm_ioreq_init(d); +#endif + /* p2m_init relies on some value initialized by the IOMMU subsystem */ if ( (rc =3D iommu_domain_init(d, config->iommu_opts)) !=3D 0 ) goto fail; @@ -999,6 +1004,10 @@ int domain_relinquish_resources(struct domain *d) if (ret ) return ret; =20 +#ifdef CONFIG_IOREQ_SERVER + hvm_destroy_all_ioreq_servers(d); +#endif + PROGRESS(xen): ret =3D relinquish_memory(d, &d->xenpage_list); if ( ret ) diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c index ae7ef96..adc9de7 100644 --- a/xen/arch/arm/io.c +++ b/xen/arch/arm/io.c @@ -16,6 +16,7 @@ * GNU General Public License for more details. 
*/ =20 +#include #include #include #include @@ -123,7 +124,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *re= gs, =20 handler =3D find_mmio_handler(v->domain, info.gpa); if ( !handler ) - return IO_UNHANDLED; + { + int rc; + + rc =3D try_fwd_ioserv(regs, v, &info); + if ( rc =3D=3D IO_HANDLED ) + return handle_ioserv(regs, v); + + return rc; + } =20 /* All the instructions used on emulated MMIO region should be valid */ if ( !dabt.valid ) diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c new file mode 100644 index 0000000..e493c5b --- /dev/null +++ b/xen/arch/arm/ioreq.c @@ -0,0 +1,142 @@ +/* + * arm/ioreq.c: hardware virtual machine I/O emulation + * + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . + */ + +#include +#include + +#include + +#include + +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v) +{ + const union hsr hsr =3D { .bits =3D regs->hsr }; + const struct hsr_dabt dabt =3D hsr.dabt; + /* Code is similar to handle_read */ + uint8_t size =3D (1 << dabt.size) * 8; + register_t r =3D v->arch.hvm.hvm_io.io_req.data; + + /* We are done with the IO */ + v->arch.hvm.hvm_io.io_req.state =3D STATE_IOREQ_NONE; + + /* XXX: Do we need to take care of write here ? */ + if ( dabt.write ) + return IO_HANDLED; + + /* + * Sign extend if required. + * Note that we expect the read handler to have zeroed the bits + * outside the requested access size. 
+ */ + if ( dabt.sign && (r & (1UL << (size - 1))) ) + { + /* + * We are relying on register_t using the same as + * an unsigned long in order to keep the 32-bit assembly + * code smaller. + */ + BUILD_BUG_ON(sizeof(register_t) !=3D sizeof(unsigned long)); + r |=3D (~0UL) << size; + } + + set_user_reg(regs, dabt.reg, r); + + return IO_HANDLED; +} + +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info) +{ + struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; + ioreq_t p =3D { + .type =3D IOREQ_TYPE_COPY, + .addr =3D info->gpa, + .size =3D 1 << info->dabt.size, + .count =3D 1, + .dir =3D !info->dabt.write, + /* + * On x86, df is used by 'rep' instruction to tell the direction + * to iterate (forward or backward). + * On Arm, all the accesses to MMIO region will do a single + * memory access. So for now, we can safely always set to 0. + */ + .df =3D 0, + .data =3D get_user_reg(regs, info->dabt.reg), + .state =3D STATE_IOREQ_READY, + }; + struct hvm_ioreq_server *s =3D NULL; + enum io_state rc; + + switch ( vio->io_req.state ) + { + case STATE_IOREQ_NONE: + break; + + case STATE_IORESP_READY: + return IO_HANDLED; + + default: + gdprintk(XENLOG_ERR, "wrong state %u\n", vio->io_req.state); + return IO_ABORT; + } + + s =3D hvm_select_ioreq_server(v->domain, &p); + if ( !s ) + return IO_UNHANDLED; + + if ( !info->dabt.valid ) + return IO_ABORT; + + vio->io_req =3D p; + + rc =3D hvm_send_ioreq(s, &p, 0); + if ( rc !=3D IO_RETRY || v->domain->is_shutting_down ) + vio->io_req.state =3D STATE_IOREQ_NONE; + else if ( !hvm_ioreq_needs_completion(&vio->io_req) ) + rc =3D IO_HANDLED; + else + vio->io_completion =3D HVMIO_mmio_completion; + + return rc; +} + +bool ioreq_handle_complete_mmio(void) +{ + struct vcpu *v =3D current; + struct cpu_user_regs *regs =3D guest_cpu_user_regs(); + const union hsr hsr =3D { .bits =3D regs->hsr }; + paddr_t addr =3D v->arch.hvm.hvm_io.io_req.addr; + + if ( try_handle_mmio(regs, hsr, addr) =3D=3D IO_HANDLED ) 
+ { + advance_pc(regs, hsr); + return true; + } + + return false; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 8f40d0e..121942c 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -1384,6 +1385,9 @@ static arm_hypercall_t arm_hypercall_table[] =3D { #ifdef CONFIG_HYPFS HYPERCALL(hypfs_op, 5), #endif +#ifdef CONFIG_IOREQ_SERVER + HYPERCALL(dm_op, 3), +#endif }; =20 #ifndef NDEBUG @@ -1955,9 +1959,14 @@ static void do_trap_stage2_abort_guest(struct cpu_us= er_regs *regs, case IO_HANDLED: advance_pc(regs, hsr); return; + case IO_RETRY: + /* finish later */ + return; case IO_UNHANDLED: /* IO unhandled, try another way to handle it. */ break; + default: + ASSERT_UNREACHABLE(); } } =20 @@ -2249,12 +2258,23 @@ static void check_for_pcpu_work(void) * Process pending work for the vCPU. Any call should be fast or * implement preemption. 
*/ -static void check_for_vcpu_work(void) +static bool check_for_vcpu_work(void) { struct vcpu *v =3D current; =20 +#ifdef CONFIG_IOREQ_SERVER + bool handled; + + local_irq_enable(); + handled =3D handle_hvm_io_completion(v); + local_irq_disable(); + + if ( !handled ) + return true; +#endif + if ( likely(!v->arch.need_flush_to_ram) ) - return; + return false; =20 /* * Give a chance for the pCPU to process work before handling the vCPU @@ -2265,6 +2285,8 @@ static void check_for_vcpu_work(void) local_irq_enable(); p2m_flush_vm(v); local_irq_disable(); + + return false; } =20 /* @@ -2277,8 +2299,10 @@ void leave_hypervisor_to_guest(void) { local_irq_disable(); =20 - check_for_vcpu_work(); - check_for_pcpu_work(); + do + { + check_for_pcpu_work(); + } while ( check_for_vcpu_work() ); =20 vgic_sync_to_lrs(); =20 diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 6819a3b..d1c48d7 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -11,10 +11,27 @@ #include #include #include +#include +#include + +#define MAX_NR_IOREQ_SERVERS 8 =20 struct hvm_domain { uint64_t params[HVM_NR_PARAMS]; + + /* Guest page range used for non-default ioreq servers */ + struct { + unsigned long base; + unsigned long mask; + unsigned long legacy_mask; /* indexed by HVM param number */ + } ioreq_gfn; + + /* Lock protects all other values in the sub-struct and the default */ + struct { + spinlock_t lock; + struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS]; + } ioreq_server; }; =20 #ifdef CONFIG_ARM_64 @@ -91,6 +108,28 @@ struct arch_domain #endif } __cacheline_aligned; =20 +enum hvm_io_completion { + HVMIO_no_completion, + HVMIO_mmio_completion, + HVMIO_pio_completion +}; + +struct hvm_vcpu_io { + /* I/O request in flight to device model. */ + enum hvm_io_completion io_completion; + ioreq_t io_req; + + /* + * HVM emulation: + * Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn. 
+ * The latter is known to be an MMIO frame (not RAM). + * This translation is only valid for accesses as per @mmio_access. + */ + struct npfec mmio_access; + unsigned long mmio_gla; + unsigned long mmio_gpfn; +}; + struct arch_vcpu { struct { @@ -204,6 +243,11 @@ struct arch_vcpu */ bool need_flush_to_ram; =20 + struct hvm_vcpu + { + struct hvm_vcpu_io hvm_io; + } hvm; + } __cacheline_aligned; =20 void vcpu_show_execution_state(struct vcpu *); @@ -262,6 +306,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {} =20 #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_f= lag) =20 +#define has_vpci(d) ({ (void)(d); false; }) + #endif /* __ASM_DOMAIN_H__ */ =20 /* diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/iore= q.h new file mode 100644 index 0000000..1c34df0 --- /dev/null +++ b/xen/include/asm-arm/hvm/ioreq.h @@ -0,0 +1,108 @@ +/* + * hvm.h: Hardware virtual machine assist interface definitions. + * + * Copyright (c) 2016 Citrix Systems Inc. + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . 
+ */ + +#ifndef __ASM_ARM_HVM_IOREQ_H__ +#define __ASM_ARM_HVM_IOREQ_H__ + +#include +#include + +#ifdef CONFIG_IOREQ_SERVER +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v); +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info); +#else +static inline enum io_state handle_ioserv(struct cpu_user_regs *regs, + struct vcpu *v) +{ + return IO_UNHANDLED; +} + +static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info) +{ + return IO_UNHANDLED; +} +#endif + +bool ioreq_handle_complete_mmio(void); + +static inline bool handle_pio(uint16_t port, unsigned int size, int dir) +{ + /* + * TODO: For Arm64, the main user will be PCI. So this should be + * implemented when we add support for vPCI. + */ + BUG(); + return true; +} + +static inline int arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s) +{ + return 0; +} + +static inline void msix_write_completion(struct vcpu *v) +{ +} + +static inline bool arch_handle_hvm_io_completion( + enum hvm_io_completion io_completion) +{ + ASSERT_UNREACHABLE(); + return true; +} + +static inline int hvm_get_ioreq_server_range_type(struct domain *d, + ioreq_t *p, + uint8_t *type, + uint64_t *addr) +{ + if ( p->type !=3D IOREQ_TYPE_COPY && p->type !=3D IOREQ_TYPE_PIO ) + return -EINVAL; + + *type =3D (p->type =3D=3D IOREQ_TYPE_PIO) ?
+ XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; + *addr =3D p->addr; + + return 0; +} + +static inline void arch_hvm_ioreq_init(struct domain *d) +{ +} + +static inline void arch_hvm_ioreq_destroy(struct domain *d) +{ +} + +#define IOREQ_IO_HANDLED IO_HANDLED +#define IOREQ_IO_UNHANDLED IO_UNHANDLED +#define IOREQ_IO_RETRY IO_RETRY + +#endif /* __ASM_ARM_HVM_IOREQ_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h index 8dbfb27..7ab873c 100644 --- a/xen/include/asm-arm/mmio.h +++ b/xen/include/asm-arm/mmio.h @@ -37,6 +37,7 @@ enum io_state IO_ABORT, /* The IO was handled by the helper and led to an abor= t. */ IO_HANDLED, /* The IO was successfully handled by the helper. */ IO_UNHANDLED, /* The IO was not handled by the helper. */ + IO_RETRY, /* Retry the emulation for some reason */ }; =20 typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info, diff --git a/xen/include/asm-arm/paging.h b/xen/include/asm-arm/paging.h index 6d1a000..0550c55 100644 --- a/xen/include/asm-arm/paging.h +++ b/xen/include/asm-arm/paging.h @@ -4,6 +4,10 @@ #define paging_mode_translate(d) (1) #define paging_mode_external(d) (1) =20 +static inline void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn) +{ +} + #endif /* XEN_PAGING_H */ =20 /* --=20 2.7.4 From nobody Tue Apr 23 11:30:41 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; 
From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Stefano Stabellini , Julien Grall , Volodymyr Babchuk , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Wei Liu , Roger Pau Monné , Julien Grall Subject: [PATCH V1 10/16] xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm Date: Thu, 10 Sep 2020 23:22:04 +0300
Message-Id: <1599769330-17656-11-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> From: Oleksandr Tyshchenko This patch implements reference counting of foreign entries in set_foreign_p2m_entry() on Arm. This is a mandatory action if we want to run an emulator (IOREQ server) in a domain other than dom0, as we can't trust it to do the right thing if it is not running in dom0. So we need to grab a reference on the page to avoid it disappearing. This was tested with the IOREQ feature to confirm that all the pages given to this function belong to a domain, so we can use the same approach as for the XENMAPSPACE_gmfn_foreign handling in xenmem_add_to_physmap_one(). This involves adding an extra parameter for the foreign domain to set_foreign_p2m_entry(). Also remove the restriction for the hardware domain in the common code when running on Arm.
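The grab-a-reference-then-roll-back pattern described above can be sketched as a tiny self-contained model. The struct and helpers below are hypothetical stand-ins for Xen's page_info, get_page() and put_page(), not the real implementations; the point is only the ordering: take the reference before mapping, and drop it again if inserting the p2m entry fails.

```c
#include <stdbool.h>

/*
 * Toy model of the reference-counting pattern used by
 * set_foreign_p2m_entry(): take a reference on the foreign page
 * before mapping it, and drop that reference if inserting the
 * p2m entry fails.  All names here are simplified stand-ins.
 */
struct toy_page { unsigned int refcount; };

static bool toy_get_page(struct toy_page *pg)
{
    /* Xen's get_page() can fail; modelled here as always succeeding. */
    pg->refcount++;
    return true;
}

static void toy_put_page(struct toy_page *pg)
{
    pg->refcount--;
}

/* add_rc simulates the return value of guest_physmap_add_entry(). */
static int toy_set_foreign_entry(struct toy_page *pg, int add_rc)
{
    if ( !toy_get_page(pg) )
        return -22;           /* page not owned by the foreign domain */

    if ( add_rc )
        toy_put_page(pg);     /* mapping failed: undo the reference */

    return add_rc;            /* propagate the mapping result */
}
```

On success the reference is kept for the lifetime of the mapping and only dropped on the unmap path; on failure the refcount is left exactly as it was found.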
Signed-off-by: Julien Grall Signed-off-by: Oleksandr Tyshchenko --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - new patch, was split from: "[RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features" - rewrite the logic to properly handle the reference in set_foreign_p2m_entry() instead of treating foreign entries as p2m_ram_rw --- --- xen/arch/arm/p2m.c | 16 ++++++++++++++++ xen/arch/x86/mm/p2m.c | 5 +++-- xen/common/memory.c | 4 +++- xen/include/asm-arm/p2m.h | 11 ++--------- xen/include/asm-x86/p2m.h | 3 ++- 5 files changed, 26 insertions(+), 13 deletions(-) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index ce59f2b..cb64fc5 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -1385,6 +1385,22 @@ int guest_physmap_remove_page(struct domain *d, gfn_= t gfn, mfn_t mfn, return p2m_remove_mapping(d, gfn, (1 << page_order), mfn); } =20 +int set_foreign_p2m_entry(struct domain *d, struct domain *fd, + unsigned long gfn, mfn_t mfn) +{ + struct page_info *page =3D mfn_to_page(mfn); + int rc; + + if ( !get_page(page, fd) ) + return -EINVAL; + + rc =3D guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_r= w); + if ( rc ) + put_page(page); + + return rc; +} + static struct page_info *p2m_allocate_root(void) { struct page_info *page; diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index db7bde0..f27f8a4 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -1320,7 +1320,8 @@ static int set_typed_p2m_entry(struct domain *d, unsi= gned long gfn_l, } =20 /* Set foreign mfn in the given guest's p2m table.
*/ -int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) +int set_foreign_p2m_entry(struct domain *d, struct domain *fd, + unsigned long gfn, mfn_t mfn) { return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign, p2m_get_hostp2m(d)->default_access); @@ -2619,7 +2620,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned lon= g fgfn, * will update the m2p table which will result in mfn -> gpfn of dom0 * and not fgfn of domU. */ - rc =3D set_foreign_p2m_entry(tdom, gpfn, mfn); + rc =3D set_foreign_p2m_entry(tdom, fdom, gpfn, mfn); if ( rc ) gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. " "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n", diff --git a/xen/common/memory.c b/xen/common/memory.c index e551fa6..78781f1 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -1155,6 +1155,7 @@ static int acquire_resource( xen_pfn_t gfn_list[ARRAY_SIZE(mfn_list)]; unsigned int i; =20 +#ifndef CONFIG_ARM /* * FIXME: Until foreign pages inserted into the P2M are properly * reference counted, it is unsafe to allow mapping of @@ -1162,13 +1163,14 @@ static int acquire_resource( */ if ( !is_hardware_domain(currd) ) return -EACCES; +#endif =20 if ( copy_from_guest(gfn_list, xmar.frame_list, xmar.nr_frames) ) rc =3D -EFAULT; =20 for ( i =3D 0; !rc && i < xmar.nr_frames; i++ ) { - rc =3D set_foreign_p2m_entry(currd, gfn_list[i], + rc =3D set_foreign_p2m_entry(currd, d, gfn_list[i], _mfn(mfn_list[i])); /* rc should be -EIO for any iteration other than the first */ if ( rc && i ) diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index 5fdb6e8..53ce373 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -381,15 +381,8 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsig= ned int order) return gfn_add(gfn, 1UL << order); } =20 -static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gf= n, - mfn_t mfn) -{ - /* - * NOTE: If this is implemented then proper reference counting of - * 
foreign entries will need to be implemented. - */ - return -EOPNOTSUPP; -} +int set_foreign_p2m_entry(struct domain *d, struct domain *fd, + unsigned long gfn, mfn_t mfn); =20 /* * A vCPU has cache enabled only when the MMU is enabled and data cache diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index 8abae34..23bdca1 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -635,7 +635,8 @@ int p2m_is_logdirty_range(struct p2m_domain *, unsigned= long start, unsigned long end); =20 /* Set foreign entry in the p2m table (for priv-mapping) */ -int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn); +int set_foreign_p2m_entry(struct domain *d, struct domain *fd, + unsigned long gfn, mfn_t mfn); =20 /* Set mmio addresses in the p2m table (for pass-through) */ int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn, --=20 2.7.4 From nobody Tue Apr 23 11:30:41 2024
From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Stefano Stabellini , Julien Grall , Volodymyr Babchuk , Paul Durrant , Jan Beulich , Andrew Cooper , Wei Liu , Roger Pau Monné , Julien Grall Subject: [PATCH V1 11/16] xen/ioreq: Introduce hvm_domain_has_ioreq_server() Date: Thu, 10 Sep 2020 23:22:05 +0300 Message-Id: <1599769330-17656-12-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
From: Oleksandr Tyshchenko This patch introduces a helper whose main purpose is to check if a domain is using IOREQ server(s). On Arm the benefit is to avoid calling handle_hvm_io_completion() (which implies iterating over all possible IOREQ servers anyway) on every return in leave_hypervisor_to_guest() when there are no active servers for the particular domain. This involves adding an extra per-domain variable to store the count of servers in use. Signed-off-by: Julien Grall Signed-off-by: Oleksandr Tyshchenko --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - new patch --- --- xen/arch/arm/traps.c | 15 +++++++++------ xen/common/ioreq.c | 9 ++++++++- xen/include/asm-arm/domain.h | 1 + xen/include/asm-x86/hvm/domain.h | 1 + xen/include/xen/ioreq.h | 5 +++++ 5 files changed, 24 insertions(+), 7 deletions(-) diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 121942c..6b37ae1 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -2263,14 +2263,17 @@ static bool check_for_vcpu_work(void) struct vcpu *v =3D current; =20 #ifdef CONFIG_IOREQ_SERVER - bool handled; + if ( hvm_domain_has_ioreq_server(v->domain) ) + { + bool handled; =20 - local_irq_enable(); - handled =3D handle_hvm_io_completion(v); - local_irq_disable(); + local_irq_enable(); + handled =3D handle_hvm_io_completion(v); + local_irq_disable(); =20 - if ( !handled ) - return true; + if ( !handled ) + return true; + } #endif =20 if ( likely(!v->arch.need_flush_to_ram) ) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index ce12751..4c3a835 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -38,9 +38,15 @@ static void set_ioreq_server(struct domain *d, unsigned = int id, struct hvm_ioreq_server *s) { ASSERT(id < MAX_NR_IOREQ_SERVERS); - ASSERT(!s ||
!d->arch.hvm.ioreq_server.server[id]); + ASSERT((!s && d->arch.hvm.ioreq_server.server[id]) || + (s && !d->arch.hvm.ioreq_server.server[id])); =20 d->arch.hvm.ioreq_server.server[id] =3D s; + + if ( s ) + d->arch.hvm.ioreq_server.nr_servers ++; + else + d->arch.hvm.ioreq_server.nr_servers --; } =20 /* @@ -1395,6 +1401,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buf= fered) void hvm_ioreq_init(struct domain *d) { spin_lock_init(&d->arch.hvm.ioreq_server.lock); + d->arch.hvm.ioreq_server.nr_servers =3D 0; =20 arch_hvm_ioreq_init(d); } diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index d1c48d7..0c0506a 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -31,6 +31,7 @@ struct hvm_domain struct { spinlock_t lock; struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS]; + unsigned int nr_servers; } ioreq_server; }; =20 diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/dom= ain.h index 765f35c..79e0afb 100644 --- a/xen/include/asm-x86/hvm/domain.h +++ b/xen/include/asm-x86/hvm/domain.h @@ -77,6 +77,7 @@ struct hvm_domain { struct { spinlock_t lock; struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS]; + unsigned int nr_servers; } ioreq_server; =20 /* Cached CF8 for guest PCI config cycles */ diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 102f7e8..25ce4c2 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -57,6 +57,11 @@ struct hvm_ioreq_server { uint8_t bufioreq_handling; }; =20 +static inline bool hvm_domain_has_ioreq_server(const struct domain *d) +{ + return (d->arch.hvm.ioreq_server.nr_servers > 0); +} + #define GET_IOREQ_SERVER(d, id) \ (d)->arch.hvm.ioreq_server.server[id] =20 --=20 2.7.4 From nobody Tue Apr 23 11:30:41 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; 
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Ian Jackson, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Volodymyr Babchuk, Julien Grall
Subject: [PATCH V1 12/16] xen/dm: Introduce xendevicemodel_set_irq_level DM op
Date: Thu, 10 Sep 2020 23:22:06 +0300
Message-Id: <1599769330-17656-13-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds the ability for a device emulator to notify the other
end (some entity running in the guest) using an SPI, and implements
the Arm-specific bits for it. The proposed interface allows the
emulator to set the logical level of one of a domain's IRQ lines.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Please note, I left the interface untouched since there is still an
open discussion about what interface to use and what information to
pass to the hypervisor; in particular, whether we should abstract
away the state of the line or not.
Changes RFC -> V1:
   - check incoming parameters in arch_dm_op()
   - add explicit padding to struct xen_dm_op_set_irq_level

---
---
 tools/libs/devicemodel/core.c                   | 18 +++++++++++++
 tools/libs/devicemodel/include/xendevicemodel.h |  4 +++
 tools/libs/devicemodel/libxendevicemodel.map    |  1 +
 xen/arch/arm/dm.c                               | 36 +++++++++++++++++++++++-
 xen/common/dm.c                                 |  1 +
 xen/include/public/hvm/dm_op.h                  | 15 +++++++++++
 6 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 4d40639..30bd79f 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -430,6 +430,24 @@ int xendevicemodel_set_isa_irq_level(
     return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }

+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, uint32_t irq,
+    unsigned int level)
+{
+    struct xen_dm_op op;
+    struct xen_dm_op_set_irq_level *data;
+
+    memset(&op, 0, sizeof(op));
+
+    op.op = XEN_DMOP_set_irq_level;
+    data = &op.u.set_irq_level;
+
+    data->irq = irq;
+    data->level = level;
+
+    return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_set_pci_link_route(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t link, uint8_t irq)
 {
diff --git a/tools/libs/devicemodel/include/xendevicemodel.h b/tools/libs/devicemodel/include/xendevicemodel.h
index e877f5c..c06b3c8 100644
--- a/tools/libs/devicemodel/include/xendevicemodel.h
+++ b/tools/libs/devicemodel/include/xendevicemodel.h
@@ -209,6 +209,10 @@ int xendevicemodel_set_isa_irq_level(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t irq,
     unsigned int level);

+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int irq,
+    unsigned int level);
+
 /**
  * This function maps a PCI INTx line to an IRQ line.
  *
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index 561c62d..a0c3012 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -32,6 +32,7 @@ VERS_1.2 {
 	global:
 		xendevicemodel_relocate_memory;
 		xendevicemodel_pin_memory_cacheattr;
+		xendevicemodel_set_irq_level;
 } VERS_1.1;

 VERS_1.3 {
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index eb20344..428ef98 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -15,11 +15,45 @@
  */

 #include
+#include

 int arch_dm_op(struct xen_dm_op *op, struct domain *d,
                const struct dmop_args *op_args, bool *const_op)
 {
-    return -EOPNOTSUPP;
+    int rc;
+
+    switch ( op->op )
+    {
+    case XEN_DMOP_set_irq_level:
+    {
+        const struct xen_dm_op_set_irq_level *data =
+            &op->u.set_irq_level;
+
+        /* Only SPIs are supported */
+        if ( (data->irq < NR_LOCAL_IRQS) || (data->irq >= vgic_num_irqs(d)) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        if ( data->level != 0 && data->level != 1 )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        vgic_inject_irq(d, NULL, data->irq, data->level);
+        rc = 0;
+        break;
+    }
+
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    return rc;
 }

 /*
diff --git a/xen/common/dm.c b/xen/common/dm.c
index 060731d..c55e042 100644
--- a/xen/common/dm.c
+++ b/xen/common/dm.c
@@ -47,6 +47,7 @@ static int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_remote_shutdown]      = sizeof(struct xen_dm_op_remote_shutdown),
         [XEN_DMOP_relocate_memory]      = sizeof(struct xen_dm_op_relocate_memory),
         [XEN_DMOP_pin_memory_cacheattr] = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+        [XEN_DMOP_set_irq_level]        = sizeof(struct xen_dm_op_set_irq_level),
     };

     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index fd00e9d..39567bf 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -417,6
+417,20 @@ struct xen_dm_op_pin_memory_cacheattr {
     uint32_t pad;
 };

+/*
+ * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
+ * IRQ lines.
+ * XXX Handle PPIs.
+ */
+#define XEN_DMOP_set_irq_level 19
+
+struct xen_dm_op_set_irq_level {
+    uint32_t irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+    uint8_t pad[3];
+};
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -430,6 +444,7 @@ struct xen_dm_op {
         struct xen_dm_op_track_dirty_vram track_dirty_vram;
         struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
         struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
+        struct xen_dm_op_set_irq_level set_irq_level;
         struct xen_dm_op_set_pci_link_route set_pci_link_route;
         struct xen_dm_op_modified_memory modified_memory;
         struct xen_dm_op_set_mem_type set_mem_type;
-- 
2.7.4

From nobody Tue Apr 23 11:30:41 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Wei Liu, Roger Pau Monné, Paul Durrant, Julien Grall
Subject: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache handling common
Date: Thu, 10 Sep 2020 23:22:07 +0300
Message-Id: <1599769330-17656-14-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
From: Oleksandr Tyshchenko

As IOREQ is a common feature now, and we also need to invalidate the
qemu mapcache on XENMEM_decrease_reservation on Arm, this patch moves
this handling to the common code and moves the per-domain
qemu_mapcache_invalidate variable out of the arch sub-struct.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - move send_invalidate_req() to the common code
   - update patch subject/description
   - move qemu_mapcache_invalidate out of the arch sub-struct, update checks
   - remove #if defined(CONFIG_ARM64) from the common code

---
---
 xen/arch/arm/traps.c             |  6 ++++++
 xen/arch/x86/hvm/hypercall.c     |  9 ++++-----
 xen/arch/x86/hvm/io.c            | 14 --------------
 xen/common/ioreq.c               | 14 ++++++++++++++
 xen/common/memory.c              |  5 +++++
 xen/include/asm-x86/hvm/domain.h |  1 -
 xen/include/asm-x86/hvm/io.h     |  1 -
 xen/include/xen/ioreq.h          |  2 ++
 xen/include/xen/sched.h          |  2 ++
 9 files changed, 33 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 6b37ae1..de48b2f 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1490,6 +1490,12 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
     /* Ensure the hypercall trap instruction is re-executed. */
     if ( current->hcall_preempted )
         regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
+
+#ifdef CONFIG_IOREQ_SERVER
+    if ( unlikely(current->domain->qemu_mapcache_invalidate) &&
+         test_and_clear_bool(current->domain->qemu_mapcache_invalidate) )
+        send_invalidate_req();
+#endif
 }

 void arch_hypercall_tasklet_result(struct vcpu *v, long res)
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index b6ccaf4..45fc20b 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -18,8 +18,10 @@
  *
  * Copyright (c) 2017 Citrix Systems Ltd.
  */
+
 #include
 #include
+#include
 #include

 #include
@@ -46,9 +48,6 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     else
         rc = compat_memory_op(cmd, arg);

-    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
-
     return rc;
 }

@@ -329,8 +328,8 @@ int hvm_hypercall(struct cpu_user_regs *regs)
     if ( curr->hcall_preempted )
         return HVM_HCALL_preempted;

-    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
-         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
+    if ( unlikely(currd->qemu_mapcache_invalidate) &&
+         test_and_clear_bool(currd->qemu_mapcache_invalidate) )
         send_invalidate_req();

     return HVM_HCALL_completed;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 14f8c89..e659a53 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -64,20 +64,6 @@ void send_timeoffset_req(unsigned long timeoff)
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }

-/* Ask ioemu mapcache to invalidate mappings. */
-void send_invalidate_req(void)
-{
-    ioreq_t p = {
-        .type = IOREQ_TYPE_INVALIDATE,
-        .size = 4,
-        .dir = IOREQ_WRITE,
-        .data = ~0UL, /* flush all */
-    };
-
-    if ( hvm_broadcast_ioreq(&p, false) != 0 )
-        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
-}
-
 bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 4c3a835..e24a481 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -34,6 +34,20 @@
 #include
 #include

+/* Ask ioemu mapcache to invalidate mappings. */
+void send_invalidate_req(void)
+{
+    ioreq_t p = {
+        .type = IOREQ_TYPE_INVALIDATE,
+        .size = 4,
+        .dir = IOREQ_WRITE,
+        .data = ~0UL, /* flush all */
+    };
+
+    if ( hvm_broadcast_ioreq(&p, false) != 0 )
+        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct hvm_ioreq_server *s)
 {
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 78781f1..9d98252 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }

+#ifdef CONFIG_IOREQ_SERVER
+    if ( op == XENMEM_decrease_reservation )
+        curr_d->qemu_mapcache_invalidate = true;
+#endif
+
     return rc;
 }

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 79e0afb..11d5cc1 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -131,7 +131,6 @@ struct hvm_domain {

     struct viridian_domain *viridian;

-    bool_t qemu_mapcache_invalidate;
     bool_t is_s3_suspended;

     /*
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index fb64294..3da0136 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -97,7 +97,6 @@ bool relocate_portio_handler(
     unsigned int size);

 void send_timeoffset_req(unsigned long timeoff);
-void send_invalidate_req(void);
 bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                   struct npfec);
 bool handle_pio(uint16_t port, unsigned int size, int dir);
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 25ce4c2..5ade9b0 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -97,6 +97,8 @@ static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
            (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
 }

+void send_invalidate_req(void);
+
 bool hvm_io_pending(struct vcpu *v);
 bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ac53519..4c52a04 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -512,6 +512,8 @@ struct domain
     /* Argo interdomain communication support */
     struct argo_domain *argo;
 #endif
+
+    bool_t qemu_mapcache_invalidate;
 };

 static inline struct page_list_head *page_to_list(
-- 
2.7.4

From nobody Tue Apr 23 11:30:41 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V1 14/16] xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
Date: Thu, 10 Sep 2020 23:22:08 +0300
Message-Id: <1599769330-17656-15-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
From: Oleksandr Tyshchenko

The cmpxchg() in hvm_send_buffered_ioreq() operates on memory shared
with the emulator. In order to be on the safe side we need to switch
to guest_cmpxchg64() to prevent a domain from DoSing Xen on Arm.

CC: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this patch depends on the following patch, which is under
review:
https://patchwork.kernel.org/patch/11715559/

Changes RFC -> V1:
   - new patch

---
---
 xen/common/ioreq.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index e24a481..645d8a1 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -30,6 +30,8 @@
 #include
 #include

+#include
+
 #include
 #include
 #include
@@ -1325,7 +1327,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)

         new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
         new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        cmpxchg(&pg->ptrs.full, old.full, new.full);
+        guest_cmpxchg64(d, &pg->ptrs.full, old.full, new.full);
     }

     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
-- 
2.7.4

From nobody Tue Apr 23 11:30:41 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Ian Jackson, Wei Liu, Anthony PERARD, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Julien Grall
Subject: [PATCH V1 15/16] libxl: Introduce basic virtio-mmio support on Arm
Date: Thu, 10 Sep 2020 23:22:09 +0300
Message-Id: <1599769330-17656-16-git-send-email-olekstysh@gmail.com>
In-Reply-To:
 <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch creates a specific device node in the guest device tree with
the allocated MMIO range and SPI interrupt if the 'virtio' property is
present in the domain config.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

---
 tools/libxl/libxl_arm.c       | 58 +++++++++++++++++++++++++++++++++++++++++--
 tools/libxl/libxl_types.idl   |  1 +
 tools/xl/xl_parse.c           |  1 +
 xen/include/public/arch-arm.h |  5 ++++
 4 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29..36139d9 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -27,8 +27,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq;
+    bool vuart_enabled = false, virtio_enabled = false;

     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -40,6 +40,17 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }

+    /*
+     * XXX: Handle virtio properly
+     * A proper solution would be for the toolstack to allocate the interrupts
+     * used by each virtio backend and let the backend know which one is used
+     */
+    if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
+        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+        virtio_enabled = true;
+    }
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -59,6 +70,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }

+        /* The same check as for vpl011 */
+        if (virtio_enabled && irq == virtio_irq) {
+            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;

@@ -659,6 +676,39 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
     return 0;
 }

+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    /* Placeholder for virtio@ + a 64-bit number + \0 */
+    char buf[24];
+
+    snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base);
+    res = fdt_begin_node(fdt, buf);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, GUEST_VIRTIO_MMIO_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -962,6 +1012,9 @@ next_resize:
     if (info->tee == LIBXL_TEE_TYPE_OPTEE)
         FDT(
 make_optee_node(gc, fdt) );

+    if (libxl_defbool_val(info->arch_arm.virtio))
+        FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+
     if (pfdt)
         FDT( copy_partial_fdt(gc, fdt, pfdt) );

@@ -1179,6 +1232,7 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
 {
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
+    libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);

     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return;

diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9d3f05f..b054bf9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -639,6 +639,7 @@ libxl_domain_build_info = Struct("domain_build_info",[

     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
+                               ("virtio", libxl_defbool),
                                ("vuart", libxl_vuart_type),
                               ])),
     # Alternate p2m is not bound to any architecture or guest type, as it is

diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 61b4ef7..b8306aa 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2579,6 +2579,7 @@ skip_usbdev:
     }

     xlu_cfg_get_defbool(config, "dm_restrict", &b_info->dm_restrict, 0);
+    xlu_cfg_get_defbool(config, "virtio", &b_info->arch_arm.virtio, 0);

     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c365b1b..be7595f 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -464,6 +464,11 @@ typedef uint64_t xen_callback_t;
 #define PSCI_cpu_on     2
 #define PSCI_migrate    3

+/* VirtIO MMIO definitions */
+#define GUEST_VIRTIO_MMIO_BASE  xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE  xen_mk_ullong(0x200)
+#define GUEST_VIRTIO_MMIO_SPI   33
+
 #endif

 #ifndef __ASSEMBLY__
-- 
2.7.4

From nobody Tue Apr 23 11:30:41 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Ian Jackson, Wei Liu, Anthony PERARD,
 Julien Grall, Stefano Stabellini
Subject: [PATCH V1 16/16] [RFC] libxl: Add support for virtio-disk configuration
Date: Thu, 10 Sep 2020 23:22:10 +0300
Message-Id: <1599769330-17656-17-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds basic support for configuring and assisting the
virtio-disk backend (emulator), which is intended to run outside of QEMU
and can be run in any domain.

Xenstore was chosen as the communication interface so that an emulator
running in a non-toolstack domain can obtain its configuration either by
reading Xenstore directly or by receiving command line parameters (an
updated 'xl devd' running in the same domain would read Xenstore
beforehand and call the backend executable with the required arguments).
An example of domain configuration (two disks are assigned to the guest,
the latter in read-only mode):

vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

Where the per-disk Xenstore entries are:
- filename and readonly flag (configured via the "vdisk" property)
- base and irq (allocated dynamically)

Besides handling the 'visible' parameters described in the configuration
file, the patch also allocates virtio-mmio specific ones (irq and base)
for each device and writes them into Xenstore. These parameters are
unique per guest domain; they are allocated at domain creation time and
passed through to the emulator. Each VirtIO device has at least one pair
of these parameters.

TODO:
1. The extra "virtio" property could be removed.
2. Update documentation.

Signed-off-by: Oleksandr Tyshchenko

---
Changes RFC -> V1:
   - no changes

Please note, there is a real concern about VirtIO interrupt allocation.
Just to copy here what Stefano said in the RFC thread:

So, if we end up allocating let's say 6 virtio interrupts for a domain,
the chance of a clash with a physical interrupt of a passthrough device
is real.
I am not entirely sure how to solve it, but these are a few ideas:
- choosing virtio interrupts that are less likely to conflict
  (maybe > 1000)
- make the virtio irq (optionally) configurable so that a user could
  override the default irq and specify one that doesn't conflict
- implementing support for virq != pirq (even the xl interface doesn't
  allow to specify the virq number for passthrough devices, see "irqs")

---
 tools/libxl/Makefile                 |   4 +-
 tools/libxl/libxl_arm.c              |  56 ++++++++++++++---
 tools/libxl/libxl_create.c           |   1 +
 tools/libxl/libxl_internal.h         |   1 +
 tools/libxl/libxl_types.idl          |  15 +++++
 tools/libxl/libxl_types_internal.idl |   1 +
 tools/libxl/libxl_virtio_disk.c      | 109 +++++++++++++++++++++++++++++++++
 tools/xl/Makefile                    |   2 +-
 tools/xl/xl.h                        |   3 +
 tools/xl/xl_cmdtable.c               |  15 +++++
 tools/xl/xl_parse.c                  | 115 ++++++++++++++++++++++++++++++++++
 tools/xl/xl_virtio_disk.c            |  46 ++++++++++++++
 12 files changed, 356 insertions(+), 12 deletions(-)
 create mode 100644 tools/libxl/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 0e8dfc6..8ab6c41 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -141,7 +141,9 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_vtpm.o libxl_nic.o libxl_disk.o libxl_console.o \
 			libxl_cpupool.o libxl_mem.o libxl_sched.o libxl_tmem.o \
 			libxl_9pfs.o libxl_domain.o libxl_vdispl.o \
-			libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o $(LIBXL_OBJS-y)
+			libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o \
+			libxl_virtio_disk.o $(LIBXL_OBJS-y)
+
 LIBXL_OBJS += libxl_genid.o
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 36139d9..442b3b9 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -9,6 +9,12 @@
 #include
 #include

+#ifndef container_of
+#define container_of(ptr, type, member) ({          \
+    typeof( ((type *)0)->member ) *__mptr = (ptr);  \
+    (type *)( (char *)__mptr - offsetof(type,member) );})
+#endif
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -40,14 +46,32 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }

-    /*
-     * XXX: Handle virtio properly
-     * A proper solution would be for the toolstack to allocate the interrupts
-     * used by each virtio backend and let the backend know which one is used
-     */
     if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
-        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        uint64_t virtio_base;
+        libxl_device_virtio_disk *virtio_disk;
+
+        virtio_base = GUEST_VIRTIO_MMIO_BASE;
         virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+
+        if (!d_config->num_virtio_disks) {
+            LOG(ERROR, "Virtio is enabled, but no Virtio devices present\n");
+            return ERROR_FAIL;
+        }
+        virtio_disk = &d_config->virtio_disks[0];
+
+        for (i = 0; i < virtio_disk->num_disks; i++) {
+            virtio_disk->disks[i].base = virtio_base;
+            virtio_disk->disks[i].irq = virtio_irq;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params: IRQ %u BASE 0x%"PRIx64,
+                virtio_irq, virtio_base);
+
+            virtio_irq ++;
+            virtio_base += GUEST_VIRTIO_MMIO_SIZE;
+        }
+        virtio_irq --;
+
+        nr_spis += (virtio_irq - 32) + 1;
         virtio_enabled = true;
     }

@@ -71,8 +95,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         }

         /* The same check as for vpl011 */
-        if (virtio_enabled && irq == virtio_irq) {
-            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+        if (virtio_enabled &&
+            (irq >= GUEST_VIRTIO_MMIO_SPI && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio IRQ range\n", irq);
             return ERROR_FAIL;
         }

@@ -1012,8 +1037,19 @@ next_resize:
     if (info->tee == LIBXL_TEE_TYPE_OPTEE)
         FDT( make_optee_node(gc, fdt) );

-    if (libxl_defbool_val(info->arch_arm.virtio))
-        FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+    if (libxl_defbool_val(info->arch_arm.virtio)) {
+        libxl_domain_config *d_config =
+            container_of(info, libxl_domain_config, b_info);
+        libxl_device_virtio_disk *virtio_disk = &d_config->virtio_disks[0];
+        unsigned int i;
+
+        for (i = 0; i < virtio_disk->num_disks; i++) {
+            uint64_t base = virtio_disk->disks[i].base;
+            uint32_t irq = virtio_disk->disks[i].irq;
+
+            FDT( make_virtio_mmio_node(gc, fdt, base, irq) );
+        }
+    }

     if (pfdt)
         FDT( copy_partial_fdt(gc, fdt, pfdt) );

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 2814818..8a0651e 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1817,6 +1817,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
+    &libxl__virtio_disk_devtype,
     NULL
 };

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 94a2317..4e2024d 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -3988,6 +3988,7 @@ extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
 extern const libxl__device_type libxl__vsnd_devtype;
+extern const libxl__device_type libxl__virtio_disk_devtype;

 extern const libxl__device_type *device_type_tbl[];

diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index b054bf9..5f8a3ff 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -935,6 +935,20 @@ libxl_device_vsnd = Struct("device_vsnd", [
     ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms"))
     ])

+libxl_virtio_disk_param = Struct("virtio_disk_param", [
+    ("filename", string),
+    ("readonly", bool),
+    ("irq", uint32),
+    ("base", uint64),
+    ])
+
+libxl_device_virtio_disk = Struct("device_virtio_disk", [
+    ("backend_domid", libxl_domid),
+    ("backend_domname", string),
+    ("devid", libxl_devid),
+    ("disks",
             Array(libxl_virtio_disk_param, "num_disks")),
+    ])
+
 libxl_domain_config = Struct("domain_config", [
     ("c_info", libxl_domain_create_info),
     ("b_info", libxl_domain_build_info),
@@ -951,6 +965,7 @@ libxl_domain_config = Struct("domain_config", [
     ("pvcallsifs", Array(libxl_device_pvcallsif, "num_pvcallsifs")),
     ("vdispls", Array(libxl_device_vdispl, "num_vdispls")),
     ("vsnds", Array(libxl_device_vsnd, "num_vsnds")),
+    ("virtio_disks", Array(libxl_device_virtio_disk, "num_virtio_disks")),
     # a channel manifests as a console with a name,
     # see docs/misc/channels.txt
     ("channels", Array(libxl_device_channel, "num_channels")),

diff --git a/tools/libxl/libxl_types_internal.idl b/tools/libxl/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libxl/libxl_types_internal.idl
+++ b/tools/libxl/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])

 libxl__console_backend = Enumeration("console_backend", [

diff --git a/tools/libxl/libxl_virtio_disk.c b/tools/libxl/libxl_virtio_disk.c
new file mode 100644
index 0000000..25e7f1a
--- /dev/null
+++ b/tools/libxl/libxl_virtio_disk.c
@@ -0,0 +1,109 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_internal.h"
+
+static int libxl__device_virtio_disk_setdefault(libxl__gc *gc, uint32_t domid,
+                                                libxl_device_virtio_disk *virtio_disk,
+                                                bool hotplug)
+{
+    return libxl__resolve_domid(gc, virtio_disk->backend_domname,
+                                &virtio_disk->backend_domid);
+}
+
+static int libxl__virtio_disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
+                                            libxl_devid devid,
+                                            libxl_device_virtio_disk *virtio_disk)
+{
+    const char *be_path;
+    int rc;
+
+    virtio_disk->devid = devid;
+    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
+                                  GCSPRINTF("%s/backend", libxl_path),
+                                  &be_path);
+    if (rc) return rc;
+
+    rc = libxl__backendpath_parse_domid(gc, be_path, &virtio_disk->backend_domid);
+    if (rc) return rc;
+
+    return 0;
+}
+
+static void libxl__update_config_virtio_disk(libxl__gc *gc,
+                                             libxl_device_virtio_disk *dst,
+                                             libxl_device_virtio_disk *src)
+{
+    dst->devid = src->devid;
+}
+
+static int libxl_device_virtio_disk_compare(libxl_device_virtio_disk *d1,
+                                            libxl_device_virtio_disk *d2)
+{
+    return COMPARE_DEVID(d1, d2);
+}
+
+static void libxl__device_virtio_disk_add(libxl__egc *egc, uint32_t domid,
+                                          libxl_device_virtio_disk *virtio_disk,
+                                          libxl__ao_device *aodev)
+{
+    libxl__device_add_async(egc, domid, &libxl__virtio_disk_devtype, virtio_disk, aodev);
+}
+
+static int libxl__set_xenstore_virtio_disk(libxl__gc *gc, uint32_t domid,
+                                           libxl_device_virtio_disk *virtio_disk,
+                                           flexarray_t *back, flexarray_t *front,
+                                           flexarray_t *ro_front)
+{
+    int rc;
+    unsigned int i;
+
+    for (i = 0; i < virtio_disk->num_disks; i++) {
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/filename", i),
+                                   GCSPRINTF("%s", virtio_disk->disks[i].filename));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/readonly", i),
+                                   GCSPRINTF("%d", virtio_disk->disks[i].readonly));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/base", i),
+                                   GCSPRINTF("%lu", virtio_disk->disks[i].base));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/irq", i),
+                                   GCSPRINTF("%u", virtio_disk->disks[i].irq));
+        if (rc) return rc;
+    }
+
+    return 0;
+}
+
+static LIBXL_DEFINE_UPDATE_DEVID(virtio_disk)
+static LIBXL_DEFINE_DEVICE_FROM_TYPE(virtio_disk)
+static LIBXL_DEFINE_DEVICES_ADD(virtio_disk)
+
+DEFINE_DEVICE_TYPE_STRUCT(virtio_disk, VIRTIO_DISK,
+    .update_config = (device_update_config_fn_t) libxl__update_config_virtio_disk,
+    .from_xenstore = (device_from_xenstore_fn_t) libxl__virtio_disk_from_xenstore,
+    .set_xenstore_config = (device_set_xenstore_config_fn_t) libxl__set_xenstore_virtio_disk
+);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */

diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index af4912e..38e4701 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -22,7 +22,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o
 XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o
 XL_OBJS += xl_info.o xl_console.o xl_misc.o
 XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o
-XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o
+XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o xl_virtio_disk.o

 $(XL_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
 $(XL_OBJS): CFLAGS += $(CFLAGS_XL)

diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 06569c6..3d26f19 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -178,6 +178,9 @@ int main_vsnddetach(int argc, char **argv);
 int main_vkbattach(int argc, char **argv);
 int main_vkblist(int argc, char **argv);
 int main_vkbdetach(int argc, char **argv);
+int main_virtio_diskattach(int argc, char **argv);
+int main_virtio_disklist(int argc, char **argv);
+int main_virtio_diskdetach(int argc, char **argv);
 int main_usbctrl_attach(int argc, char **argv);
 int main_usbctrl_detach(int argc, char **argv);
 int main_usbdev_attach(int argc, char **argv);

diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 0833539..2bdf0b7 100644
---
 a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -434,6 +434,21 @@ struct cmd_spec cmd_table[] = {
       "Destroy a domain's virtual sound device",
       " ",
     },
+    { "virtio-disk-attach",
+      &main_virtio_diskattach, 1, 1,
+      "Create a new virtio block device",
+      " TBD\n"
+    },
+    { "virtio-disk-list",
+      &main_virtio_disklist, 0, 0,
+      "List virtio block devices for a domain",
+      "",
+    },
+    { "virtio-disk-detach",
+      &main_virtio_diskdetach, 0, 1,
+      "Destroy a domain's virtio block device",
+      " ",
+    },
     { "uptime",
       &main_uptime, 0, 0,
       "Print uptime for all/some domains",

diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index b8306aa..72c0a65 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1202,6 +1202,120 @@ out:
     if (rc) exit(EXIT_FAILURE);
 }

+#define MAX_VIRTIO_DISKS 4
+
+static int parse_virtio_disk_config(libxl_device_virtio_disk *virtio_disk, char *token)
+{
+    char *oparg;
+    libxl_string_list disks = NULL;
+    int i, rc;
+
+    if (MATCH_OPTION("backend", token, oparg)) {
+        virtio_disk->backend_domname = strdup(oparg);
+    } else if (MATCH_OPTION("disks", token, oparg)) {
+        split_string_into_string_list(oparg, ";", &disks);
+
+        virtio_disk->num_disks = libxl_string_list_length(&disks);
+        if (virtio_disk->num_disks > MAX_VIRTIO_DISKS) {
+            fprintf(stderr, "vdisk: currently only %d disks are supported",
+                    MAX_VIRTIO_DISKS);
+            return 1;
+        }
+        virtio_disk->disks = xcalloc(virtio_disk->num_disks,
+                                     sizeof(*virtio_disk->disks));
+
+        for(i = 0; i < virtio_disk->num_disks; i++) {
+            char *disk_opt;
+
+            rc = split_string_into_pair(disks[i], ":", &disk_opt,
+                                        &virtio_disk->disks[i].filename);
+            if (rc) {
+                fprintf(stderr, "vdisk: failed to split \"%s\" into pair\n",
+                        disks[i]);
+                goto out;
+            }
+
+            if (!strcmp(disk_opt, "ro"))
+                virtio_disk->disks[i].readonly = 1;
+            else if (!strcmp(disk_opt, "rw"))
+                virtio_disk->disks[i].readonly = 0;
+            else {
+                fprintf(stderr, "vdisk: failed to parse \"%s\" disk option\n",
+                        disk_opt);
+                rc = 1;
+            }
+
+            free(disk_opt);
+
+            if (rc) goto out;
+        }
+    } else {
+        fprintf(stderr, "Unknown string \"%s\" in vdisk spec\n", token);
+        rc = 1; goto out;
+    }
+
+    rc = 0;
+
+out:
+    libxl_string_list_dispose(&disks);
+    return rc;
+}
+
+static void parse_virtio_disk_list(const XLU_Config *config,
+                                   libxl_domain_config *d_config)
+{
+    XLU_ConfigList *virtio_disks;
+    const char *item;
+    char *buf = NULL;
+    int rc;
+
+    if (!xlu_cfg_get_list (config, "vdisk", &virtio_disks, 0, 0)) {
+        libxl_domain_build_info *b_info = &d_config->b_info;
+        int entry = 0;
+
+        /* XXX Remove an extra property */
+        libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);
+        if (!libxl_defbool_val(b_info->arch_arm.virtio)) {
+            fprintf(stderr, "Virtio device requires Virtio property to be set\n");
+            exit(EXIT_FAILURE);
+        }
+
+        while ((item = xlu_cfg_get_listitem(virtio_disks, entry)) != NULL) {
+            libxl_device_virtio_disk *virtio_disk;
+            char *p;
+
+            virtio_disk = ARRAY_EXTEND_INIT(d_config->virtio_disks,
+                                            d_config->num_virtio_disks,
+                                            libxl_device_virtio_disk_init);
+
+            buf = strdup(item);
+
+            p = strtok (buf, ",");
+            while (p != NULL)
+            {
+                while (*p == ' ') p++;
+
+                rc = parse_virtio_disk_config(virtio_disk, p);
+                if (rc) goto out;
+
+                p = strtok (NULL, ",");
+            }
+
+            entry++;
+
+            if (virtio_disk->num_disks == 0) {
+                fprintf(stderr, "At least one virtio disk should be specified\n");
+                rc = 1; goto out;
+            }
+        }
+    }
+
+    rc = 0;
+
+out:
+    free(buf);
+    if (rc) exit(EXIT_FAILURE);
+}
+
 void parse_config_data(const char *config_source,
                        const char *config_data,
                        int config_len,
@@ -2732,6 +2846,7 @@ skip_usbdev:
     }

     parse_vkb_list(config, d_config);
+    parse_virtio_disk_list(config, d_config);

     xlu_cfg_get_defbool(config, "xend_suspend_evtchn_compat",
                         &c_info->xend_suspend_evtchn_compat, 0);

diff --git a/tools/xl/xl_virtio_disk.c b/tools/xl/xl_virtio_disk.c
new file mode 100644
index 0000000..808a7da
--- /dev/null
+++ b/tools/xl/xl_virtio_disk.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include
+
+#include
+#include
+#include
+
+#include "xl.h"
+#include "xl_utils.h"
+#include "xl_parse.h"
+
+int main_virtio_diskattach(int argc, char **argv)
+{
+    return 0;
+}
+
+int main_virtio_disklist(int argc, char **argv)
+{
+    return 0;
+}
+
+int main_virtio_diskdetach(int argc, char **argv)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4