From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall,
    Stefano Stabellini, Jun Nakajima, Kevin Tian, Julien Grall
Subject: [PATCH V4 10/24] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
Date: Tue, 12 Jan 2021 23:52:18 +0200
Message-Id: <1610488352-18494-11-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

The IOREQ is a common feature now and these fields will be used on Arm
as-is. Move them to the common struct vcpu as part of a new struct
vcpu_io, and drop the duplicating "io" prefixes. Also move enum
hvm_io_completion to xen/sched.h and drop the "hvm" prefixes. This
patch completely removes the layering violation in the common code.
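For quick reference, here is a minimal, self-contained C sketch of the
layout this patch introduces. The names are taken from the
xen/include/xen/sched.h hunk below; the "struct ioreq" here is a
simplified stand-in for the real ioreq_t, so this is an illustration of
the shape of the change, not the actual Xen headers:

    /* Simplified sketch of the common per-vCPU I/O state added by this patch. */
    enum vio_completion {
        VIO_no_completion,
        VIO_mmio_completion,
        VIO_pio_completion,
        VIO_realmode_completion,       /* x86 only; guarded by CONFIG_X86 */
    };

    struct ioreq { unsigned int state; };  /* stand-in for the real ioreq_t */

    struct vcpu_io {
        enum vio_completion completion;    /* how to finish the in-flight request */
        struct ioreq req;                  /* I/O request in flight to the device model */
    };

    struct vcpu {
        /* ... arch-independent fields ... */
        struct vcpu_io io;                 /* built only with CONFIG_IOREQ_SERVER */
    };

The effect is that common ioreq code can reach the request/completion
state through v->io, while the x86-only MMIO emulation cache and related
fields stay behind in struct hvm_vcpu_io.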
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
Acked-by: Jan Beulich
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - update patch according to the "legacy interface" being x86-specific
   - update patch description
   - drop the "io" prefixes from the field names
   - wrap IO_realmode_completion

Changes V3 -> V4:
   - rename all hvm_vcpu_io locals to "hvio"
   - rename according to the new renaming scheme IO_ -> VIO_ (io_ -> vio_)
   - drop the "io" prefix from io_completion locals
---
 xen/arch/x86/hvm/emulate.c        | 210 ++++++++++++++++++-------------------
 xen/arch/x86/hvm/hvm.c            |   2 +-
 xen/arch/x86/hvm/io.c             |  32 +++---
 xen/arch/x86/hvm/ioreq.c          |   6 +-
 xen/arch/x86/hvm/svm/nestedsvm.c  |   2 +-
 xen/arch/x86/hvm/vmx/realmode.c   |   8 +-
 xen/common/ioreq.c                |  26 ++---
 xen/include/asm-x86/hvm/emulate.h |   2 +-
 xen/include/asm-x86/hvm/vcpu.h    |  11 --
 xen/include/xen/ioreq.h           |   2 +-
 xen/include/xen/sched.h           |  19 ++++
 11 files changed, 164 insertions(+), 156 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 4d62199..21051ce 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -140,15 +140,15 @@ static const struct hvm_io_handler ioreq_server_handler = {
  */
 void hvmemul_cancel(struct vcpu *v)
 {
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    struct hvm_vcpu_io *hvio = &v->arch.hvm.hvm_io;
 
-    vio->io_req.state = STATE_IOREQ_NONE;
-    vio->io_completion = HVMIO_no_completion;
-    vio->mmio_cache_count = 0;
-    vio->mmio_insn_bytes = 0;
-    vio->mmio_access = (struct npfec){};
-    vio->mmio_retry = false;
-    vio->g2m_ioport = NULL;
+    v->io.req.state = STATE_IOREQ_NONE;
+    v->io.completion = VIO_no_completion;
+    hvio->mmio_cache_count = 0;
+    hvio->mmio_insn_bytes = 0;
+    hvio->mmio_access = (struct npfec){};
+    hvio->mmio_retry = false;
+    hvio->g2m_ioport = NULL;
 
     hvmemul_cache_disable(v);
 }
@@ -159,7 +159,7 @@ static int hvmemul_do_io(
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &curr->io;
     ioreq_t p = {
         .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
         .addr = addr,
@@ -184,13 +184,13 @@ static int hvmemul_do_io(
         return X86EMUL_UNHANDLEABLE;
     }
 
-    switch ( vio->io_req.state )
+    switch ( vio->req.state )
     {
     case STATE_IOREQ_NONE:
         break;
    case STATE_IORESP_READY:
-        vio->io_req.state = STATE_IOREQ_NONE;
-        p = vio->io_req;
+        vio->req.state = STATE_IOREQ_NONE;
+        p = vio->req;
 
         /* Verify the emulation request has been correctly re-issued */
         if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
@@ -238,7 +238,7 @@ static int hvmemul_do_io(
     }
     ASSERT(p.count);
 
-    vio->io_req = p;
+    vio->req = p;
 
     rc = hvm_io_intercept(&p);
 
@@ -247,12 +247,12 @@ static int hvmemul_do_io(
      * our callers and mirror this into latched state.
*/ ASSERT(p.count <=3D *reps); - *reps =3D vio->io_req.count =3D p.count; + *reps =3D vio->req.count =3D p.count; =20 switch ( rc ) { case X86EMUL_OKAY: - vio->io_req.state =3D STATE_IOREQ_NONE; + vio->req.state =3D STATE_IOREQ_NONE; break; case X86EMUL_UNHANDLEABLE: { @@ -305,7 +305,7 @@ static int hvmemul_do_io( if ( s =3D=3D NULL ) { rc =3D X86EMUL_RETRY; - vio->io_req.state =3D STATE_IOREQ_NONE; + vio->req.state =3D STATE_IOREQ_NONE; break; } =20 @@ -316,7 +316,7 @@ static int hvmemul_do_io( if ( dir =3D=3D IOREQ_READ ) { rc =3D hvm_process_io_intercept(&ioreq_server_handler,= &p); - vio->io_req.state =3D STATE_IOREQ_NONE; + vio->req.state =3D STATE_IOREQ_NONE; break; } } @@ -329,14 +329,14 @@ static int hvmemul_do_io( if ( !s ) { rc =3D hvm_process_io_intercept(&null_handler, &p); - vio->io_req.state =3D STATE_IOREQ_NONE; + vio->req.state =3D STATE_IOREQ_NONE; } else { rc =3D hvm_send_ioreq(s, &p, 0); if ( rc !=3D X86EMUL_RETRY || currd->is_shutting_down ) - vio->io_req.state =3D STATE_IOREQ_NONE; - else if ( !ioreq_needs_completion(&vio->io_req) ) + vio->req.state =3D STATE_IOREQ_NONE; + else if ( !ioreq_needs_completion(&vio->req) ) rc =3D X86EMUL_OKAY; } break; @@ -1005,14 +1005,14 @@ static int hvmemul_phys_mmio_access( * cache indexed by linear MMIO address. */ static struct hvm_mmio_cache *hvmemul_find_mmio_cache( - struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir, bool create) + struct hvm_vcpu_io *hvio, unsigned long gla, uint8_t dir, bool create) { unsigned int i; struct hvm_mmio_cache *cache; =20 - for ( i =3D 0; i < vio->mmio_cache_count; i ++ ) + for ( i =3D 0; i < hvio->mmio_cache_count; i ++ ) { - cache =3D &vio->mmio_cache[i]; + cache =3D &hvio->mmio_cache[i]; =20 if ( gla =3D=3D cache->gla && dir =3D=3D cache->dir ) @@ -1022,13 +1022,13 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cac= he( if ( !create ) return NULL; =20 - i =3D vio->mmio_cache_count; - if( i =3D=3D ARRAY_SIZE(vio->mmio_cache) ) + i =3D hvio->mmio_cache_count; + if( i =3D=3D ARRAY_SIZE(hvio->mmio_cache) ) return NULL; =20 - ++vio->mmio_cache_count; + ++hvio->mmio_cache_count; =20 - cache =3D &vio->mmio_cache[i]; + cache =3D &hvio->mmio_cache[i]; memset(cache, 0, sizeof (*cache)); =20 cache->gla =3D gla; @@ -1037,26 +1037,26 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cac= he( return cache; } =20 -static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gl= a, +static void latch_linear_to_phys(struct hvm_vcpu_io *hvio, unsigned long g= la, unsigned long gpa, bool_t write) { - if ( vio->mmio_access.gla_valid ) + if ( hvio->mmio_access.gla_valid ) return; =20 - vio->mmio_gla =3D gla & PAGE_MASK; - vio->mmio_gpfn =3D PFN_DOWN(gpa); - vio->mmio_access =3D (struct npfec){ .gla_valid =3D 1, - .read_access =3D 1, - .write_access =3D write }; + hvio->mmio_gla =3D gla & PAGE_MASK; + hvio->mmio_gpfn =3D PFN_DOWN(gpa); + hvio->mmio_access =3D (struct npfec){ .gla_valid =3D 1, + .read_access =3D 1, + .write_access =3D write }; } =20 static int hvmemul_linear_mmio_access( unsigned long gla, unsigned int size, uint8_t dir, void *buffer, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpf= n) { - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; unsigned long offset =3D gla & ~PAGE_MASK; - struct hvm_mmio_cache *cache =3D hvmemul_find_mmio_cache(vio, gla, dir= , true); + struct hvm_mmio_cache *cache =3D hvmemul_find_mmio_cache(hvio, gla, di= r, true); unsigned int chunk, buffer_offset =3D 0; paddr_t gpa; unsigned 
long one_rep =3D 1; @@ -1068,7 +1068,7 @@ static int hvmemul_linear_mmio_access( chunk =3D min_t(unsigned int, size, PAGE_SIZE - offset); =20 if ( known_gpfn ) - gpa =3D pfn_to_paddr(vio->mmio_gpfn) | offset; + gpa =3D pfn_to_paddr(hvio->mmio_gpfn) | offset; else { rc =3D hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec, @@ -1076,7 +1076,7 @@ static int hvmemul_linear_mmio_access( if ( rc !=3D X86EMUL_OKAY ) return rc; =20 - latch_linear_to_phys(vio, gla, gpa, dir =3D=3D IOREQ_WRITE); + latch_linear_to_phys(hvio, gla, gpa, dir =3D=3D IOREQ_WRITE); } =20 for ( ;; ) @@ -1122,22 +1122,22 @@ static inline int hvmemul_linear_mmio_write( =20 static bool known_gla(unsigned long addr, unsigned int bytes, uint32_t pfe= c) { - const struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + const struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; =20 if ( pfec & PFEC_write_access ) { - if ( !vio->mmio_access.write_access ) + if ( !hvio->mmio_access.write_access ) return false; } else if ( pfec & PFEC_insn_fetch ) { - if ( !vio->mmio_access.insn_fetch ) + if ( !hvio->mmio_access.insn_fetch ) return false; } - else if ( !vio->mmio_access.read_access ) + else if ( !hvio->mmio_access.read_access ) return false; =20 - return (vio->mmio_gla =3D=3D (addr & PAGE_MASK) && + return (hvio->mmio_gla =3D=3D (addr & PAGE_MASK) && (addr & ~PAGE_MASK) + bytes <=3D PAGE_SIZE); } =20 @@ -1145,7 +1145,7 @@ static int linear_read(unsigned long addr, unsigned i= nt bytes, void *p_data, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctx= t) { pagefault_info_t pfinfo; - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; unsigned int offset =3D addr & ~PAGE_MASK; int rc =3D HVMTRANS_bad_gfn_to_mfn; =20 @@ -1167,7 +1167,7 @@ static int linear_read(unsigned long addr, unsigned i= nt bytes, void *p_data, * we handle this access in the same way to guarantee completion and h= ence * clean up any interim state. */ - if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_READ, false) ) + if ( !hvmemul_find_mmio_cache(hvio, addr, IOREQ_READ, false) ) rc =3D hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfin= fo); =20 switch ( rc ) @@ -1200,7 +1200,7 @@ static int linear_write(unsigned long addr, unsigned = int bytes, void *p_data, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ct= xt) { pagefault_info_t pfinfo; - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; unsigned int offset =3D addr & ~PAGE_MASK; int rc =3D HVMTRANS_bad_gfn_to_mfn; =20 @@ -1222,7 +1222,7 @@ static int linear_write(unsigned long addr, unsigned = int bytes, void *p_data, * we handle this access in the same way to guarantee completion and h= ence * clean up any interim state. */ - if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_WRITE, false) ) + if ( !hvmemul_find_mmio_cache(hvio, addr, IOREQ_WRITE, false) ) rc =3D hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo= ); =20 switch ( rc ) @@ -1599,7 +1599,7 @@ static int hvmemul_cmpxchg( struct vcpu *curr =3D current; unsigned long addr; uint32_t pfec =3D PFEC_page_present | PFEC_write_access; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; int rc; void *mapping =3D NULL; =20 @@ -1625,8 +1625,8 @@ static int hvmemul_cmpxchg( /* Fix this in case the guest is really relying on r-m-w atomicity= . 
*/ return hvmemul_linear_mmio_write(addr, bytes, p_new, pfec, hvmemul_ctxt, - vio->mmio_access.write_access && - vio->mmio_gla =3D=3D (addr & PAGE= _MASK)); + hvio->mmio_access.write_access && + hvio->mmio_gla =3D=3D (addr & PAG= E_MASK)); } =20 switch ( bytes ) @@ -1823,7 +1823,7 @@ static int hvmemul_rep_movs( struct hvm_emulate_ctxt *hvmemul_ctxt =3D container_of(ctxt, struct hvm_emulate_ctxt, ctxt); struct vcpu *curr =3D current; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; unsigned long saddr, daddr, bytes; paddr_t sgpa, dgpa; uint32_t pfec =3D PFEC_page_present; @@ -1846,18 +1846,18 @@ static int hvmemul_rep_movs( if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl =3D=3D 3 ) pfec |=3D PFEC_user_mode; =20 - if ( vio->mmio_access.read_access && - (vio->mmio_gla =3D=3D (saddr & PAGE_MASK)) && + if ( hvio->mmio_access.read_access && + (hvio->mmio_gla =3D=3D (saddr & PAGE_MASK)) && /* * Upon initial invocation don't truncate large batches just beca= use * of a hit for the translation: Doing the guest page table walk = is * cheaper than multiple round trips through the device model. Yet * when processing a response we can always re-use the translatio= n. */ - (vio->io_req.state =3D=3D STATE_IORESP_READY || + (curr->io.req.state =3D=3D STATE_IORESP_READY || ((!df || *reps =3D=3D 1) && PAGE_SIZE - (saddr & ~PAGE_MASK) >=3D *reps * bytes_per_rep)) ) - sgpa =3D pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK); + sgpa =3D pfn_to_paddr(hvio->mmio_gpfn) | (saddr & ~PAGE_MASK); else { rc =3D hvmemul_linear_to_phys(saddr, &sgpa, bytes_per_rep, reps, p= fec, @@ -1867,13 +1867,13 @@ static int hvmemul_rep_movs( } =20 bytes =3D PAGE_SIZE - (daddr & ~PAGE_MASK); - if ( vio->mmio_access.write_access && - (vio->mmio_gla =3D=3D (daddr & PAGE_MASK)) && + if ( hvio->mmio_access.write_access && + (hvio->mmio_gla =3D=3D (daddr & PAGE_MASK)) && /* See comment above. */ - (vio->io_req.state =3D=3D STATE_IORESP_READY || + (curr->io.req.state =3D=3D STATE_IORESP_READY || ((!df || *reps =3D=3D 1) && PAGE_SIZE - (daddr & ~PAGE_MASK) >=3D *reps * bytes_per_rep)) ) - dgpa =3D pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK); + dgpa =3D pfn_to_paddr(hvio->mmio_gpfn) | (daddr & ~PAGE_MASK); else { rc =3D hvmemul_linear_to_phys(daddr, &dgpa, bytes_per_rep, reps, @@ -1892,14 +1892,14 @@ static int hvmemul_rep_movs( =20 if ( sp2mt =3D=3D p2m_mmio_dm ) { - latch_linear_to_phys(vio, saddr, sgpa, 0); + latch_linear_to_phys(hvio, saddr, sgpa, 0); return hvmemul_do_mmio_addr( sgpa, reps, bytes_per_rep, IOREQ_READ, df, dgpa); } =20 if ( dp2mt =3D=3D p2m_mmio_dm ) { - latch_linear_to_phys(vio, daddr, dgpa, 1); + latch_linear_to_phys(hvio, daddr, dgpa, 1); return hvmemul_do_mmio_addr( dgpa, reps, bytes_per_rep, IOREQ_WRITE, df, sgpa); } @@ -1992,7 +1992,7 @@ static int hvmemul_rep_stos( struct hvm_emulate_ctxt *hvmemul_ctxt =3D container_of(ctxt, struct hvm_emulate_ctxt, ctxt); struct vcpu *curr =3D current; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; unsigned long addr, bytes; paddr_t gpa; p2m_type_t p2mt; @@ -2004,13 +2004,13 @@ static int hvmemul_rep_stos( return rc; =20 bytes =3D PAGE_SIZE - (addr & ~PAGE_MASK); - if ( vio->mmio_access.write_access && - (vio->mmio_gla =3D=3D (addr & PAGE_MASK)) && + if ( hvio->mmio_access.write_access && + (hvio->mmio_gla =3D=3D (addr & PAGE_MASK)) && /* See respective comment in MOVS processing. 
*/ - (vio->io_req.state =3D=3D STATE_IORESP_READY || + (curr->io.req.state =3D=3D STATE_IORESP_READY || ((!df || *reps =3D=3D 1) && PAGE_SIZE - (addr & ~PAGE_MASK) >=3D *reps * bytes_per_rep)) ) - gpa =3D pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK); + gpa =3D pfn_to_paddr(hvio->mmio_gpfn) | (addr & ~PAGE_MASK); else { uint32_t pfec =3D PFEC_page_present | PFEC_write_access; @@ -2103,7 +2103,7 @@ static int hvmemul_rep_stos( return X86EMUL_UNHANDLEABLE; =20 case p2m_mmio_dm: - latch_linear_to_phys(vio, addr, gpa, 1); + latch_linear_to_phys(hvio, addr, gpa, 1); return hvmemul_do_mmio_buffer(gpa, reps, bytes_per_rep, IOREQ_WRIT= E, df, p_data); } @@ -2613,18 +2613,18 @@ static const struct x86_emulate_ops hvm_emulate_ops= _no_write =3D { }; =20 /* - * Note that passing HVMIO_no_completion into this function serves as kind + * Note that passing VIO_no_completion into this function serves as kind * of (but not fully) an "auto select completion" indicator. When there's * no completion needed, the passed in value will be ignored in any case. */ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, const struct x86_emulate_ops *ops, - enum hvm_io_completion completion) + enum vio_completion completion) { const struct cpu_user_regs *regs =3D hvmemul_ctxt->ctxt.regs; struct vcpu *curr =3D current; uint32_t new_intr_shadow; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; int rc; =20 /* @@ -2632,45 +2632,45 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt= *hvmemul_ctxt, * untouched if it's already enabled, for re-execution to consume * entries populated by an earlier pass. */ - if ( vio->cache->num_ents > vio->cache->max_ents ) + if ( hvio->cache->num_ents > hvio->cache->max_ents ) { - ASSERT(vio->io_req.state =3D=3D STATE_IOREQ_NONE); - vio->cache->num_ents =3D 0; + ASSERT(curr->io.req.state =3D=3D STATE_IOREQ_NONE); + hvio->cache->num_ents =3D 0; } else - ASSERT(vio->io_req.state =3D=3D STATE_IORESP_READY); + ASSERT(curr->io.req.state =3D=3D STATE_IORESP_READY); =20 - hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn, - vio->mmio_insn_bytes); + hvm_emulate_init_per_insn(hvmemul_ctxt, hvio->mmio_insn, + hvio->mmio_insn_bytes); =20 - vio->mmio_retry =3D 0; + hvio->mmio_retry =3D 0; =20 rc =3D x86_emulate(&hvmemul_ctxt->ctxt, ops); - if ( rc =3D=3D X86EMUL_OKAY && vio->mmio_retry ) + if ( rc =3D=3D X86EMUL_OKAY && hvio->mmio_retry ) rc =3D X86EMUL_RETRY; =20 - if ( !ioreq_needs_completion(&vio->io_req) ) - completion =3D HVMIO_no_completion; - else if ( completion =3D=3D HVMIO_no_completion ) - completion =3D (vio->io_req.type !=3D IOREQ_TYPE_PIO || - hvmemul_ctxt->is_mem_access) ? HVMIO_mmio_completion - : HVMIO_pio_completion; + if ( !ioreq_needs_completion(&curr->io.req) ) + completion =3D VIO_no_completion; + else if ( completion =3D=3D VIO_no_completion ) + completion =3D (curr->io.req.type !=3D IOREQ_TYPE_PIO || + hvmemul_ctxt->is_mem_access) ? 
VIO_mmio_completion + : VIO_pio_completion; =20 - switch ( vio->io_completion =3D completion ) + switch ( curr->io.completion =3D completion ) { - case HVMIO_no_completion: - case HVMIO_pio_completion: - vio->mmio_cache_count =3D 0; - vio->mmio_insn_bytes =3D 0; - vio->mmio_access =3D (struct npfec){}; + case VIO_no_completion: + case VIO_pio_completion: + hvio->mmio_cache_count =3D 0; + hvio->mmio_insn_bytes =3D 0; + hvio->mmio_access =3D (struct npfec){}; hvmemul_cache_disable(curr); break; =20 - case HVMIO_mmio_completion: - case HVMIO_realmode_completion: - BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_bu= f)); - vio->mmio_insn_bytes =3D hvmemul_ctxt->insn_buf_bytes; - memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_byte= s); + case VIO_mmio_completion: + case VIO_realmode_completion: + BUILD_BUG_ON(sizeof(hvio->mmio_insn) < sizeof(hvmemul_ctxt->insn_b= uf)); + hvio->mmio_insn_bytes =3D hvmemul_ctxt->insn_buf_bytes; + memcpy(hvio->mmio_insn, hvmemul_ctxt->insn_buf, hvio->mmio_insn_by= tes); break; =20 default: @@ -2716,7 +2716,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *= hvmemul_ctxt, =20 int hvm_emulate_one( struct hvm_emulate_ctxt *hvmemul_ctxt, - enum hvm_io_completion completion) + enum vio_completion completion) { return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion); } @@ -2754,7 +2754,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned = long gla) guest_cpu_user_regs()); ctxt.ctxt.data =3D &mmio_ro_ctxt; =20 - switch ( rc =3D _hvm_emulate_one(&ctxt, ops, HVMIO_no_completion) ) + switch ( rc =3D _hvm_emulate_one(&ctxt, ops, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: case X86EMUL_UNIMPLEMENTED: @@ -2782,28 +2782,28 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, = unsigned int trapnr, { case EMUL_KIND_NOWRITE: rc =3D _hvm_emulate_one(&ctx, &hvm_emulate_ops_no_write, - HVMIO_no_completion); + VIO_no_completion); break; case EMUL_KIND_SET_CONTEXT_INSN: { struct vcpu *curr =3D current; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; =20 - BUILD_BUG_ON(sizeof(vio->mmio_insn) !=3D + BUILD_BUG_ON(sizeof(hvio->mmio_insn) !=3D sizeof(curr->arch.vm_event->emul.insn.data)); - ASSERT(!vio->mmio_insn_bytes); + ASSERT(!hvio->mmio_insn_bytes); =20 /* * Stash insn buffer into mmio buffer here instead of ctx * to avoid having to add more logic to hvm_emulate_one. 
*/ - vio->mmio_insn_bytes =3D sizeof(vio->mmio_insn); - memcpy(vio->mmio_insn, curr->arch.vm_event->emul.insn.data, - vio->mmio_insn_bytes); + hvio->mmio_insn_bytes =3D sizeof(hvio->mmio_insn); + memcpy(hvio->mmio_insn, curr->arch.vm_event->emul.insn.data, + hvio->mmio_insn_bytes); } /* Fall-through */ default: ctx.set_context =3D (kind =3D=3D EMUL_KIND_SET_CONTEXT_DATA); - rc =3D hvm_emulate_one(&ctx, HVMIO_no_completion); + rc =3D hvm_emulate_one(&ctx, VIO_no_completion); } =20 switch ( rc ) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index bc96947..4ed929c 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -3800,7 +3800,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs) return; } =20 - switch ( hvm_emulate_one(&ctxt, HVMIO_no_completion) ) + switch ( hvm_emulate_one(&ctxt, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: case X86EMUL_UNIMPLEMENTED: diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index ef8286b..dd733e1 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -85,7 +85,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validat= e, const char *descr) =20 hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs()); =20 - switch ( rc =3D hvm_emulate_one(&ctxt, HVMIO_no_completion) ) + switch ( rc =3D hvm_emulate_one(&ctxt, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt, rc); @@ -109,20 +109,20 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *val= idate, const char *descr) bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn, struct npfec access) { - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; =20 - vio->mmio_access =3D access.gla_valid && - access.kind =3D=3D npfec_kind_with_gla - ? access : (struct npfec){}; - vio->mmio_gla =3D gla & PAGE_MASK; - vio->mmio_gpfn =3D gpfn; + hvio->mmio_access =3D access.gla_valid && + access.kind =3D=3D npfec_kind_with_gla + ? 
access : (struct npfec){}; + hvio->mmio_gla =3D gla & PAGE_MASK; + hvio->mmio_gpfn =3D gpfn; return handle_mmio(); } =20 bool handle_pio(uint16_t port, unsigned int size, int dir) { struct vcpu *curr =3D current; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct vcpu_io *vio =3D &curr->io; unsigned int data; int rc; =20 @@ -135,8 +135,8 @@ bool handle_pio(uint16_t port, unsigned int size, int d= ir) =20 rc =3D hvmemul_do_pio_buffer(port, size, dir, &data); =20 - if ( ioreq_needs_completion(&vio->io_req) ) - vio->io_completion =3D HVMIO_pio_completion; + if ( ioreq_needs_completion(&vio->req) ) + vio->completion =3D VIO_pio_completion; =20 switch ( rc ) { @@ -175,7 +175,7 @@ static bool_t g2m_portio_accept(const struct hvm_io_han= dler *handler, { struct vcpu *curr =3D current; const struct hvm_domain *hvm =3D &curr->domain->arch.hvm; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; struct g2m_ioport *g2m_ioport; unsigned int start, end; =20 @@ -185,7 +185,7 @@ static bool_t g2m_portio_accept(const struct hvm_io_han= dler *handler, end =3D start + g2m_ioport->np; if ( (p->addr >=3D start) && (p->addr + p->size <=3D end) ) { - vio->g2m_ioport =3D g2m_ioport; + hvio->g2m_ioport =3D g2m_ioport; return 1; } } @@ -196,8 +196,8 @@ static bool_t g2m_portio_accept(const struct hvm_io_han= dler *handler, static int g2m_portio_read(const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, uint64_t *data) { - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; - const struct g2m_ioport *g2m_ioport =3D vio->g2m_ioport; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; + const struct g2m_ioport *g2m_ioport =3D hvio->g2m_ioport; unsigned int mport =3D (addr - g2m_ioport->gport) + g2m_ioport->mport; =20 switch ( size ) @@ -221,8 +221,8 @@ static int g2m_portio_read(const struct hvm_io_handler = *handler, static int g2m_portio_write(const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, uint64_t data) { - struct hvm_vcpu_io *vio =3D ¤t->arch.hvm.hvm_io; - const struct g2m_ioport *g2m_ioport =3D vio->g2m_ioport; + struct hvm_vcpu_io *hvio =3D ¤t->arch.hvm.hvm_io; + const struct g2m_ioport *g2m_ioport =3D hvio->g2m_ioport; unsigned int mport =3D (addr - g2m_ioport->gport) + g2m_ioport->mport; =20 switch ( size ) diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 8393922..c00ee8e 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -40,11 +40,11 @@ bool arch_ioreq_complete_mmio(void) return handle_mmio(); } =20 -bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) +bool arch_vcpu_ioreq_completion(enum vio_completion completion) { - switch ( io_completion ) + switch ( completion ) { - case HVMIO_realmode_completion: + case VIO_realmode_completion: { struct hvm_emulate_ctxt ctxt; =20 diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nested= svm.c index fcfccf7..6d90630 100644 --- a/xen/arch/x86/hvm/svm/nestedsvm.c +++ b/xen/arch/x86/hvm/svm/nestedsvm.c @@ -1266,7 +1266,7 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v) * Delay the injection because this would result in delivering * an interrupt *within* the execution of an instruction. 
*/ - if ( v->arch.hvm.hvm_io.io_req.state !=3D STATE_IOREQ_NONE ) + if ( v->io.req.state !=3D STATE_IOREQ_NONE ) return hvm_intblk_shadow; =20 if ( !nv->nv_vmexit_pending && n2vmcb->exit_int_info.v ) diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmod= e.c index 768f01e..cc23afa 100644 --- a/xen/arch/x86/hvm/vmx/realmode.c +++ b/xen/arch/x86/hvm/vmx/realmode.c @@ -101,7 +101,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *= hvmemul_ctxt) =20 perfc_incr(realmode_emulations); =20 - rc =3D hvm_emulate_one(hvmemul_ctxt, HVMIO_realmode_completion); + rc =3D hvm_emulate_one(hvmemul_ctxt, VIO_realmode_completion); =20 if ( rc =3D=3D X86EMUL_UNHANDLEABLE ) { @@ -153,7 +153,7 @@ void vmx_realmode(struct cpu_user_regs *regs) struct vcpu *curr =3D current; struct hvm_emulate_ctxt hvmemul_ctxt; struct segment_register *sreg; - struct hvm_vcpu_io *vio =3D &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio =3D &curr->arch.hvm.hvm_io; unsigned long intr_info; unsigned int emulations =3D 0; =20 @@ -188,7 +188,7 @@ void vmx_realmode(struct cpu_user_regs *regs) =20 vmx_realmode_emulate_one(&hvmemul_ctxt); =20 - if ( vio->io_req.state !=3D STATE_IOREQ_NONE || vio->mmio_retry ) + if ( curr->io.req.state !=3D STATE_IOREQ_NONE || hvio->mmio_retry ) break; =20 /* Stop emulating unless our segment state is not safe */ @@ -202,7 +202,7 @@ void vmx_realmode(struct cpu_user_regs *regs) } =20 /* Need to emulate next time if we've started an IO operation */ - if ( vio->io_req.state !=3D STATE_IOREQ_NONE ) + if ( curr->io.req.state !=3D STATE_IOREQ_NONE ) curr->arch.hvm.vmx.vmx_emulate =3D 1; =20 if ( !curr->arch.hvm.vmx.vmx_emulate && !curr->arch.hvm.vmx.vmx_realmo= de ) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 72b5da0..273683f 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -159,7 +159,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, iore= q_t *p) break; } =20 - p =3D &sv->vcpu->arch.hvm.hvm_io.io_req; + p =3D &sv->vcpu->io.req; if ( ioreq_needs_completion(p) ) p->data =3D data; =20 @@ -171,10 +171,10 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, io= req_t *p) bool handle_hvm_io_completion(struct vcpu *v) { struct domain *d =3D v->domain; - struct hvm_vcpu_io *vio =3D &v->arch.hvm.hvm_io; + struct vcpu_io *vio =3D &v->io; struct ioreq_server *s; struct ioreq_vcpu *sv; - enum hvm_io_completion io_completion; + enum vio_completion completion; =20 if ( has_vpci(d) && vpci_process_pending(v) ) { @@ -186,29 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v) if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) return false; =20 - vio->io_req.state =3D ioreq_needs_completion(&vio->io_req) ? + vio->req.state =3D ioreq_needs_completion(&vio->req) ? 
                         STATE_IORESP_READY : STATE_IOREQ_NONE;
 
     msix_write_completion(v);
     vcpu_end_shutdown_deferral(v);
 
-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
+    completion = vio->completion;
+    vio->completion = VIO_no_completion;
 
-    switch ( io_completion )
+    switch ( completion )
     {
-    case HVMIO_no_completion:
+    case VIO_no_completion:
         break;
 
-    case HVMIO_mmio_completion:
+    case VIO_mmio_completion:
         return arch_ioreq_complete_mmio();
 
-    case HVMIO_pio_completion:
-        return handle_pio(vio->io_req.addr, vio->io_req.size,
-                          vio->io_req.dir);
+    case VIO_pio_completion:
+        return handle_pio(vio->req.addr, vio->req.size,
+                          vio->req.dir);
 
     default:
-        return arch_vcpu_ioreq_completion(io_completion);
+        return arch_vcpu_ioreq_completion(completion);
     }
 
     return true;
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 1620cc7..610078b 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -65,7 +65,7 @@ bool __nonnull(1, 2) hvm_emulate_one_insn(
     const char *descr);
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
-    enum hvm_io_completion completion);
+    enum vio_completion completion);
 void hvm_emulate_one_vm_event(enum emul_kind kind,
     unsigned int trapnr,
     unsigned int errcode);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c1feda..8adf455 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -28,13 +28,6 @@
 #include
 #include
 
-enum hvm_io_completion {
-    HVMIO_no_completion,
-    HVMIO_mmio_completion,
-    HVMIO_pio_completion,
-    HVMIO_realmode_completion
-};
-
 struct hvm_vcpu_asid {
     uint64_t generation;
     uint32_t asid;
@@ -52,10 +45,6 @@ struct hvm_mmio_cache {
 };
 
 struct hvm_vcpu_io {
-    /* I/O request in flight to device model. */
-    enum hvm_io_completion io_completion;
-    ioreq_t io_req;
-
     /*
      * HVM emulation:
      *  Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn.
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 7a90873..dffed60 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -105,7 +105,7 @@ void hvm_ioreq_init(struct domain *d);
 int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
 
 bool arch_ioreq_complete_mmio(void);
-bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
+bool arch_vcpu_ioreq_completion(enum vio_completion completion);
 int arch_ioreq_server_map_pages(struct ioreq_server *s);
 void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
 void arch_ioreq_server_enable(struct ioreq_server *s);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ad0d761..7aea2bb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -147,6 +147,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
 
 struct waitqueue_vcpu;
 
+enum vio_completion {
+    VIO_no_completion,
+    VIO_mmio_completion,
+    VIO_pio_completion,
+#ifdef CONFIG_X86
+    VIO_realmode_completion,
+#endif
+};
+
+struct vcpu_io {
+    /* I/O request in flight to device model. */
+    enum vio_completion completion;
+    ioreq_t req;
+};
+
 struct vcpu
 {
     int vcpu_id;
@@ -258,6 +273,10 @@ struct vcpu
     struct vpci_vcpu vpci;
 
     struct arch_vcpu arch;
+
+#ifdef CONFIG_IOREQ_SERVER
+    struct vcpu_io io;
+#endif
 };
 
 struct sched_unit {
-- 
2.7.4