From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Andrew Cooper, George Dunlap,
 Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
 Roger Pau Monné, Julien Grall
Subject: [PATCH V5 08/22] xen/ioreq: Move x86's ioreq_server to struct domain
Date: Mon, 25 Jan 2021 21:08:15 +0200
Message-Id: <1611601709-28361-9-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611601709-28361-1-git-send-email-olekstysh@gmail.com>
References: <1611601709-28361-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is now a common feature and this struct will be used on Arm as-is.
Move it to the common struct domain. This also significantly reduces the
layering violation in the common code (*arch.hvm* usage).

We don't move ioreq_gfn since it is not used in the common code (the
"legacy" mechanism is x86-specific).
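To make the move concrete before diving into the diff, here is a minimal,
self-contained C sketch of where the sub-struct ends up and how common code
reaches it once arch.hvm is out of the path. The types are stubs and
sketch_get_server() is a hypothetical helper for illustration, not the real
Xen headers:

    /* Illustrative sketch with stubbed types; not the actual Xen code. */
    #include <stddef.h>

    #define MAX_NR_IOREQ_SERVERS 8

    typedef struct { int dummy; } spinlock_t;   /* stub for the sketch */
    struct ioreq_server;                        /* opaque here */

    struct domain {
        /* ...other members elided... */
        struct {
            spinlock_t lock;   /* protects the table below */
            struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
        } ioreq_server;        /* now directly in struct domain */
    };

    /* Common code can index the table without touching arch-specific
     * state, i.e. d->ioreq_server... instead of d->arch.hvm.ioreq_server...
     */
    static struct ioreq_server *sketch_get_server(const struct domain *d,
                                                  unsigned int id)
    {
        return id < MAX_NR_IOREQ_SERVERS ? d->ioreq_server.server[id] : NULL;
    }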
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant
Reviewed-by: Alex Bennée
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - remove the mention of "ioreq_gfn" from the patch subject/description
   - update the patch, since the "legacy interface" is x86-specific
   - drop hvm_params-related changes in arch/x86/hvm/hvm.c
   - leave ioreq_gfn in hvm_domain

Changes V3 -> V4:
   - rebase
   - drop the stale part of the comment above struct ioreq_server
   - add Jan's A-b

Changes V4 -> V5:
   - add Julien's, Alex's and Paul's R-b
---
 xen/common/ioreq.c               | 60 ++++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h |  8 ------
 xen/include/xen/sched.h          | 10 +++++++
 3 files changed, 40 insertions(+), 38 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 7320f23..4cb26e6 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -38,13 +38,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    ASSERT(!s || !d->ioreq_server.server[id]);
 
-    d->arch.hvm.ioreq_server.server[id] = s;
+    d->ioreq_server.server[id] = s;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
+    (d)->ioreq_server.server[id]
 
 static struct ioreq_server *get_ioreq_server(const struct domain *d,
                                              unsigned int id)
@@ -285,7 +285,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     unsigned int id;
     bool found = false;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -296,7 +296,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         }
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return found;
 }
@@ -606,7 +606,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
         return -ENOMEM;
 
     domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
     {
@@ -634,13 +634,13 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     if ( id )
         *id = i;
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     return 0;
 
  fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     xfree(s);
@@ -652,7 +652,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -684,7 +684,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -697,7 +697,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -731,7 +731,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -744,7 +744,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
 
     ASSERT(is_hvm_domain(d));
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -782,7 +782,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     }
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -798,7 +798,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -834,7 +834,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_add_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -850,7 +850,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -886,7 +886,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_remove_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -911,7 +911,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -926,7 +926,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = arch_ioreq_server_map_mem_type(d, s, flags);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     if ( rc == 0 )
         arch_ioreq_server_map_mem_type_completed(d, s, flags);
@@ -940,7 +940,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -964,7 +964,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     return rc;
 }
 
@@ -974,7 +974,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     unsigned int id;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -983,7 +983,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
             goto fail;
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return 0;
 
@@ -998,7 +998,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         hvm_ioreq_server_remove_vcpu(s, v);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -1008,12 +1008,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     struct ioreq_server *s;
     unsigned int id;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
         hvm_ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
@@ -1024,7 +1024,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     if ( !arch_ioreq_server_destroy_all(d) )
         return;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     /* No need to domain_pause() as the domain is being torn down */
 
@@ -1042,7 +1042,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
         xfree(s);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
@@ -1274,7 +1274,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 
 void hvm_ioreq_init(struct domain *d)
 {
-    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_init(&d->ioreq_server.lock);
 
     arch_ioreq_domain_init(d);
 }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 3b36c2f..b8be1ad 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -63,8 +63,6 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };
 
-#define MAX_NR_IOREQ_SERVERS 8
-
 struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
     struct {
@@ -73,12 +71,6 @@ struct hvm_domain {
         unsigned long legacy_mask; /* indexed by HVM param number */
     } ioreq_gfn;
 
-    /* Lock protects all other values in the sub-struct and the default */
-    struct {
-        spinlock_t lock;
-        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
-    } ioreq_server;
-
     /* Cached CF8 for guest PCI config cycles */
     uint32_t pci_cf8;
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index da19f4e..f437ee3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -318,6 +318,8 @@ struct sched_unit {
 
 struct evtchn_port_ops;
 
+#define MAX_NR_IOREQ_SERVERS 8
+
 struct domain
 {
     domid_t domain_id;
@@ -534,6 +536,14 @@ struct domain
         unsigned int val;
         struct vcpu *vcpu;
     } teardown;
+
+#ifdef CONFIG_IOREQ_SERVER
+    /* Lock protects all other values in the sub-struct */
+    struct {
+        spinlock_t lock;
+        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+    } ioreq_server;
+#endif
 };
 
 static inline struct page_list_head *page_to_list(
-- 
2.7.4
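As a closing illustration of the shape of these changes: nearly every hunk in
xen/common/ioreq.c rewrites the same lock/iterate/unlock idiom against the
relocated table. The stand-alone sketch below mimics that idiom under stated
assumptions; the lock primitives are stubs and count_enabled_servers() is a
hypothetical function written for this sketch, not Xen code (the real loops
use the FOR_EACH_IOREQ_SERVER() iteration macro):

    /*
     * Stand-alone sketch of the recurring pattern in the hunks above:
     * take the per-domain lock, visit every populated server slot, unlock.
     */
    #define MAX_NR_IOREQ_SERVERS 8

    struct ioreq_server { int enabled; };

    struct domain {
        struct {
            int lock;   /* stands in for spinlock_t in this sketch */
            struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
        } ioreq_server;
    };

    static void spin_lock_recursive(int *l)   { (void)l; }  /* stub */
    static void spin_unlock_recursive(int *l) { (void)l; }  /* stub */

    static unsigned int count_enabled_servers(const struct domain *d)
    {
        unsigned int id, n = 0;

        spin_lock_recursive((int *)&d->ioreq_server.lock);

        for ( id = 0; id < MAX_NR_IOREQ_SERVERS; id++ )
        {
            const struct ioreq_server *s = d->ioreq_server.server[id];

            if ( s && s->enabled )  /* empty slots are skipped, as in Xen */
                n++;
        }

        spin_unlock_recursive((int *)&d->ioreq_server.lock);

        return n;
    }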