From nobody Sat Apr 20 10:15:02 2024
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Roger Pau Monné, Julien Grall
Subject: [PATCH V4 08/24] xen/ioreq: Move x86's ioreq_server to struct domain
Date: Tue, 12 Jan 2021 23:52:16 +0200
Message-Id: <1610488352-18494-9-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
References: <1610488352-18494-1-git-send-email-olekstysh@gmail.com>
List-Id: Xen developer discussion
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Oleksandr Tyshchenko

IOREQ is now a common feature and this struct will be used on Arm
as is. Move it to the common struct domain. This also significantly
reduces the layering violation in the common code (*arch.hvm* usage).

We don't move ioreq_gfn, since it is not used in the common code
(the "legacy" mechanism is x86-specific).
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
CC: Julien Grall

[On Arm only]
Tested-by: Wei Chen
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - remove the mention of "ioreq_gfn" from patch subject/description
   - update the patch according to the "legacy interface" being x86-specific
   - drop hvm_params related changes in arch/x86/hvm/hvm.c
   - leave ioreq_gfn in hvm_domain

Changes V3 -> V4:
   - rebase
   - drop the stale part of the comment above struct ioreq_server
   - add Jan's A-b
---
 xen/common/ioreq.c               | 60 ++++++++++++++++++++--------------
 xen/include/asm-x86/hvm/domain.h |  8 ------
 xen/include/xen/sched.h          | 10 +++++++
 3 files changed, 40 insertions(+), 38 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 3f631ec..a319c88 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -38,13 +38,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    ASSERT(!s || !d->ioreq_server.server[id]);
 
-    d->arch.hvm.ioreq_server.server[id] = s;
+    d->ioreq_server.server[id] = s;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
+    (d)->ioreq_server.server[id]
 
 static struct ioreq_server *get_ioreq_server(const struct domain *d,
                                              unsigned int id)
@@ -285,7 +285,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     unsigned int id;
     bool found = false;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -296,7 +296,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         }
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return found;
 }
@@ -606,7 +606,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
         return -ENOMEM;
 
     domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
     {
@@ -634,13 +634,13 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     if ( id )
         *id = i;
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     return 0;
 
  fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     xfree(s);
@@ -652,7 +652,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -684,7 +684,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -697,7 +697,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -731,7 +731,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -744,7 +744,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
 
     ASSERT(is_hvm_domain(d));
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -782,7 +782,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     }
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -798,7 +798,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -834,7 +834,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_add_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -850,7 +850,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -886,7 +886,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_remove_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -911,7 +911,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -926,7 +926,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = arch_ioreq_server_map_mem_type(d, s, flags);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     if ( rc == 0 )
         arch_ioreq_server_map_mem_type_completed(d, s, flags);
@@ -940,7 +940,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -964,7 +964,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     return rc;
 }
 
@@ -974,7 +974,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     unsigned int id;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -983,7 +983,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
             goto fail;
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return 0;
 
@@ -998,7 +998,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         hvm_ioreq_server_remove_vcpu(s, v);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -1008,12 +1008,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     struct ioreq_server *s;
     unsigned int id;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
         hvm_ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
@@ -1024,7 +1024,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     if ( !arch_ioreq_server_destroy_all(d) )
         return;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     /* No need to domain_pause() as the domain is being torn down */
 
@@ -1042,7 +1042,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
         xfree(s);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
@@ -1274,7 +1274,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 
 void hvm_ioreq_init(struct domain *d)
 {
-    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_init(&d->ioreq_server.lock);
 
     arch_ioreq_domain_init(d);
 }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 1c4ca47..b8be1ad 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -63,8 +63,6 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };
 
-#define MAX_NR_IOREQ_SERVERS 8
-
 struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
     struct {
@@ -73,12 +71,6 @@ struct hvm_domain {
         unsigned long legacy_mask; /* indexed by HVM param number */
     } ioreq_gfn;
 
-    /* Lock protects all other values in the sub-struct and the default */
-    struct {
-        spinlock_t lock;
-        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
-    } ioreq_server;
-
     /* Cached CF8 for guest PCI config cycles */
     uint32_t pci_cf8;
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3e46384..ad0d761 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -318,6 +318,8 @@ struct sched_unit {
 
 struct evtchn_port_ops;
 
+#define MAX_NR_IOREQ_SERVERS 8
+
 struct domain
 {
     domid_t domain_id;
@@ -533,6 +535,14 @@ struct domain
     struct {
         unsigned int val;
     } teardown;
+
+#ifdef CONFIG_IOREQ_SERVER
+    /* Lock protects all other values in the sub-struct */
+    struct {
+        spinlock_t lock;
+        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+    } ioreq_server;
+#endif
};
 
 static inline struct page_list_head *page_to_list(
-- 
2.7.4