Subject: [PATCH v5 1/6] evtchn: use per-channel lock where possible
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
Message-ID: <081b2253-e445-4242-3f0c-0f2912412d43@suse.com>
Date: Wed, 27 Jan 2021 09:15:14 +0100
In-Reply-To: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>

Neither evtchn_status() nor domain_dump_evtchn_info() nor
flask_get_peer_sid() need to hold the per-domain lock - they all only
read a single channel's state (at a time, in the dump case).

Signed-off-by: Jan Beulich
---
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -974,15 +974,16 @@ int evtchn_status(evtchn_status_t *statu
     if ( d == NULL )
         return -ESRCH;
 
-    spin_lock(&d->event_lock);
-
     if ( !port_is_valid(d, port) )
     {
-        rc = -EINVAL;
-        goto out;
+        rcu_unlock_domain(d);
+        return -EINVAL;
     }
 
     chn = evtchn_from_port(d, port);
+
+    evtchn_read_lock(chn);
+
     if ( consumer_is_xen(chn) )
     {
         rc = -EACCES;
@@ -1027,7 +1028,7 @@ int evtchn_status(evtchn_status_t *statu
     status->vcpu = chn->notify_vcpu_id;
 
  out:
-    spin_unlock(&d->event_lock);
+    evtchn_read_unlock(chn);
     rcu_unlock_domain(d);
 
     return rc;
@@ -1582,22 +1583,32 @@ void evtchn_move_pirqs(struct vcpu *v)
 static void domain_dump_evtchn_info(struct domain *d)
 {
     unsigned int port;
-    int irq;
 
     printk("Event channel information for domain %d:\n"
            "Polling vCPUs: {%*pbl}\n"
            "    port [p/m/s]\n", d->domain_id, d->max_vcpus, d->poll_mask);
 
-    spin_lock(&d->event_lock);
-
     for ( port = 1; port_is_valid(d, port); ++port )
     {
-        const struct evtchn *chn;
+        struct evtchn *chn;
         char *ssid;
 
+        if ( !(port & 0x3f) )
+            process_pending_softirqs();
+
         chn = evtchn_from_port(d, port);
+
+        if ( !evtchn_read_trylock(chn) )
+        {
+            printk("    %4u in flux\n", port);
+            continue;
+        }
+
         if ( chn->state == ECS_FREE )
+        {
+            evtchn_read_unlock(chn);
             continue;
+        }
 
         printk("    %4u [%d/%d/",
                port,
@@ -1607,26 +1618,49 @@ static void domain_dump_evtchn_info(stru
         printk("]: s=%d n=%d x=%d",
                chn->state, chn->notify_vcpu_id, chn->xen_consumer);
 
+        ssid = xsm_show_security_evtchn(d, chn);
+
         switch ( chn->state )
         {
         case ECS_UNBOUND:
             printk(" d=%d", chn->u.unbound.remote_domid);
             break;
+
         case ECS_INTERDOMAIN:
             printk(" d=%d p=%d",
                    chn->u.interdomain.remote_dom->domain_id,
                    chn->u.interdomain.remote_port);
             break;
-        case ECS_PIRQ:
-            irq = domain_pirq_to_irq(d, chn->u.pirq.irq);
-            printk(" p=%d i=%d", chn->u.pirq.irq, irq);
+
+        case ECS_PIRQ: {
+            unsigned int pirq = chn->u.pirq.irq;
+
+            /*
+             * The per-channel locks nest inside the per-domain one, so we
+             * can't acquire the latter without first letting go of the former.
+             */
+            evtchn_read_unlock(chn);
+            chn = NULL;
+            if ( spin_trylock(&d->event_lock) )
+            {
+                int irq = domain_pirq_to_irq(d, pirq);
+
+                spin_unlock(&d->event_lock);
+                printk(" p=%u i=%d", pirq, irq);
+            }
+            else
+                printk(" p=%u i=?", pirq);
             break;
+        }
+
         case ECS_VIRQ:
             printk(" v=%d", chn->u.virq);
             break;
         }
 
-        ssid = xsm_show_security_evtchn(d, chn);
+        if ( chn )
+            evtchn_read_unlock(chn);
+
         if (ssid)
         {
             printk(" Z=%s\n", ssid);
             xfree(ssid);
@@ -1634,8 +1668,6 @@ static void domain_dump_evtchn_info(stru
             printk("\n");
         }
     }
-
-    spin_unlock(&d->event_lock);
 }
 
 static void dump_evtchn_info(unsigned char key)
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -555,12 +555,13 @@ static int flask_get_peer_sid(struct xen
     struct evtchn *chn;
     struct domain_security_struct *dsec;
 
-    spin_lock(&d->event_lock);
-
     if ( !port_is_valid(d, arg->evtchn) )
-        goto out;
+        return -EINVAL;
 
     chn = evtchn_from_port(d, arg->evtchn);
+
+    evtchn_read_lock(chn);
+
     if ( chn->state != ECS_INTERDOMAIN )
         goto out;
 
@@ -573,7 +574,7 @@ static int flask_get_peer_sid(struct xen
     rv = 0;
 
  out:
-    spin_unlock(&d->event_lock);
+    evtchn_read_unlock(chn);
     return rv;
 }

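The idea in the patch description above - a reader that only inspects one channel's state can get by with that channel's own lock, and a diagnostic dump can simply skip a channel that is currently being modified - can be seen in isolation in the following standalone sketch. It is plain pthreads, not Xen code, and every name in it (channel, dump_one, NR_CHANNELS) is made up for illustration.

/* Standalone model: one rwlock per channel instead of a domain-wide lock. */
#include <pthread.h>
#include <stdio.h>

#define NR_CHANNELS 4

struct channel {
    pthread_rwlock_t lock;   /* models the per-channel lock */
    int state;               /* models the ECS_* state */
    int notify_vcpu;
};

static struct channel channels[NR_CHANNELS];

/* Reader of a single channel: the per-channel lock is sufficient. */
static void dump_one(unsigned int port)
{
    struct channel *chn = &channels[port];

    /* Mirrors the trylock in the dump path: don't stall if the channel
     * is being changed right now, just report it as "in flux". */
    if ( pthread_rwlock_tryrdlock(&chn->lock) )
    {
        printf("%4u in flux\n", port);
        return;
    }
    printf("%4u state=%d vcpu=%d\n", port, chn->state, chn->notify_vcpu);
    pthread_rwlock_unlock(&chn->lock);
}

int main(void)
{
    for ( unsigned int i = 0; i < NR_CHANNELS; ++i )
        pthread_rwlock_init(&channels[i].lock, NULL);
    for ( unsigned int i = 0; i < NR_CHANNELS; ++i )
        dump_one(i);
    return 0;
}
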
Subject: [PATCH v5 2/6] evtchn: convert domain event lock to an r/w one
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini, Roger Pau Monné, Kevin Tian
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
Message-ID: <5f5fc6a7-6e27-8275-0f05-11ba5454156a@suse.com>
Date: Wed, 27 Jan 2021 09:16:07 +0100
In-Reply-To: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>

Especially for the use in evtchn_move_pirqs() (called when moving a
vCPU across pCPU-s) and the ones in EOI handling in PCI pass-through
code, serializing perhaps an entire domain isn't helpful when no state
(which isn't e.g. further protected by the per-channel lock) changes.

Unfortunately this implies dropping of lock profiling for this lock,
until r/w locks may get enabled for such functionality.

While ->notify_vcpu_id is now meant to be consistently updated with the
per-channel lock held, an extension applies to ECS_PIRQ: The field is
also guaranteed to not change with the per-domain event lock held for
writing. Therefore the link_pirq_port() call from evtchn_bind_pirq()
could in principle be moved out of the per-channel locked regions, but
this further code churn didn't seem worth it.

Signed-off-by: Jan Beulich
---
v5: Re-base, also over dropped earlier patch.
v4: Re-base, in particular over new earlier patches. Acquire both
    per-domain locks for writing in evtchn_close(). Adjust
    spin_barrier() related comments.
v3: Re-base.
v2: Consistently lock for writing in evtchn_reset(). Fix error path in
    pci_clean_dpci_irqs().
Lock for writing in pt_irq_time_out(), hvm_dirq_assist(), hvm_dpci_eoi(), and hvm_dpci_isairq_eoi(). Move rw_barrier() introduction here. Re-base over changes earlier in the series. --- a/xen/arch/x86/domain.c +++ b/xen/arch/x86/domain.c @@ -903,7 +903,7 @@ int arch_domain_soft_reset(struct domain if ( !is_hvm_domain(d) ) return -EINVAL; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); for ( i =3D 0; i < d->nr_pirqs ; i++ ) { if ( domain_pirq_to_emuirq(d, i) !=3D IRQ_UNBOUND ) @@ -913,7 +913,7 @@ int arch_domain_soft_reset(struct domain break; } } - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 if ( ret ) return ret; --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -528,9 +528,9 @@ void hvm_migrate_pirqs(struct vcpu *v) if ( !is_iommu_enabled(d) || !hvm_domain_irq(d)->dpci ) return; =20 - spin_lock(&d->event_lock); + read_lock(&d->event_lock); pt_pirq_iterate(d, migrate_pirq, v); - spin_unlock(&d->event_lock); + read_unlock(&d->event_lock); } =20 static bool hvm_get_pending_event(struct vcpu *v, struct x86_event *info) --- a/xen/arch/x86/hvm/irq.c +++ b/xen/arch/x86/hvm/irq.c @@ -404,9 +404,9 @@ int hvm_inject_msi(struct domain *d, uin { int rc; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); rc =3D map_domain_emuirq_pirq(d, pirq, IRQ_MSI_EMU); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); if ( rc ) return rc; info =3D pirq_info(d, pirq); --- a/xen/arch/x86/hvm/vioapic.c +++ b/xen/arch/x86/hvm/vioapic.c @@ -203,9 +203,9 @@ static int vioapic_hwdom_map_gsi(unsigne { gprintk(XENLOG_WARNING, "vioapic: error binding GSI %u: %d\n", gsi, ret); - spin_lock(&currd->event_lock); + write_lock(&currd->event_lock); unmap_domain_pirq(currd, pirq); - spin_unlock(&currd->event_lock); + write_unlock(&currd->event_lock); } pcidevs_unlock(); =20 --- a/xen/arch/x86/hvm/vmsi.c +++ b/xen/arch/x86/hvm/vmsi.c @@ -465,7 +465,7 @@ int msixtbl_pt_register(struct domain *d int r =3D -EINVAL; =20 ASSERT(pcidevs_locked()); - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 if ( !msixtbl_initialised(d) ) return -ENODEV; @@ -535,7 +535,7 @@ void msixtbl_pt_unregister(struct domain struct msixtbl_entry *entry; =20 ASSERT(pcidevs_locked()); - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 if ( !msixtbl_initialised(d) ) return; @@ -589,13 +589,13 @@ void msixtbl_pt_cleanup(struct domain *d if ( !msixtbl_initialised(d) ) return; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 list_for_each_entry_safe( entry, temp, &d->arch.hvm.msixtbl_list, list ) del_msixtbl_entry(entry); =20 - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); } =20 void msix_write_completion(struct vcpu *v) @@ -809,9 +809,9 @@ static void vpci_msi_disable(const struc ASSERT(!rc); } =20 - spin_lock(&pdev->domain->event_lock); + write_lock(&pdev->domain->event_lock); unmap_domain_pirq(pdev->domain, pirq); - spin_unlock(&pdev->domain->event_lock); + write_unlock(&pdev->domain->event_lock); pcidevs_unlock(); } =20 --- a/xen/arch/x86/io_apic.c +++ b/xen/arch/x86/io_apic.c @@ -2413,10 +2413,10 @@ int ioapic_guest_write(unsigned long phy } if ( pirq >=3D 0 ) { - spin_lock(&hardware_domain->event_lock); + write_lock(&hardware_domain->event_lock); ret =3D map_domain_pirq(hardware_domain, pirq, irq, MAP_PIRQ_TYPE_GSI, NULL); - spin_unlock(&hardware_domain->event_lock); + write_unlock(&hardware_domain->event_lock); if ( ret < 0 ) return ret; } --- a/xen/arch/x86/irq.c +++ 
b/xen/arch/x86/irq.c @@ -1552,7 +1552,7 @@ int pirq_guest_bind(struct vcpu *v, stru unsigned int max_nr_guests =3D will_share ? irq_max_guests : 1; int rc =3D 0; =20 - WARN_ON(!spin_is_locked(&v->domain->event_lock)); + WARN_ON(!rw_is_write_locked(&v->domain->event_lock)); BUG_ON(!local_irq_is_enabled()); =20 retry: @@ -1766,7 +1766,7 @@ void pirq_guest_unbind(struct domain *d, struct irq_desc *desc; int irq =3D 0; =20 - WARN_ON(!spin_is_locked(&d->event_lock)); + WARN_ON(!rw_is_write_locked(&d->event_lock)); =20 BUG_ON(!local_irq_is_enabled()); desc =3D pirq_spin_lock_irq_desc(pirq, NULL); @@ -1803,7 +1803,7 @@ static bool pirq_guest_force_unbind(stru unsigned int i; bool bound =3D false; =20 - WARN_ON(!spin_is_locked(&d->event_lock)); + WARN_ON(!rw_is_write_locked(&d->event_lock)); =20 BUG_ON(!local_irq_is_enabled()); desc =3D pirq_spin_lock_irq_desc(pirq, NULL); @@ -2045,7 +2045,7 @@ int get_free_pirq(struct domain *d, int { int i; =20 - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 if ( type =3D=3D MAP_PIRQ_TYPE_GSI ) { @@ -2070,7 +2070,7 @@ int get_free_pirqs(struct domain *d, uns { unsigned int i, found =3D 0; =20 - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 for ( i =3D d->nr_pirqs - 1; i >=3D nr_irqs_gsi; --i ) if ( is_free_pirq(d, pirq_info(d, i)) ) @@ -2098,7 +2098,7 @@ int map_domain_pirq( DECLARE_BITMAP(prepared, MAX_MSI_IRQS) =3D {}; DECLARE_BITMAP(granted, MAX_MSI_IRQS) =3D {}; =20 - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 if ( !irq_access_permitted(current->domain, irq)) return -EPERM; @@ -2317,7 +2317,7 @@ int unmap_domain_pirq(struct domain *d, return -EINVAL; =20 ASSERT(pcidevs_locked()); - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 info =3D pirq_info(d, pirq); if ( !info || (irq =3D info->arch.irq) <=3D 0 ) @@ -2444,13 +2444,13 @@ void free_domain_pirqs(struct domain *d) int i; =20 pcidevs_lock(); - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 for ( i =3D 0; i < d->nr_pirqs; i++ ) if ( domain_pirq_to_irq(d, i) > 0 ) unmap_domain_pirq(d, i); =20 - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); pcidevs_unlock(); } =20 @@ -2693,7 +2693,7 @@ int map_domain_emuirq_pirq(struct domain int old_emuirq =3D IRQ_UNBOUND, old_pirq =3D IRQ_UNBOUND; struct pirq *info; =20 - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 if ( !is_hvm_domain(d) ) return -EINVAL; @@ -2759,7 +2759,7 @@ int unmap_domain_pirq_emuirq(struct doma if ( (pirq < 0) || (pirq >=3D d->nr_pirqs) ) return -EINVAL; =20 - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 emuirq =3D domain_pirq_to_emuirq(d, pirq); if ( emuirq =3D=3D IRQ_UNBOUND ) @@ -2807,7 +2807,7 @@ static int allocate_pirq(struct domain * { int current_pirq; =20 - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); current_pirq =3D domain_irq_to_pirq(d, irq); if ( pirq < 0 ) { @@ -2879,7 +2879,7 @@ int allocate_and_map_gsi_pirq(struct dom } =20 /* Verify or get pirq. 
*/ - spin_lock(&d->event_lock); + write_lock(&d->event_lock); pirq =3D allocate_pirq(d, index, *pirq_p, irq, MAP_PIRQ_TYPE_GSI, NULL= ); if ( pirq < 0 ) { @@ -2892,7 +2892,7 @@ int allocate_and_map_gsi_pirq(struct dom *pirq_p =3D pirq; =20 done: - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return ret; } @@ -2933,7 +2933,7 @@ int allocate_and_map_msi_pirq(struct dom =20 pcidevs_lock(); /* Verify or get pirq. */ - spin_lock(&d->event_lock); + write_lock(&d->event_lock); pirq =3D allocate_pirq(d, index, *pirq_p, irq, type, &msi->entry_nr); if ( pirq < 0 ) { @@ -2946,7 +2946,7 @@ int allocate_and_map_msi_pirq(struct dom *pirq_p =3D pirq; =20 done: - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); pcidevs_unlock(); if ( ret ) { --- a/xen/arch/x86/physdev.c +++ b/xen/arch/x86/physdev.c @@ -34,7 +34,7 @@ static int physdev_hvm_map_pirq( =20 ASSERT(!is_hardware_domain(d)); =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); switch ( type ) { case MAP_PIRQ_TYPE_GSI: { @@ -84,7 +84,7 @@ static int physdev_hvm_map_pirq( break; } =20 - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return ret; } =20 @@ -154,18 +154,18 @@ int physdev_unmap_pirq(domid_t domid, in =20 if ( is_hvm_domain(d) && has_pirq(d) ) { - spin_lock(&d->event_lock); + write_lock(&d->event_lock); if ( domain_pirq_to_emuirq(d, pirq) !=3D IRQ_UNBOUND ) ret =3D unmap_domain_pirq_emuirq(d, pirq); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); if ( domid =3D=3D DOMID_SELF || ret ) goto free_domain; } =20 pcidevs_lock(); - spin_lock(&d->event_lock); + write_lock(&d->event_lock); ret =3D unmap_domain_pirq(d, pirq); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); pcidevs_unlock(); =20 free_domain: @@ -192,10 +192,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H ret =3D -EINVAL; if ( eoi.irq >=3D currd->nr_pirqs ) break; - spin_lock(&currd->event_lock); + read_lock(&currd->event_lock); pirq =3D pirq_info(currd, eoi.irq); if ( !pirq ) { - spin_unlock(&currd->event_lock); + read_unlock(&currd->event_lock); break; } if ( currd->arch.auto_unmask ) @@ -214,7 +214,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H && hvm_irq->gsi_assert_count[gsi] ) send_guest_pirq(currd, pirq); } - spin_unlock(&currd->event_lock); + read_unlock(&currd->event_lock); ret =3D 0; break; } @@ -626,7 +626,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H if ( copy_from_guest(&out, arg, 1) !=3D 0 ) break; =20 - spin_lock(&currd->event_lock); + write_lock(&currd->event_lock); =20 ret =3D get_free_pirq(currd, out.type); if ( ret >=3D 0 ) @@ -639,7 +639,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H ret =3D -ENOMEM; } =20 - spin_unlock(&currd->event_lock); + write_unlock(&currd->event_lock); =20 if ( ret >=3D 0 ) { --- a/xen/arch/x86/pv/shim.c +++ b/xen/arch/x86/pv/shim.c @@ -448,7 +448,7 @@ static long pv_shim_event_channel_op(int if ( rc ) = \ break; = \ = \ - spin_lock(&d->event_lock); = \ + write_lock(&d->event_lock); = \ rc =3D evtchn_allocate_port(d, op.port_field); = \ if ( rc ) = \ { = \ @@ -457,7 +457,7 @@ static long pv_shim_event_channel_op(int } = \ else = \ evtchn_reserve(d, op.port_field); = \ - spin_unlock(&d->event_lock); = \ + write_unlock(&d->event_lock); = \ = \ if ( !rc && __copy_to_guest(arg, &op, 1) ) = \ rc =3D -EFAULT; = \ @@ -585,11 +585,11 @@ static long pv_shim_event_channel_op(int if ( rc ) break; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); rc =3D evtchn_allocate_port(d, ipi.port); if ( rc ) { - spin_unlock(&d->event_lock); + 
write_unlock(&d->event_lock); =20 close.port =3D ipi.port; BUG_ON(xen_hypercall_event_channel_op(EVTCHNOP_close, &close)); @@ -598,7 +598,7 @@ static long pv_shim_event_channel_op(int =20 evtchn_assign_vcpu(d, ipi.port, ipi.vcpu); evtchn_reserve(d, ipi.port); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 if ( __copy_to_guest(arg, &ipi, 1) ) rc =3D -EFAULT; --- a/xen/common/event_channel.c +++ b/xen/common/event_channel.c @@ -294,7 +294,7 @@ static long evtchn_alloc_unbound(evtchn_ if ( d =3D=3D NULL ) return -ESRCH; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 if ( (port =3D get_free_port(d)) < 0 ) ERROR_EXIT_DOM(port, d); @@ -317,7 +317,7 @@ static long evtchn_alloc_unbound(evtchn_ =20 out: check_free_port(d, port); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); rcu_unlock_domain(d); =20 return rc; @@ -355,14 +355,14 @@ static long evtchn_bind_interdomain(evtc /* Avoid deadlock by first acquiring lock of domain with smaller id. */ if ( ld < rd ) { - spin_lock(&ld->event_lock); - spin_lock(&rd->event_lock); + write_lock(&ld->event_lock); + write_lock(&rd->event_lock); } else { if ( ld !=3D rd ) - spin_lock(&rd->event_lock); - spin_lock(&ld->event_lock); + write_lock(&rd->event_lock); + write_lock(&ld->event_lock); } =20 if ( (lport =3D get_free_port(ld)) < 0 ) @@ -403,9 +403,9 @@ static long evtchn_bind_interdomain(evtc =20 out: check_free_port(ld, lport); - spin_unlock(&ld->event_lock); + write_unlock(&ld->event_lock); if ( ld !=3D rd ) - spin_unlock(&rd->event_lock); + write_unlock(&rd->event_lock); =20 rcu_unlock_domain(rd); =20 @@ -436,7 +436,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t if ( (v =3D domain_vcpu(d, vcpu)) =3D=3D NULL ) return -ENOENT; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 if ( read_atomic(&v->virq_to_evtchn[virq]) ) ERROR_EXIT(-EEXIST); @@ -477,7 +477,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t write_atomic(&v->virq_to_evtchn[virq], port); =20 out: - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return rc; } @@ -493,7 +493,7 @@ static long evtchn_bind_ipi(evtchn_bind_ if ( domain_vcpu(d, vcpu) =3D=3D NULL ) return -ENOENT; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 if ( (port =3D get_free_port(d)) < 0 ) ERROR_EXIT(port); @@ -511,7 +511,7 @@ static long evtchn_bind_ipi(evtchn_bind_ bind->port =3D port; =20 out: - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return rc; } @@ -557,7 +557,7 @@ static long evtchn_bind_pirq(evtchn_bind if ( !is_hvm_domain(d) && !pirq_access_permitted(d, pirq) ) return -EPERM; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 if ( pirq_to_evtchn(d, pirq) !=3D 0 ) ERROR_EXIT(-EEXIST); @@ -597,7 +597,7 @@ static long evtchn_bind_pirq(evtchn_bind =20 out: check_free_port(d, port); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return rc; } @@ -611,7 +611,7 @@ int evtchn_close(struct domain *d1, int long rc =3D 0; =20 again: - spin_lock(&d1->event_lock); + write_lock(&d1->event_lock); =20 if ( !port_is_valid(d1, port1) ) { @@ -682,13 +682,11 @@ int evtchn_close(struct domain *d1, int rcu_lock_domain(d2); =20 if ( d1 < d2 ) - { - spin_lock(&d2->event_lock); - } + write_lock(&d2->event_lock); else if ( d1 !=3D d2 ) { - spin_unlock(&d1->event_lock); - spin_lock(&d2->event_lock); + write_unlock(&d1->event_lock); + write_lock(&d2->event_lock); goto again; } } @@ -735,11 +733,11 @@ int evtchn_close(struct domain *d1, int if ( d2 !=3D NULL ) { if ( d1 !=3D d2 ) - 
spin_unlock(&d2->event_lock); + write_unlock(&d2->event_lock); rcu_unlock_domain(d2); } =20 - spin_unlock(&d1->event_lock); + write_unlock(&d1->event_lock); =20 return rc; } @@ -1046,7 +1044,7 @@ long evtchn_bind_vcpu(unsigned int port, if ( (v =3D domain_vcpu(d, vcpu_id)) =3D=3D NULL ) return -ENOENT; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 if ( !port_is_valid(d, port) ) { @@ -1090,7 +1088,7 @@ long evtchn_bind_vcpu(unsigned int port, } =20 out: - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return rc; } @@ -1136,7 +1134,7 @@ int evtchn_reset(struct domain *d, bool if ( d !=3D current->domain && !d->controller_pause_count ) return -EINVAL; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 /* * If we are resuming, then start where we stopped. Otherwise, check @@ -1147,7 +1145,7 @@ int evtchn_reset(struct domain *d, bool if ( i > d->next_evtchn ) d->next_evtchn =3D i; =20 - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 if ( !i ) return -EBUSY; @@ -1159,14 +1157,14 @@ int evtchn_reset(struct domain *d, bool /* NB: Choice of frequency is arbitrary. */ if ( !(i & 0x3f) && hypercall_preempt_check() ) { - spin_lock(&d->event_lock); + write_lock(&d->event_lock); d->next_evtchn =3D i; - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return -ERESTART; } } =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 d->next_evtchn =3D 0; =20 @@ -1179,7 +1177,7 @@ int evtchn_reset(struct domain *d, bool evtchn_2l_init(d); } =20 - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return rc; } @@ -1369,7 +1367,7 @@ int alloc_unbound_xen_event_channel( struct evtchn *chn; int port, rc; =20 - spin_lock(&ld->event_lock); + write_lock(&ld->event_lock); =20 port =3D rc =3D get_free_port(ld); if ( rc < 0 ) @@ -1397,7 +1395,7 @@ int alloc_unbound_xen_event_channel( =20 out: check_free_port(ld, port); - spin_unlock(&ld->event_lock); + write_unlock(&ld->event_lock); =20 return rc < 0 ? rc : port; } @@ -1408,7 +1406,7 @@ void free_xen_event_channel(struct domai { /* * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing - * with the spin_barrier() and BUG_ON() in evtchn_destroy(). + * with the kind-of-barrier and BUG_ON() in evtchn_destroy(). */ smp_rmb(); BUG_ON(!d->is_dying); @@ -1428,7 +1426,7 @@ void notify_via_xen_event_channel(struct { /* * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing - * with the spin_barrier() and BUG_ON() in evtchn_destroy(). + * with the kind-of-barrier and BUG_ON() in evtchn_destroy(). */ smp_rmb(); ASSERT(ld->is_dying); @@ -1485,7 +1483,8 @@ int evtchn_init(struct domain *d, unsign return -ENOMEM; d->valid_evtchns =3D EVTCHNS_PER_BUCKET; =20 - spin_lock_init_prof(d, event_lock); + rwlock_init(&d->event_lock); + if ( get_free_port(d) !=3D 0 ) { free_evtchn_bucket(d, d->evtchn); @@ -1510,9 +1509,10 @@ int evtchn_destroy(struct domain *d) { unsigned int i; =20 - /* After this barrier no new event-channel allocations can occur. */ + /* After this kind-of-barrier no new event-channel allocations can occ= ur. */ BUG_ON(!d->is_dying); - spin_barrier(&d->event_lock); + read_lock(&d->event_lock); + read_unlock(&d->event_lock); =20 /* Close all existing event channels. 
*/ for ( i =3D d->valid_evtchns; --i; ) @@ -1570,13 +1570,13 @@ void evtchn_move_pirqs(struct vcpu *v) unsigned int port; struct evtchn *chn; =20 - spin_lock(&d->event_lock); + read_lock(&d->event_lock); for ( port =3D v->pirq_evtchn_head; port; port =3D chn->u.pirq.next_po= rt ) { chn =3D evtchn_from_port(d, port); pirq_set_affinity(d, chn->u.pirq.irq, mask); } - spin_unlock(&d->event_lock); + read_unlock(&d->event_lock); } =20 =20 @@ -1641,11 +1641,11 @@ static void domain_dump_evtchn_info(stru */ evtchn_read_unlock(chn); chn =3D NULL; - if ( spin_trylock(&d->event_lock) ) + if ( read_trylock(&d->event_lock) ) { int irq =3D domain_pirq_to_irq(d, pirq); =20 - spin_unlock(&d->event_lock); + read_unlock(&d->event_lock); printk(" p=3D%u i=3D%d", pirq, irq); } else --- a/xen/common/event_fifo.c +++ b/xen/common/event_fifo.c @@ -600,7 +600,7 @@ int evtchn_fifo_init_control(struct evtc if ( offset & (8 - 1) ) return -EINVAL; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 /* * If this is the first control block, setup an empty event array @@ -636,13 +636,13 @@ int evtchn_fifo_init_control(struct evtc else rc =3D map_control_block(v, gfn, offset); =20 - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return rc; =20 error: evtchn_fifo_destroy(d); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return rc; } =20 @@ -695,9 +695,9 @@ int evtchn_fifo_expand_array(const struc if ( !d->evtchn_fifo ) return -EOPNOTSUPP; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); rc =3D add_page_to_event_array(d, expand_array->array_gfn); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return rc; } --- a/xen/drivers/passthrough/vtd/x86/hvm.c +++ b/xen/drivers/passthrough/vtd/x86/hvm.c @@ -54,7 +54,7 @@ void hvm_dpci_isairq_eoi(struct domain * if ( !is_iommu_enabled(d) ) return; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 dpci =3D domain_get_irq_dpci(d); =20 @@ -63,5 +63,5 @@ void hvm_dpci_isairq_eoi(struct domain * /* Multiple mirq may be mapped to one isa irq */ pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq); } - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); } --- a/xen/drivers/passthrough/x86/hvm.c +++ b/xen/drivers/passthrough/x86/hvm.c @@ -105,7 +105,7 @@ static void pt_pirq_softirq_reset(struct { struct domain *d =3D pirq_dpci->dom; =20 - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_write_locked(&d->event_lock)); =20 switch ( cmpxchg(&pirq_dpci->state, 1 << STATE_SCHED, 0) ) { @@ -162,7 +162,7 @@ static void pt_irq_time_out(void *data) const struct hvm_irq_dpci *dpci; const struct dev_intx_gsi_link *digl; =20 - spin_lock(&irq_map->dom->event_lock); + write_lock(&irq_map->dom->event_lock); =20 if ( irq_map->flags & HVM_IRQ_DPCI_IDENTITY_GSI ) { @@ -177,7 +177,7 @@ static void pt_irq_time_out(void *data) hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq); irq_map->flags |=3D HVM_IRQ_DPCI_EOI_LATCH; pt_irq_guest_eoi(irq_map->dom, irq_map, NULL); - spin_unlock(&irq_map->dom->event_lock); + write_unlock(&irq_map->dom->event_lock); return; } =20 @@ -185,7 +185,7 @@ static void pt_irq_time_out(void *data) if ( unlikely(!dpci) ) { ASSERT_UNREACHABLE(); - spin_unlock(&irq_map->dom->event_lock); + write_unlock(&irq_map->dom->event_lock); return; } list_for_each_entry ( digl, &irq_map->digl_list, list ) @@ -204,7 +204,7 @@ static void pt_irq_time_out(void *data) =20 pt_pirq_iterate(irq_map->dom, pt_irq_guest_eoi, NULL); =20 - spin_unlock(&irq_map->dom->event_lock); + 
write_unlock(&irq_map->dom->event_lock); } =20 struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *d) @@ -288,7 +288,7 @@ int pt_irq_create_bind( return -EINVAL; =20 restart: - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 hvm_irq_dpci =3D domain_get_irq_dpci(d); if ( !hvm_irq_dpci && !is_hardware_domain(d) ) @@ -304,7 +304,7 @@ int pt_irq_create_bind( hvm_irq_dpci =3D xzalloc(struct hvm_irq_dpci); if ( hvm_irq_dpci =3D=3D NULL ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return -ENOMEM; } for ( i =3D 0; i < NR_HVM_DOMU_IRQS; i++ ) @@ -316,7 +316,7 @@ int pt_irq_create_bind( info =3D pirq_get_info(d, pirq); if ( !info ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return -ENOMEM; } pirq_dpci =3D pirq_dpci(info); @@ -331,7 +331,7 @@ int pt_irq_create_bind( */ if ( pt_pirq_softirq_active(pirq_dpci) ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); cpu_relax(); goto restart; } @@ -389,7 +389,7 @@ int pt_irq_create_bind( pirq_dpci->dom =3D NULL; pirq_dpci->flags =3D 0; pirq_cleanup_check(info, d); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return rc; } } @@ -399,7 +399,7 @@ int pt_irq_create_bind( =20 if ( (pirq_dpci->flags & mask) !=3D mask ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return -EBUSY; } =20 @@ -423,7 +423,7 @@ int pt_irq_create_bind( =20 dest_vcpu_id =3D hvm_girq_dest_2_vcpu_id(d, dest, dest_mode); pirq_dpci->gmsi.dest_vcpu_id =3D dest_vcpu_id; - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 pirq_dpci->gmsi.posted =3D false; vcpu =3D (dest_vcpu_id >=3D 0) ? d->vcpu[dest_vcpu_id] : NULL; @@ -483,7 +483,7 @@ int pt_irq_create_bind( =20 if ( !digl || !girq ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); xfree(girq); xfree(digl); return -ENOMEM; @@ -510,7 +510,7 @@ int pt_irq_create_bind( if ( pt_irq_bind->irq_type !=3D PT_IRQ_TYPE_PCI || pirq >=3D hvm_domain_irq(d)->nr_gsis ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return -EINVAL; } @@ -546,7 +546,7 @@ int pt_irq_create_bind( =20 if ( mask < 0 || trigger_mode < 0 ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 ASSERT_UNREACHABLE(); return -EINVAL; @@ -594,14 +594,14 @@ int pt_irq_create_bind( } pirq_dpci->flags =3D 0; pirq_cleanup_check(info, d); - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); xfree(girq); xfree(digl); return rc; } } =20 - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 if ( iommu_verbose ) { @@ -619,7 +619,7 @@ int pt_irq_create_bind( } =20 default: - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return -EOPNOTSUPP; } =20 @@ -672,13 +672,13 @@ int pt_irq_destroy_bind( return -EOPNOTSUPP; } =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); =20 hvm_irq_dpci =3D domain_get_irq_dpci(d); =20 if ( !hvm_irq_dpci && !is_hardware_domain(d) ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return -EINVAL; } =20 @@ -711,7 +711,7 @@ int pt_irq_destroy_bind( =20 if ( girq ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return -EINVAL; } =20 @@ -755,7 +755,7 @@ int pt_irq_destroy_bind( pirq_cleanup_check(pirq, d); } =20 - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 if ( what && iommu_verbose ) { @@ -799,7 +799,7 @@ int pt_pirq_iterate(struct domain *d, unsigned int pirq =3D 0, n, i; struct pirq *pirqs[8]; =20 - ASSERT(spin_is_locked(&d->event_lock)); + ASSERT(rw_is_locked(&d->event_lock)); =20 do { n 
=3D radix_tree_gang_lookup(&d->pirq_tree, (void **)pirqs, pirq, @@ -880,9 +880,9 @@ void hvm_dpci_msi_eoi(struct domain *d, (!hvm_domain_irq(d)->dpci && !is_hardware_domain(d)) ) return; =20 - spin_lock(&d->event_lock); + read_lock(&d->event_lock); pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector); - spin_unlock(&d->event_lock); + read_unlock(&d->event_lock); } =20 static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_d= pci) @@ -893,7 +893,7 @@ static void hvm_dirq_assist(struct domai return; } =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); if ( test_and_clear_bool(pirq_dpci->masked) ) { struct pirq *pirq =3D dpci_pirq(pirq_dpci); @@ -947,7 +947,7 @@ static void hvm_dirq_assist(struct domai } =20 out: - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); } =20 static void hvm_pirq_eoi(struct pirq *pirq) @@ -1007,7 +1007,7 @@ void hvm_dpci_eoi(struct domain *d, unsi =20 if ( is_hardware_domain(d) ) { - spin_lock(&d->event_lock); + write_lock(&d->event_lock); hvm_gsi_eoi(d, guest_gsi); goto unlock; } @@ -1018,7 +1018,7 @@ void hvm_dpci_eoi(struct domain *d, unsi return; } =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); hvm_irq_dpci =3D domain_get_irq_dpci(d); =20 if ( !hvm_irq_dpci ) @@ -1028,7 +1028,7 @@ void hvm_dpci_eoi(struct domain *d, unsi __hvm_dpci_eoi(d, girq); =20 unlock: - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); } =20 static int pci_clean_dpci_irq(struct domain *d, @@ -1066,7 +1066,7 @@ int arch_pci_clean_pirqs(struct domain * if ( !is_hvm_domain(d) ) return 0; =20 - spin_lock(&d->event_lock); + write_lock(&d->event_lock); hvm_irq_dpci =3D domain_get_irq_dpci(d); if ( hvm_irq_dpci !=3D NULL ) { @@ -1074,14 +1074,14 @@ int arch_pci_clean_pirqs(struct domain * =20 if ( ret ) { - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); return ret; } =20 hvm_domain_irq(d)->dpci =3D NULL; free_hvm_irq_dpci(hvm_irq_dpci); } - spin_unlock(&d->event_lock); + write_unlock(&d->event_lock); =20 return 0; } --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -377,7 +377,7 @@ struct domain unsigned int xen_evtchns; /* Port to resume from in evtchn_reset(), when in a continuation. 
*/ unsigned int next_evtchn; - spinlock_t event_lock; + rwlock_t event_lock; const struct evtchn_port_ops *evtchn_port_ops; struct evtchn_fifo_domain *evtchn_fifo; =20
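The reader/writer split the patch description above argues for can be modelled outside of Xen with a plain pthread_rwlock_t, as in the standalone sketch below. None of this is Xen code; the function names and the counter are invented purely to show which kind of path takes the lock in which mode.

/* Standalone model of the r/w conversion: traversal-only paths are readers
 * and may run concurrently; binding/closing paths are writers. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t event_lock = PTHREAD_RWLOCK_INITIALIZER;
static int nr_bound;

/* e.g. moving pIRQs on vCPU migration or EOI handling: reads state only */
static void reader_path(int id)
{
    pthread_rwlock_rdlock(&event_lock);   /* many readers may hold this */
    printf("reader %d sees %d bound ports\n", id, nr_bound);
    pthread_rwlock_unlock(&event_lock);
}

/* e.g. binding or closing a channel: changes state, needs exclusion */
static void writer_path(void)
{
    pthread_rwlock_wrlock(&event_lock);   /* excludes readers and writers */
    ++nr_bound;
    pthread_rwlock_unlock(&event_lock);
}

int main(void)
{
    writer_path();
    reader_path(0);
    reader_path(1);
    return 0;
}
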
Subject: [PATCH v5 3/6] evtchn: slightly defer lock acquire where possible
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
Message-ID:
Date: Wed, 27 Jan 2021 09:16:36 +0100
In-Reply-To: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>

port_is_valid() and evtchn_from_port() are fine to use without holding
any locks. Accordingly acquire the per-domain lock slightly later in
evtchn_close() and evtchn_bind_vcpu().

Signed-off-by: Jan Beulich
---
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -610,17 +610,14 @@ int evtchn_close(struct domain *d1, int
     int port2;
     long rc = 0;
 
- again:
-    write_lock(&d1->event_lock);
-
     if ( !port_is_valid(d1, port1) )
-    {
-        rc = -EINVAL;
-        goto out;
-    }
+        return -EINVAL;
 
     chn1 = evtchn_from_port(d1, port1);
 
+ again:
+    write_lock(&d1->event_lock);
+
     /* Guest cannot close a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(chn1)) && guest )
     {
@@ -1044,16 +1041,13 @@ long evtchn_bind_vcpu(unsigned int port,
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
         return -ENOENT;
 
-    write_lock(&d->event_lock);
-
     if ( !port_is_valid(d, port) )
-    {
-        rc = -EINVAL;
-        goto out;
-    }
+        return -EINVAL;
 
     chn = evtchn_from_port(d, port);
 
+    write_lock(&d->event_lock);
+
     /* Guest cannot re-bind a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(chn)) )
     {
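The reordering done by the patch above - validate the port without any lock, and only then take the (write) lock - is shown in the standalone sketch below. It is not Xen code; the lookup table, the lock and the do_close() function are assumptions made only to illustrate why checking first shortens the locked region and avoids an unlock on the early-error path.

/* Standalone sketch: lock-free validity check before taking the lock. */
#include <pthread.h>
#include <stddef.h>
#include <errno.h>

struct chan { int state; };

#define NR_PORTS 16
static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static struct chan table[NR_PORTS];

/* Lookup needs no lock, mirroring port_is_valid()/evtchn_from_port(). */
static struct chan *lookup(unsigned int port)
{
    return port < NR_PORTS ? &table[port] : NULL;
}

static int do_close(unsigned int port)
{
    struct chan *c = lookup(port);   /* validate first, without the lock */

    if ( !c )
        return -EINVAL;              /* no lock taken, nothing to undo */

    pthread_rwlock_wrlock(&lock);    /* lock only once there is work to do */
    c->state = 0;
    pthread_rwlock_unlock(&lock);
    return 0;
}

int main(void)
{
    return do_close(3) == 0 ? 0 : 1;
}
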
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=gEGTEYpdjKYg3S3Ltj5tTKIk5+nFXk5YGvD0lOGX3do=; b=dM4GiI1fRZ4aTkYEKrCcaEnsfHx7/mSrbluDCPLEJH8zn/go4wVrf6mxeiGbkNODIUEkm8 /AKTS1oPdNPA39tJkwdqRbmPI2nyLvwTsGEh5+HpXD+MBoWYh8BjjMI9iBjRsJPxytTOK5 ihUgX1lgScKG0ojFxNb54k9M8D5wJoo= Subject: [PATCH v5 4/6] evtchn: add helper for port_is_valid() + evtchn_from_port() From: Jan Beulich To: "xen-devel@lists.xenproject.org" Cc: Andrew Cooper , George Dunlap , Ian Jackson , Julien Grall , Wei Liu , Stefano Stabellini References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com> Message-ID: Date: Wed, 27 Jan 2021 09:16:59 +0100 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.6.1 MIME-Version: 1.0 In-Reply-To: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com> Content-Language: en-US Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @suse.com) Content-Type: text/plain; charset="utf-8" The combination is pretty common, so adding a simple local helper seems worthwhile. Make it const- and type-correct, in turn requiring the two called function to also be const-correct (and at this occasion also make them type-correct). Signed-off-by: Jan Beulich Acked-by: Julien Grall --- v4: New. --- a/xen/common/event_channel.c +++ b/xen/common/event_channel.c @@ -147,6 +147,11 @@ static bool virq_is_global(unsigned int return true; } =20 +static struct evtchn *_evtchn_from_port(const struct domain *d, + evtchn_port_t port) +{ + return port_is_valid(d, port) ? evtchn_from_port(d, port) : NULL; +} =20 static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int p= ort) { @@ -369,9 +374,9 @@ static long evtchn_bind_interdomain(evtc ERROR_EXIT(lport); lchn =3D evtchn_from_port(ld, lport); =20 - if ( !port_is_valid(rd, rport) ) + rchn =3D _evtchn_from_port(rd, rport); + if ( !rchn ) ERROR_EXIT_DOM(-EINVAL, rd); - rchn =3D evtchn_from_port(rd, rport); if ( (rchn->state !=3D ECS_UNBOUND) || (rchn->u.unbound.remote_domid !=3D ld->domain_id) ) ERROR_EXIT_DOM(-EINVAL, rd); @@ -606,15 +611,12 @@ static long evtchn_bind_pirq(evtchn_bind int evtchn_close(struct domain *d1, int port1, bool guest) { struct domain *d2 =3D NULL; - struct evtchn *chn1, *chn2; - int port2; + struct evtchn *chn1 =3D _evtchn_from_port(d1, port1), *chn2; long rc =3D 0; =20 - if ( !port_is_valid(d1, port1) ) + if ( !chn1 ) return -EINVAL; =20 - chn1 =3D evtchn_from_port(d1, port1); - again: write_lock(&d1->event_lock); =20 @@ -700,10 +702,8 @@ int evtchn_close(struct domain *d1, int goto out; } =20 - port2 =3D chn1->u.interdomain.remote_port; - BUG_ON(!port_is_valid(d2, port2)); - - chn2 =3D evtchn_from_port(d2, port2); + chn2 =3D _evtchn_from_port(d2, chn1->u.interdomain.remote_port); + BUG_ON(!chn2); BUG_ON(chn2->state !=3D ECS_INTERDOMAIN); BUG_ON(chn2->u.interdomain.remote_dom !=3D d1); =20 @@ -741,15 +741,13 @@ int evtchn_close(struct domain *d1, int =20 int evtchn_send(struct domain *ld, unsigned int lport) { - struct evtchn *lchn, *rchn; + struct evtchn *lchn =3D _evtchn_from_port(ld, lport), *rchn; struct domain *rd; int rport, ret =3D 0; =20 - if ( !port_is_valid(ld, lport) ) + if ( !lchn ) return -EINVAL; =20 - lchn =3D evtchn_from_port(ld, lport); - evtchn_read_lock(lchn); =20 /* Guest cannot send via a Xen-attached event channel. 
*/ @@ -961,7 +959,6 @@ int evtchn_status(evtchn_status_t *statu { struct domain *d; domid_t dom =3D status->dom; - int port =3D status->port; struct evtchn *chn; long rc =3D 0; =20 @@ -969,14 +966,13 @@ int evtchn_status(evtchn_status_t *statu if ( d =3D=3D NULL ) return -ESRCH; =20 - if ( !port_is_valid(d, port) ) + chn =3D _evtchn_from_port(d, status->port); + if ( !chn ) { rcu_unlock_domain(d); return -EINVAL; } =20 - chn =3D evtchn_from_port(d, port); - evtchn_read_lock(chn); =20 if ( consumer_is_xen(chn) ) @@ -1041,11 +1037,10 @@ long evtchn_bind_vcpu(unsigned int port, if ( (v =3D domain_vcpu(d, vcpu_id)) =3D=3D NULL ) return -ENOENT; =20 - if ( !port_is_valid(d, port) ) + chn =3D _evtchn_from_port(d, port); + if ( !chn ) return -EINVAL; =20 - chn =3D evtchn_from_port(d, port); - write_lock(&d->event_lock); =20 /* Guest cannot re-bind a Xen-attached event channel. */ @@ -1091,13 +1086,11 @@ long evtchn_bind_vcpu(unsigned int port, int evtchn_unmask(unsigned int port) { struct domain *d =3D current->domain; - struct evtchn *evtchn; + struct evtchn *evtchn =3D _evtchn_from_port(d, port); =20 - if ( unlikely(!port_is_valid(d, port)) ) + if ( unlikely(!evtchn) ) return -EINVAL; =20 - evtchn =3D evtchn_from_port(d, port); - evtchn_read_lock(evtchn); =20 evtchn_port_unmask(d, evtchn); @@ -1180,14 +1173,12 @@ static long evtchn_set_priority(const st { struct domain *d =3D current->domain; unsigned int port =3D set_priority->port; - struct evtchn *chn; + struct evtchn *chn =3D _evtchn_from_port(d, port); long ret; =20 - if ( !port_is_valid(d, port) ) + if ( !chn ) return -EINVAL; =20 - chn =3D evtchn_from_port(d, port); - evtchn_read_lock(chn); =20 ret =3D evtchn_port_set_priority(d, chn, set_priority->priority); @@ -1413,10 +1404,10 @@ void free_xen_event_channel(struct domai =20 void notify_via_xen_event_channel(struct domain *ld, int lport) { - struct evtchn *lchn, *rchn; + struct evtchn *lchn =3D _evtchn_from_port(ld, lport), *rchn; struct domain *rd; =20 - if ( !port_is_valid(ld, lport) ) + if ( !lchn ) { /* * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing @@ -1427,8 +1418,6 @@ void notify_via_xen_event_channel(struct return; } =20 - lchn =3D evtchn_from_port(ld, lport); - if ( !evtchn_read_trylock(lchn) ) return; =20 @@ -1582,16 +1571,17 @@ static void domain_dump_evtchn_info(stru "Polling vCPUs: {%*pbl}\n" " port [p/m/s]\n", d->domain_id, d->max_vcpus, d->poll_mask); =20 - for ( port =3D 1; port_is_valid(d, port); ++port ) + for ( port =3D 1; ; ++port ) { - struct evtchn *chn; + struct evtchn *chn =3D _evtchn_from_port(d, port); char *ssid; =20 + if ( !chn ) + break; + if ( !(port & 0x3f) ) process_pending_softirqs(); =20 - chn =3D evtchn_from_port(d, port); - if ( !evtchn_read_trylock(chn) ) { printk(" %4u in flux\n", port); --- a/xen/include/xen/event.h +++ b/xen/include/xen/event.h @@ -120,7 +120,7 @@ static inline void evtchn_read_unlock(st read_unlock(&evtchn->lock); } =20 -static inline bool_t port_is_valid(struct domain *d, unsigned int p) +static inline bool port_is_valid(const struct domain *d, evtchn_port_t p) { if ( p >=3D read_atomic(&d->valid_evtchns) ) return false; @@ -135,7 +135,8 @@ static inline bool_t port_is_valid(struc return true; } =20 -static inline struct evtchn *evtchn_from_port(struct domain *d, unsigned i= nt p) +static inline struct evtchn *evtchn_from_port(const struct domain *d, + evtchn_port_t p) { if ( p < EVTCHNS_PER_BUCKET ) return &d->evtchn[array_index_nospec(p, EVTCHNS_PER_BUCKET)]; From nobody Sun May 5 06:44:27 2024 Delivered-To: 
From nobody Sun May 5 06:44:27 2024
Subject: [PATCH v5 5/6] evtchn: type adjustments
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu,
 Stefano Stabellini
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
Message-ID: <3cb6de31-39e3-43ff-2a9f-a09aa1b1fc26@suse.com>
Date: Wed, 27 Jan 2021 09:17:22 +0100
In-Reply-To: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
Content-Type: text/plain; charset="utf-8"

First of all, avoid "long" where "int" suffices, in particular when merely
conveying error codes. 32-bit values are slightly cheaper to deal with on
x86, and their processing is at least no more expensive on Arm.

Where possible use evtchn_port_t for port numbers and unsigned int for
other unsigned quantities in adjacent code. In evtchn_set_priority()
eliminate a local variable altogether instead of changing its type.

Signed-off-by: Jan Beulich
---
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -287,13 +286,12 @@ void evtchn_free(struct domain *d, struc
     xsm_evtchn_close_post(chn);
 }
 
-static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
+static int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 {
     struct evtchn *chn;
     struct domain *d;
-    int            port;
+    int            port, rc;
     domid_t        dom = alloc->dom;
-    long           rc;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -346,13 +345,13 @@ static void double_evtchn_unlock(struct
     evtchn_write_unlock(rchn);
 }
 
-static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
+static int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
 {
     struct evtchn *lchn, *rchn;
     struct domain *ld = current->domain, *rd;
-    int            lport, rport = bind->remote_port;
+    int            lport, rc;
+    evtchn_port_t  rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
-    long           rc;
 
     if ( (rd = rcu_lock_domain_by_any_id(rdom)) == NULL )
         return -ESRCH;
@@ -488,12 +487,12 @@ int evtchn_bind_virq(evtchn_bind_virq_t
 }
 
 
-static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
+static int evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 {
     struct evtchn *chn;
     struct domain *d = current->domain;
-    int            port, vcpu = bind->vcpu;
-    long           rc = 0;
+    int            port, rc = 0;
+    unsigned int   vcpu = bind->vcpu;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -547,16 +546,16 @@ static void unlink_pirq_port(struct evtc
 }
 
 
-static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
+static int evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
 {
     struct evtchn *chn;
     struct domain *d = current->domain;
     struct vcpu   *v = d->vcpu[0];
     struct pirq   *info;
-    int            port = 0, pirq = bind->pirq;
-    long           rc;
+    int            port = 0, rc;
+    unsigned int   pirq = bind->pirq;
 
-    if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
+    if ( pirq >= d->nr_pirqs )
         return -EINVAL;
 
     if ( !is_hvm_domain(d) && !pirq_access_permitted(d, pirq) )
@@ -612,7 +611,7 @@ int evtchn_close(struct domain *d1, int
 {
     struct domain *d2 = NULL;
     struct evtchn *chn1 = _evtchn_from_port(d1, port1), *chn2;
-    long           rc = 0;
+    int            rc = 0;
 
     if ( !chn1 )
         return -EINVAL;
@@ -960,7 +959,7 @@ int evtchn_status(evtchn_status_t *statu
     struct domain *d;
     domid_t dom = status->dom;
     struct evtchn *chn;
-    long rc = 0;
+    int rc = 0;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -1026,11 +1025,11 @@ int evtchn_status(evtchn_status_t *statu
 }
 
 
-long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id)
+int evtchn_bind_vcpu(evtchn_port_t port, unsigned int vcpu_id)
 {
     struct domain *d = current->domain;
     struct evtchn *chn;
-    long           rc = 0;
+    int            rc = 0;
     struct vcpu   *v;
 
     /* Use the vcpu info to prevent speculative out-of-bound accesses */
@@ -1169,12 +1168,11 @@ int evtchn_reset(struct domain *d, bool
     return rc;
 }
 
-static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
+static int evtchn_set_priority(const struct evtchn_set_priority *set_priority)
 {
     struct domain *d = current->domain;
-    unsigned int port = set_priority->port;
-    struct evtchn *chn = _evtchn_from_port(d, port);
-    long ret;
+    struct evtchn *chn = _evtchn_from_port(d, set_priority->port);
+    int ret;
 
     if ( !chn )
         return -EINVAL;
@@ -1190,7 +1188,7 @@ static long evtchn_set_priority(const st
 
 long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    long rc;
+    int rc;
 
     switch ( cmd )
     {
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -54,7 +54,7 @@ void send_guest_pirq(struct domain *, co
 int evtchn_send(struct domain *d, unsigned int lport);
 
 /* Bind a local event-channel port to the specified VCPU. */
-long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id);
+int evtchn_bind_vcpu(evtchn_port_t port, unsigned int vcpu_id);
 
 /* Bind a VIRQ. */
 int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port);
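
The convention the above patch converges on can be condensed into a small
self-contained sketch. This is an illustration only, not Xen code: the
example_* names are made up, the 32-bit width of evtchn_port_t is an
assumption, and a tiny static array stands in for the real per-domain port
lookup. Port numbers travel as evtchn_port_t, other unsigned quantities as
unsigned int, and a function that only conveys an error code returns int.

/* Illustrative sketch only -- not part of the patch or of Xen. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t evtchn_port_t;          /* assumption: 32-bit port numbers */

struct example_channel {
    unsigned int notify_vcpu_id;
};

#define EXAMPLE_NR_PORTS 8
static struct example_channel example_channels[EXAMPLE_NR_PORTS];

/* Hypothetical lookup helper, loosely modelled on _evtchn_from_port(). */
static struct example_channel *example_from_port(evtchn_port_t port)
{
    return port < EXAMPLE_NR_PORTS ? &example_channels[port] : NULL;
}

/* int, not long: the return value only conveys an error code. */
static int example_bind_vcpu(evtchn_port_t port, unsigned int vcpu_id)
{
    struct example_channel *chn = example_from_port(port);

    if ( !chn )
        return -EINVAL;

    chn->notify_vcpu_id = vcpu_id;
    return 0;
}

int main(void)
{
    printf("bind port 3 -> %d\n", example_bind_vcpu(3, 1));   /* prints 0 */
    printf("bind port 99 -> %d\n", example_bind_vcpu(99, 1)); /* prints -EINVAL */
    return 0;
}
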
From nobody Sun May 5 06:44:27 2024
Subject: [PATCH v5 6/6] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu,
 Stefano Stabellini
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
Message-ID: <5c3e705a-d028-dc9e-0f22-cb2e55f250f7@suse.com>
Date: Wed, 27 Jan 2021 09:17:42 +0100
In-Reply-To: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
Content-Type: text/plain; charset="utf-8"

The per-vCPU virq_lock, which is being held anyway, together with there
not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
is zero, provides sufficient guarantees. Undo the lock addition done for
XSA-343 (commit e045199c7c9c "evtchn: address races with evtchn_reset()").
Update the description next to struct evtchn_port_ops accordingly.

Signed-off-by: Jan Beulich
---
v4: Move to end of series, for being the most controversial change.
v3: Re-base.
v2: New.
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -812,7 +812,6 @@ void send_guest_vcpu_virq(struct vcpu *v
     unsigned long flags;
     int port;
     struct domain *d;
-    struct evtchn *chn;
 
     ASSERT(!virq_is_global(virq));
 
@@ -823,12 +822,7 @@ void send_guest_vcpu_virq(struct vcpu *v
         goto out;
 
     d = v->domain;
-    chn = evtchn_from_port(d, port);
-    if ( evtchn_read_trylock(chn) )
-    {
-        evtchn_port_set_pending(d, v->vcpu_id, chn);
-        evtchn_read_unlock(chn);
-    }
+    evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));
 
  out:
     read_unlock_irqrestore(&v->virq_lock, flags);
@@ -857,11 +851,7 @@ void send_guest_global_virq(struct domai
         goto out;
 
     chn = evtchn_from_port(d, port);
-    if ( evtchn_read_trylock(chn) )
-    {
-        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-        evtchn_read_unlock(chn);
-    }
+    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
 
  out:
     read_unlock_irqrestore(&v->virq_lock, flags);
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -193,9 +193,16 @@ int evtchn_reset(struct domain *d, bool
  * Low-level event channel port ops.
  *
  * All hooks have to be called with a lock held which prevents the channel
- * from changing state. This may be the domain event lock, the per-channel
- * lock, or in the case of sending interdomain events also the other side's
- * per-channel lock. Exceptions apply in certain cases for the PV shim.
+ * from changing state. This may be
+ * - the domain event lock,
+ * - the per-channel lock,
+ * - in the case of sending interdomain events the other side's per-channel
+ *   lock,
+ * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
+ *   combination with the ordering enforced through how the vCPU's
+ *   virq_to_evtchn[] gets updated),
+ * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
+ * Exceptions apply in certain cases for the PV shim.
  */
 struct evtchn_port_ops {
     void (*init)(struct domain *d, struct evtchn *evtchn);
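
The argument made in the commit message above can be pictured with a small
user-space analogue. This is illustrative only: the example_* names are
made up, and a pthread rwlock stands in for the per-vCPU virq_lock (the
hypervisor side uses read/write locking of v->virq_lock with IRQs
disabled). The sender raises whatever port it finds in the table while
holding the read side and does nothing when the entry is zero, so clearing
the entry under the write side is enough to keep senders away from a
channel that is about to be freed or reset -- which is why the per-channel
trylock can be dropped.

/* Illustrative sketch only -- not part of the patch or of Xen. */
#include <pthread.h>
#include <stdio.h>

#define EXAMPLE_NR_VIRQS 4

/* Models the per-vCPU virq_lock and virq_to_evtchn[] mapping. */
static pthread_rwlock_t example_virq_lock = PTHREAD_RWLOCK_INITIALIZER;
static unsigned int example_virq_to_port[EXAMPLE_NR_VIRQS];

/* Stand-in for evtchn_port_set_pending(): only ever reached for a live port. */
static void example_set_pending(unsigned int port)
{
    printf("set pending on port %u\n", port);
}

/* Sender side: mirrors send_guest_vcpu_virq() after the patch. */
static void example_send_virq(unsigned int virq)
{
    unsigned int port;

    pthread_rwlock_rdlock(&example_virq_lock);

    port = example_virq_to_port[virq];
    if ( port )                      /* zero means "not bound": do nothing */
        example_set_pending(port);

    pthread_rwlock_unlock(&example_virq_lock);
}

/* Teardown side: the binding is removed before the channel itself goes away. */
static void example_unbind_virq(unsigned int virq)
{
    pthread_rwlock_wrlock(&example_virq_lock);
    example_virq_to_port[virq] = 0;  /* senders can no longer reach the port */
    pthread_rwlock_unlock(&example_virq_lock);
    /* Only now may the underlying channel be freed or reset. */
}

int main(void)
{
    example_virq_to_port[1] = 5;
    example_send_virq(1);            /* sets pending on port 5 */
    example_unbind_virq(1);
    example_send_virq(1);            /* no-op: the entry is zero */
    return 0;
}
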