From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Mon, 12 Oct 2020 11:10:57 +0200
Message-Id: <20201012091058.27023-2-jgross@suse.com>
In-Reply-To: <20201012091058.27023-1-jgross@suse.com>
References: <20201012091058.27023-1-jgross@suse.com>

The queue for a FIFO event is determined by the vcpu_id and the priority
of the event. When sending an event it can happen that the event needs
to change queues, and the old queue must still be identifiable in order
to keep the links between queue elements intact. For this purpose the
event channel contains last_priority and last_vcpu_id values identifying
the old queue.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, so no inconsistent pairing of
the two values can ever be observed.

Signed-off-by: Juergen Gross
---
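Note (illustration only, not part of the patch): below is a minimal
user-space model of the union introduced by this patch, with C11 atomics
standing in for Xen's read_atomic()/write_atomic(); the names
lastq_model, save_lastq() and load_lastq() are made up for this sketch.
Packing both fields into one 32-bit word means a single atomic load or
store moves them together, so a reader can never observe a
priority/vcpu pairing that was not written as a pair.

    #include <stdint.h>
    #include <stdatomic.h>

    union lastq_model {
        uint32_t raw;
        struct {
            uint8_t  last_priority;
            uint16_t last_vcpu_id;
        };
    };

    static _Atomic uint32_t fifo_lastq;   /* stands in for evtchn->fifo_lastq */

    static void save_lastq(uint8_t prio, uint16_t vcpu)
    {
        /* Build the pair in a local union, then publish it in one store. */
        union lastq_model q = { .last_priority = prio, .last_vcpu_id = vcpu };

        atomic_store(&fifo_lastq, q.raw);
    }

    static union lastq_model load_lastq(void)
    {
        /* A single load yields a consistent snapshot of both fields. */
        return (union lastq_model){ .raw = atomic_load(&fifo_lastq) };
    }

In the hypervisor the same single-access guarantee is obtained from
read_atomic()/write_atomic() on the u32, without C11 atomics.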
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index fc189152e1..fffbd409c8 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    u32 raw;
+    struct {
+        u8 last_priority;
+        u16 last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq;
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2

From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini
Subject: [PATCH 2/2] xen/evtchn: rework per event channel lock
Date: Mon, 12 Oct 2020 11:10:58 +0200
Message-Id: <20201012091058.27023-3-jgross@suse.com>
In-Reply-To: <20201012091058.27023-1-jgross@suse.com>
References: <20201012091058.27023-1-jgross@suse.com>

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases. Rework the per
event channel lock to be non-blocking for the case of sending an event,
removing the need to disable interrupts for taking the lock.

The lock is needed to avoid races between sending an event or querying
the channel's state on the one hand and removal of the event channel on
the other.

Use a locking scheme similar to a rwlock, but with some modifications:

- Sending an event or querying the event channel's state uses an
  operation similar to read_trylock(); when the lock cannot be obtained
  the send is omitted or a default state is returned.

- Closing an event channel is similar to write_lock(), but without real
  fairness regarding multiple writers (this saves some space in the
  event channel structure, and multiple writers are impossible anyway,
  as closing an event channel requires the domain's event_lock to be
  held).

With this locking scheme it is mandatory that a writer will always
either start with an unbound or free event channel or will end with an
unbound or free event channel, as otherwise the reaction of a reader
not getting the lock would be wrong.

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross
---
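Note (illustration only, not part of the patch): below is a minimal
user-space model of the counting scheme implemented by the
evtchn_*_lock() helpers added to xen/include/xen/event.h in this patch,
with C11 atomics standing in for Xen's atomic_t; WRITE_INC and the
*_model() names are made up for this sketch. The lock is a plain
counter: each reader adds 1, the single writer adds a value larger than
the maximum possible number of readers, so any counter value >=
WRITE_INC means "writer active" and a reader backs off instead of
spinning.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define WRITE_INC 128        /* must exceed the max number of readers */

    static atomic_int lock;      /* 0 = free, < WRITE_INC = readers only */

    static bool read_trylock_model(void)
    {
        if ( atomic_load(&lock) >= WRITE_INC )   /* writer active */
            return false;

        if ( atomic_fetch_add(&lock, 1) + 1 < WRITE_INC )
            return true;                         /* read lock acquired */

        atomic_fetch_sub(&lock, 1);              /* writer won the race: undo */
        return false;
    }

    static void write_lock_model(void)
    {
        /* Announce the writer, then wait for all readers to drain. */
        atomic_fetch_add(&lock, WRITE_INC);
        while ( atomic_load(&lock) != WRITE_INC )
            ;                                    /* cpu_relax() in Xen */
    }

    static void unlock_model(int inc)            /* inc: 1 or WRITE_INC */
    {
        atomic_fetch_sub(&lock, inc);
    }

Readers never wait: a failed trylock means the channel is being torn
down, and by the invariant stated above it is (or is about to become)
unbound or free, so skipping the send or returning a default state is
the correct reaction.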
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 109 +++++++++++++++++--------------------
 xen/include/xen/event.h    |  55 +++++++++++++++----
 xen/include/xen/sched.h    |   2 +-
 5 files changed, 104 insertions(+), 77 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..77290032f5 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
             pirq = domain_irq_to_pirq(d, irq);
             info = pirq_info(d, pirq);
             evtchn = evtchn_from_port(d, info->evtchn);
-            local_irq_disable();
-            if ( spin_trylock(&evtchn->lock) )
+            if ( evtchn_tryread_lock(evtchn) )
             {
                 pending = evtchn_is_pending(d, evtchn);
                 masked = evtchn_is_masked(d, evtchn);
-                spin_unlock(&evtchn->lock);
+                evtchn_read_unlock(evtchn);
             }
-            local_irq_enable();
             printk("d%d:%3d(%c%c%c)%c",
                    d->domain_id, pirq, "-P?"[pending],
                    "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..3734250bf7 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_tryread_lock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index e365b5498f..398a1e7aa0 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -131,7 +131,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        atomic_set(&chn[i].lock, 0);
     }
     return chn;
 }
@@ -253,7 +253,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -269,14 +268,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
    evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -289,32 +288,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -324,7 +317,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -377,7 +369,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -400,7 +392,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -438,14 +429,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -462,7 +453,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -474,13 +464,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -524,7 +514,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -557,14 +546,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -585,7 +574,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -686,14 +674,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -701,9 +689,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
 out:
     if ( d2 != NULL )
@@ -723,7 +711,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -736,7 +723,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_tryread_lock(lchn) )
+        return 0;
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -771,7 +759,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
 out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
 out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -841,7 +833,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -856,9 +847,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1060,15 +1053,16 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
-    evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_tryread_lock(evtchn) )
+    {
+        evtchn_port_unmask(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return 0;
 }
@@ -1327,7 +1321,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1340,14 +1333,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1383,7 +1376,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1398,7 +1390,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_tryread_lock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1408,7 +1401,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 509d3ae861..abf26a892c 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,39 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    int val;
+
+    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
+          val != EVENT_WRITE_LOCK_INC;
+          val = atomic_read(&evtchn->lock) )
+        cpu_relax();
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
+}
+
+static inline bool evtchn_tryread_lock(struct evtchn *evtchn)
+{
+    if ( atomic_read(&evtchn->lock) >= EVENT_WRITE_LOCK_INC )
+        return false;
+
+    if ( atomic_inc_return(&evtchn->lock) < EVENT_WRITE_LOCK_INC )
+        return true;
+
+    atomic_dec(&evtchn->lock);
+    return false;
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    atomic_dec(&evtchn->lock);
+}
+
 static inline unsigned int max_evtchns(const struct domain *d)
 {
     return d->evtchn_fifo ? EVTCHN_FIFO_NR_CHANNELS
@@ -249,12 +282,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
 static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
-    bool rc;
-    unsigned long flags;
+    bool rc = true;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
-    rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_tryread_lock(evtchn) )
+    {
+        rc = evtchn_is_masked(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return rc;
 }
@@ -274,12 +307,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
-        if ( evtchn_usable(evtchn) )
-            rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+        if ( evtchn_tryread_lock(evtchn) )
+        {
+            if ( evtchn_usable(evtchn) )
+                rc = evtchn_is_pending(d, evtchn);
+            evtchn_read_unlock(evtchn);
+        }
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..096e0ec6af 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    atomic_t lock;         /* kind of rwlock, use evtchn_*_[un]lock() */
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
-- 
2.26.2