From nobody Fri Apr 26 14:41:15 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Julien Grall
Subject: [PATCH v6 1/3] xen/events: access last_priority and last_vcpu_id together
Date: Mon, 9 Nov 2020 17:38:24 +0100
Message-Id: <20201109163826.13035-2-jgross@suse.com>
In-Reply-To: <20201109163826.13035-1-jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>

The queue for a fifo event depends on the vcpu_id and the priority of the
event. When an event is sent, it may need to change queues, and the old
queue must remain identifiable so that the links between its queue elements
can be kept intact. For this purpose the event channel contains
last_priority and last_vcpu_id values identifying the old queue.

To avoid races, always access last_priority and last_vcpu_id together,
with a single atomic operation, so that no inconsistency between the two
values can be observed.
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index c6e58d2a1a..79090c04ca 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    uint32_t raw;
+    struct {
+        uint8_t last_priority;
+        uint16_t last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
     /* Moved to a different queue? */
     if ( old_q != q )
     {
-        evtchn->last_vcpu_id = v->vcpu_id;
-        evtchn->last_priority = q->priority;
+        union evtchn_fifo_lastq lastq = { };
+
+        lastq.last_vcpu_id = v->vcpu_id;
+        lastq.last_priority = q->priority;
+        write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
         spin_unlock_irqrestore(&old_q->lock, flags);
         spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2

From nobody Fri Apr 26 14:41:15 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini
Subject: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
Date: Mon, 9 Nov 2020 17:38:25 +0100
Message-Id: <20201109163826.13035-3-jgross@suse.com>
In-Reply-To: <20201109163826.13035-1-jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.
Rework the per event channel lock to be non-blocking on the event sending
path, removing the need to disable interrupts for taking the lock. The
lock is needed to avoid races between event channel state changes
(creation, closing, binding) and normal operations (setting pending,
[un]masking, priority changes).

Use a rwlock, but with some restrictions:

- Changing the state of an event channel (creation, closing, binding)
  needs to use write_lock(), with an ASSERT() verifying that the lock is
  taken as writer only when the state of the event channel, either before
  or after the locked region, is appropriate (i.e. free or unbound).

- Sending an event mostly needs to use read_trylock(); if the lock cannot
  be obtained, the operation is omitted. This is needed as sending an
  event can happen with interrupts off (at least in some cases).

- Dumping the event channel state for debug purposes uses read_trylock()
  too, in order to avoid blocking in case the lock is held as writer for
  a long time.

- All other cases can use read_lock().
Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross
Acked-by: Julien Grall
Reviewed-by: Jan Beulich
---
V4:
- switch to rwlock
- add ASSERT() to verify correct write_lock() usage

V3:
- corrected a copy-and-paste error (Jan Beulich)
- corrected unlocking in two cases (Jan Beulich)
- renamed evtchn_read_trylock() (Jan Beulich)
- added some comments and an ASSERT() for evtchn_write_lock()
- set EVENT_WRITE_LOCK_INC to INT_MIN

V2:
- added needed barriers

Signed-off-by: Juergen Gross
---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 138 +++++++++++++++++++++----------------
 xen/include/xen/event.h    |  29 ++++++--
 xen/include/xen/sched.h    |   5 +-
 5 files changed, 112 insertions(+), 75 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..8d1f9a9fc6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
             pirq = domain_irq_to_pirq(d, irq);
             info = pirq_info(d, pirq);
             evtchn = evtchn_from_port(d, info->evtchn);
-            local_irq_disable();
-            if ( spin_trylock(&evtchn->lock) )
+            if ( evtchn_read_trylock(evtchn) )
             {
                 pending = evtchn_is_pending(d, evtchn);
                 masked = evtchn_is_masked(d, evtchn);
-                spin_unlock(&evtchn->lock);
+                evtchn_read_unlock(evtchn);
             }
-            local_irq_enable();
             printk("d%d:%3d(%c%c%c)%c",
                    d->domain_id, pirq,
                    "-P?"[pending], "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..b4e83e0778 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_read_trylock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index cd4a2c0501..43e3520df6 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -50,6 +50,29 @@
 
 #define consumer_is_xen(e) (!!(e)->xen_consumer)
 
+/*
+ * Lock an event channel exclusively. This is allowed only when the channel is
+ * free or unbound either when taking or when releasing the lock, as any
+ * concurrent operation on the event channel using evtchn_read_trylock() will
+ * just assume the event channel is free or unbound at the moment when the
+ * evtchn_read_trylock() returns false.
+ */
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    write_lock(&evtchn->lock);
+
+    evtchn->old_state = evtchn->state;
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    /* Enforce lock discipline. */
+    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
+           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
+
+    write_unlock(&evtchn->lock);
+}
+
 /*
  * The function alloc_unbound_xen_event_channel() allows an arbitrary
  * notifier function to be specified. However, very few unique functions
@@ -133,7 +156,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        rwlock_init(&chn[i].lock);
     }
     return chn;
 }
@@ -255,7 +278,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int port;
     domid_t dom = alloc->dom;
     long rc;
-    unsigned long flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -271,14 +293,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -291,32 +313,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -326,7 +342,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int lport, rport = bind->remote_port;
     domid_t rdom = bind->remote_dom;
     long rc;
-    unsigned long flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -362,7 +377,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -379,7 +394,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -402,7 +417,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int virq = bind->virq, vcpu = bind->vcpu;
     int rc = 0;
-    unsigned long flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -440,14 +454,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -464,7 +478,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int port, vcpu = bind->vcpu;
     long rc = 0;
-    unsigned long flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -476,13 +489,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -526,7 +539,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq *info;
     int port = 0, pirq = bind->pirq;
     long rc;
-    unsigned long flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -559,14 +571,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -587,7 +599,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int port2;
     long rc = 0;
-    unsigned long flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -688,14 +699,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -703,9 +714,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -725,7 +736,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int rport, ret = 0;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -738,7 +748,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    evtchn_read_lock(lchn);
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -773,7 +783,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -806,9 +816,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -837,9 +849,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -849,7 +863,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -864,9 +877,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1068,15 +1083,17 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
+
+    evtchn_read_lock(evtchn);
+
     evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+
+    evtchn_read_unlock(evtchn);
 
     return 0;
 }
@@ -1156,15 +1173,17 @@ static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
     unsigned int port = set_priority->port;
     struct evtchn *chn;
     long ret;
-    unsigned long flags;
 
     if ( !port_is_valid(d, port) )
         return -EINVAL;
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
+
+    evtchn_read_lock(chn);
+
     ret = evtchn_port_set_priority(d, chn, set_priority->priority);
-    spin_unlock_irqrestore(&chn->lock, flags);
+
+    evtchn_read_unlock(chn);
 
     return ret;
 }
@@ -1332,7 +1351,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int port, rc;
-    unsigned long flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1345,14 +1363,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
    chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1388,7 +1406,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1403,7 +1420,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1413,7 +1431,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 2ed4be78f6..a0a85cdda8 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,21 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+static inline void evtchn_read_lock(struct evtchn *evtchn)
+{
+    read_lock(&evtchn->lock);
+}
+
+static inline bool evtchn_read_trylock(struct evtchn *evtchn)
+{
+    return read_trylock(&evtchn->lock);
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    read_unlock(&evtchn->lock);
+}
+
 static inline bool_t port_is_valid(struct domain *d, unsigned int p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
@@ -235,11 +250,12 @@ static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
     bool rc;
-    unsigned long flags;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
+    evtchn_read_lock(evtchn);
+
     rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+
+    evtchn_read_unlock(evtchn);
 
     return rc;
 }
@@ -252,12 +268,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
+        evtchn_read_lock(evtchn);
+
         if ( evtchn_usable(evtchn) )
             rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+
+        evtchn_read_unlock(evtchn);
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..66d8f058be 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    rwlock_t lock;
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
@@ -114,6 +114,9 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
+#ifndef NDEBUG
+    u8 old_state;      /* State when taking lock in write mode. */
+#endif
     u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
-- 
2.26.2

From nobody Fri Apr 26 14:41:15 2024
(output) from mailman id 22719.49117; Mon, 09 Nov 2020 16:38:33 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kcABd-0000wA-PB; Mon, 09 Nov 2020 16:38:33 +0000 Received: by outflank-mailman (input) for mailman id 22719; Mon, 09 Nov 2020 16:38:31 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kcABb-0000vD-LY for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 16:38:31 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 40bef7d9-7384-4f85-bd59-998c1815a481; Mon, 09 Nov 2020 16:38:30 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id B0655AD11; Mon, 9 Nov 2020 16:38:29 +0000 (UTC) Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kcABb-0000vD-LY for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 16:38:31 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 40bef7d9-7384-4f85-bd59-998c1815a481; Mon, 09 Nov 2020 16:38:30 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id B0655AD11; Mon, 9 Nov 2020 16:38:29 +0000 (UTC) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 40bef7d9-7384-4f85-bd59-998c1815a481 X-Virus-Scanned: by amavisd-new at test-mx.suse.de DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1; t=1604939909; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
    Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
    Daniel De Graaf
Subject: [PATCH v6 3/3] xen/evtchn: revert 52e1fc47abc3a0123
Date: Mon, 9 Nov 2020 17:38:26 +0100
Message-Id: <20201109163826.13035-4-jgross@suse.com>
In-Reply-To: <20201109163826.13035-1-jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
Content-Type: text/plain; charset="utf-8"

With the event channel lock no longer disabling interrupts, commit
52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
be reverted again.

Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
---
 xen/common/event_channel.c  |  6 ---
 xen/include/xsm/xsm.h       |  1 -
 xen/xsm/flask/avc.c         | 78 ++++---------------------------------
 xen/xsm/flask/hooks.c       | 10 -----
 xen/xsm/flask/include/avc.h |  2 -
 5 files changed, 7 insertions(+), 90 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 43e3520df6..eacd96b92f 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -740,12 +740,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
 
-    /*
-     * As the call further down needs to avoid allocations (due to running
-     * with IRQs off), give XSM a chance to pre-allocate if needed.
-     */
-    xsm_evtchn_send(XSM_HOOK, ld, NULL);
-
     lchn = evtchn_from_port(ld, lport);
 
     evtchn_read_lock(lchn);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 358ec13ba8..7bd03d8817 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -59,7 +59,6 @@ struct xsm_operations {
     int (*evtchn_interdomain) (struct domain *d1, struct evtchn *chn1,
                                struct domain *d2, struct evtchn *chn2);
     void (*evtchn_close_post) (struct evtchn *chn);
-    /* Note: Next hook may be called with 'chn' set to NULL. See call site. */
     int (*evtchn_send) (struct domain *d, struct evtchn *chn);
     int (*evtchn_status) (struct domain *d, struct evtchn *chn);
     int (*evtchn_reset) (struct domain *d1, struct domain *d2);
diff --git a/xen/xsm/flask/avc.c b/xen/xsm/flask/avc.c
index 2dfa1f4295..87ea38b7a0 100644
--- a/xen/xsm/flask/avc.c
+++ b/xen/xsm/flask/avc.c
@@ -24,9 +24,7 @@
 #include
 #include
 #include
-#include
 #include
-#include
 #include
 #include
 #include
@@ -343,79 +341,17 @@ static inline int avc_reclaim_node(void)
     return ecx;
 }
 
-static struct avc_node *new_node(void)
-{
-    struct avc_node *node = xzalloc(struct avc_node);
-
-    if ( node )
-    {
-        INIT_RCU_HEAD(&node->rhead);
-        INIT_HLIST_NODE(&node->list);
-        avc_cache_stats_incr(allocations);
-    }
-
-    return node;
-}
-
-/*
- * avc_has_perm_noaudit() may consume up to two nodes, which we may not be
- * able to obtain from the allocator at that point. Since the is merely
- * about caching earlier decisions, allow for (just) one pre-allocated node.
- */
-static DEFINE_PER_CPU(struct avc_node *, prealloc_node);
-
-void avc_prealloc(void)
-{
-    struct avc_node **prealloc = &this_cpu(prealloc_node);
-
-    if ( !*prealloc )
-        *prealloc = new_node();
-}
-
-static int cpu_callback(struct notifier_block *nfb, unsigned long action,
-                        void *hcpu)
-{
-    unsigned int cpu = (unsigned long)hcpu;
-    struct avc_node **prealloc = &per_cpu(prealloc_node, cpu);
-
-    if ( action == CPU_DEAD && *prealloc )
-    {
-        xfree(*prealloc);
-        *prealloc = NULL;
-        avc_cache_stats_incr(frees);
-    }
-
-    return NOTIFY_DONE;
-}
-
-static struct notifier_block cpu_nfb = {
-    .notifier_call = cpu_callback,
-    .priority = 99
-};
-
-static int __init cpu_nfb_init(void)
-{
-    register_cpu_notifier(&cpu_nfb);
-    return 0;
-}
-__initcall(cpu_nfb_init);
-
 static struct avc_node *avc_alloc_node(void)
 {
-    struct avc_node *node, **prealloc = &this_cpu(prealloc_node);
+    struct avc_node *node;
 
-    node = *prealloc;
-    *prealloc = NULL;
+    node = xzalloc(struct avc_node);
+    if (!node)
+        goto out;
 
-    if ( !node )
-    {
-        /* Must not call xmalloc() & Co with IRQs off. */
-        if ( !local_irq_is_enabled() )
-            goto out;
-        node = new_node();
-        if ( !node )
-            goto out;
-    }
+    INIT_RCU_HEAD(&node->rhead);
+    INIT_HLIST_NODE(&node->list);
+    avc_cache_stats_incr(allocations);
 
     atomic_inc(&avc_cache.active_nodes);
     if ( atomic_read(&avc_cache.active_nodes) > avc_cache_threshold )
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index de050cc9fe..19b0d9e3eb 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -281,16 +281,6 @@ static int flask_evtchn_send(struct domain *d, struct evtchn *chn)
 {
     int rc;
 
-    /*
-     * When called with non-NULL chn, memory allocation may not be permitted.
-     * Allow AVC to preallocate nodes as necessary upon early notification.
-     */
-    if ( !chn )
-    {
-        avc_prealloc();
-        return 0;
-    }
-
     switch ( chn->state )
     {
     case ECS_INTERDOMAIN:
diff --git a/xen/xsm/flask/include/avc.h b/xen/xsm/flask/include/avc.h
index 722919b762..c14bd07a2b 100644
--- a/xen/xsm/flask/include/avc.h
+++ b/xen/xsm/flask/include/avc.h
@@ -91,8 +91,6 @@ int avc_has_perm_noaudit(u32 ssid, u32 tsid, u16 tclass, u32 requested,
 int avc_has_perm(u32 ssid, u32 tsid, u16 tclass, u32 requested,
                  struct avc_audit_data *auditdata);
 
-void avc_prealloc(void);
-
 /* Exported to selinuxfs */
 struct xen_flask_hash_stats;
 int avc_get_hash_stats(struct xen_flask_hash_stats *arg);
-- 
2.26.2