From nobody Sun May 5 00:04:35 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, stable@vger.kernel.org, Julien Grall, Julien Grall
Subject: [PATCH v4 1/3] xen/events: reset affinity of 2-level event when tearing it down
Date: Sat, 6 Mar 2021 17:18:31 +0100
Message-Id: <20210306161833.4552-2-jgross@suse.com>
In-Reply-To: <20210306161833.4552-1-jgross@suse.com>
References: <20210306161833.4552-1-jgross@suse.com>

When creating a new event channel with 2-level events the affinity needs
to be reset initially in order to avoid using an old affinity from
earlier usage of the event channel port. So when tearing an event channel
down reset all affinity bits.

The same applies to the affinity when onlining a vcpu: all old affinity
settings for this vcpu must be reset. As percpu events get initialized
before the percpu event channel hook is called, resetting of the
affinities happens after offlining a vcpu (this works, as initial percpu
memory is zeroed out).

Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
---
V2:
- reset affinity when tearing down the event (Julien Grall)
---
 drivers/xen/events/events_2l.c       | 15 +++++++++++++++
 drivers/xen/events/events_base.c     |  1 +
 drivers/xen/events/events_internal.h |  8 ++++++++
 3 files changed, 24 insertions(+)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index da87f3a1e351..a7f413c5c190 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,6 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
 	return EVTCHN_2L_NR_CHANNELS;
 }
 
+static void evtchn_2l_remove(evtchn_port_t evtchn, unsigned int cpu)
+{
+	clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+}
+
 static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
 				  unsigned int old_cpu)
 {
@@ -355,9 +360,18 @@ static void evtchn_2l_resume(void)
 		EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
 }
 
+static int evtchn_2l_percpu_deinit(unsigned int cpu)
+{
+	memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
+			EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+
+	return 0;
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
 	.max_channels      = evtchn_2l_max_channels,
 	.nr_channels       = evtchn_2l_max_channels,
+	.remove            = evtchn_2l_remove,
 	.bind_to_cpu       = evtchn_2l_bind_to_cpu,
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
@@ -367,6 +381,7 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
 	.resume            = evtchn_2l_resume,
+	.percpu_deinit     = evtchn_2l_percpu_deinit,
 };
 
 void __init xen_evtchn_2l_init(void)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index adb7260e94b2..7e23808892a7 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -377,6 +377,7 @@ static int xen_irq_info_pirq_setup(unsigned irq,
 static void xen_irq_info_cleanup(struct irq_info *info)
 {
 	set_evtchn_to_irq(info->evtchn, -1);
+	xen_evtchn_port_remove(info->evtchn, info->cpu);
 	info->evtchn = 0;
 	channels_on_cpu_dec(info);
 }
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 0a97c0549db7..18a4090d0709 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -14,6 +14,7 @@ struct evtchn_ops {
 	unsigned (*nr_channels)(void);
 
 	int (*setup)(evtchn_port_t port);
+	void (*remove)(evtchn_port_t port, unsigned int cpu);
 	void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
 			    unsigned int old_cpu);
 
@@ -54,6 +55,13 @@ static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
 	return 0;
 }
 
+static inline void xen_evtchn_port_remove(evtchn_port_t evtchn,
+					  unsigned int cpu)
+{
+	if (evtchn_ops->remove)
+		evtchn_ops->remove(evtchn, cpu);
+}
+
 static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
 					       unsigned int cpu,
 					       unsigned int old_cpu)
-- 
2.26.2

From nobody Sun May 5 00:04:35 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, stable@vger.kernel.org, Julien Grall, Julien Grall
Subject: [PATCH v4 2/3] xen/events: don't unmask an event channel when an eoi is pending
Date: Sat, 6 Mar 2021 17:18:32 +0100
Message-Id: <20210306161833.4552-3-jgross@suse.com>
In-Reply-To: <20210306161833.4552-1-jgross@suse.com>
References: <20210306161833.4552-1-jgross@suse.com>

An event channel should be kept masked when an eoi is pending for it.
When being migrated to another cpu it might be unmasked, though.

In order to avoid this, keep three different flags for each event channel
to be able to distinguish "normal" masking/unmasking from eoi related
masking/unmasking and temporary masking. The event channel should only be
able to generate an interrupt if all flags are cleared.
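As an illustration only (not part of the patch), the intended combination
of the reason bits can be sketched as below. The authoritative logic is
do_mask()/do_unmask() in the diff; sketch_mask()/sketch_unmask() and their
port_mask/port_unmask callbacks are hypothetical stand-ins for the real
mask_evtchn()/unmask_evtchn() handling:

/* Sketch only: the port stays masked while any reason bit is set. */
#define EVT_MASK_REASON_EXPLICIT	0x01
#define EVT_MASK_REASON_TEMPORARY	0x02
#define EVT_MASK_REASON_EOI_PENDING	0x04

static void sketch_mask(unsigned char *mask_reason, unsigned char reason,
			void (*port_mask)(void))
{
	if (!*mask_reason)	/* first reason being set: really mask the port */
		port_mask();
	*mask_reason |= reason;
}

static void sketch_unmask(unsigned char *mask_reason, unsigned char reason,
			  void (*port_unmask)(void))
{
	*mask_reason &= ~reason;
	if (!*mask_reason)	/* last reason cleared: really unmask the port */
		port_unmask();
}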
Cc: stable@vger.kernel.org
Fixes: 54c9de89895e0a36047 ("xen/events: add a new late EOI evtchn framework")
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
Reviewed-by: Boris Ostrovsky
Tested-by: Ross Lagerwall
---
V2:
- introduce a lock around masking/unmasking
- merge patch 3 into this one (Jan Beulich)
V4:
- don't set eoi masking flag in lateeoi_mask_ack_dynirq()
---
 drivers/xen/events/events_2l.c       |   7 --
 drivers/xen/events/events_base.c     | 101 +++++++++++++++++++++------
 drivers/xen/events/events_fifo.c     |   7 --
 drivers/xen/events/events_internal.h |   6 --
 4 files changed, 80 insertions(+), 41 deletions(-)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index a7f413c5c190..b8f2f971c2f0 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -77,12 +77,6 @@ static bool evtchn_2l_is_pending(evtchn_port_t port)
 	return sync_test_bit(port, BM(&s->evtchn_pending[0]));
 }
 
-static bool evtchn_2l_test_and_set_mask(evtchn_port_t port)
-{
-	struct shared_info *s = HYPERVISOR_shared_info;
-	return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0]));
-}
-
 static void evtchn_2l_mask(evtchn_port_t port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -376,7 +370,6 @@ static const struct evtchn_ops evtchn_ops_2l = {
 	.clear_pending     = evtchn_2l_clear_pending,
 	.set_pending       = evtchn_2l_set_pending,
 	.is_pending        = evtchn_2l_is_pending,
-	.test_and_set_mask = evtchn_2l_test_and_set_mask,
 	.mask              = evtchn_2l_mask,
 	.unmask            = evtchn_2l_unmask,
 	.handle_events     = evtchn_2l_handle_events,
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 7e23808892a7..b27c012c86b5 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -98,13 +98,18 @@ struct irq_info {
 	short refcnt;
 	u8 spurious_cnt;
 	u8 is_accounted;
-	enum xen_irq_type type; /* type */
+	short type;		/* type: IRQT_* */
+	u8 mask_reason;		/* Why is event channel masked */
+#define EVT_MASK_REASON_EXPLICIT	0x01
+#define EVT_MASK_REASON_TEMPORARY	0x02
+#define EVT_MASK_REASON_EOI_PENDING	0x04
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
 	unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
 	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
 	u64 eoi_time;           /* Time in jiffies when to EOI. */
+	spinlock_t lock;
 
 	union {
 		unsigned short virq;
@@ -154,6 +159,7 @@ static DEFINE_RWLOCK(evtchn_rwlock);
  *   evtchn_rwlock
  *     IRQ-desc lock
  *       percpu eoi_list_lock
+ *         irq_info->lock
  */
 
 static LIST_HEAD(xen_irq_list_head);
@@ -304,6 +310,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 	info->irq = irq;
 	info->evtchn = evtchn;
 	info->cpu = cpu;
+	info->mask_reason = EVT_MASK_REASON_EXPLICIT;
+	spin_lock_init(&info->lock);
 
 	ret = set_evtchn_to_irq(evtchn, irq);
 	if (ret < 0)
@@ -459,6 +467,34 @@ unsigned int cpu_from_evtchn(evtchn_port_t evtchn)
 	return ret;
 }
 
+static void do_mask(struct irq_info *info, u8 reason)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	if (!info->mask_reason)
+		mask_evtchn(info->evtchn);
+
+	info->mask_reason |= reason;
+
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
+static void do_unmask(struct irq_info *info, u8 reason)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	info->mask_reason &= ~reason;
+
+	if (!info->mask_reason)
+		unmask_evtchn(info->evtchn);
+
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
 #ifdef CONFIG_X86
 static bool pirq_check_eoi_map(unsigned irq)
 {
@@ -605,7 +641,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
 	}
 
 	info->eoi_time = 0;
-	unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
 
 static void xen_irq_lateeoi_worker(struct work_struct *work)
@@ -850,7 +886,8 @@ static unsigned int __startup_pirq(unsigned int irq)
 		goto err;
 
 out:
-	unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_EXPLICIT);
+
 	eoi_pirq(irq_get_irq_data(irq));
 
 	return 0;
@@ -877,7 +914,7 @@ static void shutdown_pirq(struct irq_data *data)
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	mask_evtchn(evtchn);
+	do_mask(info, EVT_MASK_REASON_EXPLICIT);
 	xen_evtchn_close(evtchn);
 	xen_irq_info_cleanup(info);
 }
@@ -1721,10 +1758,10 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
 }
 
 /* Rebind an evtchn so that it gets delivered to a specific cpu */
-static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
+static int xen_rebind_evtchn_to_cpu(struct irq_info *info, unsigned int tcpu)
 {
 	struct evtchn_bind_vcpu bind_vcpu;
-	int masked;
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return -1;
@@ -1740,7 +1777,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
 	 * Mask the event while changing the VCPU binding to prevent
 	 * it being delivered on an unexpected VCPU.
 	 */
-	masked = test_and_set_mask(evtchn);
+	do_mask(info, EVT_MASK_REASON_TEMPORARY);
 
 	/*
 	 * If this fails, it usually just indicates that we're dealing with a
@@ -1750,8 +1787,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
 	if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
 		bind_evtchn_to_cpu(evtchn, tcpu, false);
 
-	if (!masked)
-		unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
 	return 0;
 }
@@ -1790,7 +1826,7 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 	unsigned int tcpu = select_target_cpu(dest);
 	int ret;
 
-	ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
+	ret = xen_rebind_evtchn_to_cpu(info_for_irq(data->irq), tcpu);
 	if (!ret)
 		irq_data_update_effective_affinity(data, cpumask_of(tcpu));
 
@@ -1799,18 +1835,20 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 
 static void enable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+		do_unmask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void disable_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		mask_evtchn(evtchn);
+		do_mask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void ack_dynirq(struct irq_data *data)
@@ -1829,18 +1867,39 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static void lateeoi_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
+		clear_evtchn(evtchn);
+	}
+}
+
+static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+{
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+	if (VALID_EVTCHN(evtchn)) {
+		do_mask(info, EVT_MASK_REASON_EXPLICIT);
+		clear_evtchn(evtchn);
+	}
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-	int masked;
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return 0;
 
-	masked = test_and_set_mask(evtchn);
+	do_mask(info, EVT_MASK_REASON_TEMPORARY);
 	set_evtchn(evtchn);
-	if (!masked)
-		unmask_evtchn(evtchn);
+	do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
 	return 1;
 }
@@ -2054,8 +2113,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
 	.irq_mask		= disable_dynirq,
 	.irq_unmask		= enable_dynirq,
 
-	.irq_ack		= mask_ack_dynirq,
-	.irq_mask_ack		= mask_ack_dynirq,
+	.irq_ack		= lateeoi_ack_dynirq,
+	.irq_mask_ack		= lateeoi_mask_ack_dynirq,
 
 	.irq_set_affinity	= set_affinity_irq,
 	.irq_retrigger		= retrigger_dynirq,
diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index b234f1766810..ad9fe51d3fb3 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -209,12 +209,6 @@ static bool evtchn_fifo_is_pending(evtchn_port_t port)
 	return sync_test_bit(EVTCHN_FIFO_BIT(PENDING, word), BM(word));
 }
 
-static bool evtchn_fifo_test_and_set_mask(evtchn_port_t port)
-{
-	event_word_t *word = event_word_from_port(port);
-	return sync_test_and_set_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word));
-}
-
 static void evtchn_fifo_mask(evtchn_port_t port)
 {
 	event_word_t *word = event_word_from_port(port);
@@ -423,7 +417,6 @@ static const struct evtchn_ops evtchn_ops_fifo = {
 	.clear_pending     = evtchn_fifo_clear_pending,
 	.set_pending       = evtchn_fifo_set_pending,
 	.is_pending        = evtchn_fifo_is_pending,
-	.test_and_set_mask = evtchn_fifo_test_and_set_mask,
 	.mask              = evtchn_fifo_mask,
 	.unmask            = evtchn_fifo_unmask,
 	.handle_events     = evtchn_fifo_handle_events,
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 18a4090d0709..4d3398eff9cd 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -21,7 +21,6 @@ struct evtchn_ops {
 	void (*clear_pending)(evtchn_port_t port);
 	void (*set_pending)(evtchn_port_t port);
 	bool (*is_pending)(evtchn_port_t port);
-	bool (*test_and_set_mask)(evtchn_port_t port);
 	void (*mask)(evtchn_port_t port);
 	void (*unmask)(evtchn_port_t port);
 
@@ -84,11 +83,6 @@ static inline bool test_evtchn(evtchn_port_t port)
 	return evtchn_ops->is_pending(port);
 }
 
-static inline bool test_and_set_mask(evtchn_port_t port)
-{
-	return evtchn_ops->test_and_set_mask(port);
-}
-
 static inline void mask_evtchn(evtchn_port_t port)
 {
 	return evtchn_ops->mask(port);
-- 
2.26.2

From nobody Sun May 5 00:04:35 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, stable@vger.kernel.org, Julien Grall, Julien Grall
Subject: [PATCH v4 3/3] xen/events: avoid handling the same event on two cpus at the same time
Date: Sat, 6 Mar 2021 17:18:33 +0100
Message-Id: <20210306161833.4552-4-jgross@suse.com>
In-Reply-To: <20210306161833.4552-1-jgross@suse.com>
References: <20210306161833.4552-1-jgross@suse.com>

When changing the cpu affinity of an event it can happen today that
(with some unlucky timing) the same event will be handled on the old
and the new cpu at the same time.

Avoid that by adding an "event active" flag to the per-event data and
calling the handler only if this flag isn't set.

Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
---
V2:
- new patch
V3:
- use common helper for end of handler action (Julien Grall)
- move setting is_active to 0 for lateeoi (Boris Ostrovsky)
---
 drivers/xen/events/events_base.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index b27c012c86b5..8236e2364eeb 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -103,6 +103,7 @@ struct irq_info {
 #define EVT_MASK_REASON_EXPLICIT	0x01
 #define EVT_MASK_REASON_TEMPORARY	0x02
 #define EVT_MASK_REASON_EOI_PENDING	0x04
+	u8 is_active;		/* Is event just being handled? */
 	unsigned irq;
 	evtchn_port_t evtchn;   /* event channel */
 	unsigned short cpu;     /* cpu bound */
@@ -810,6 +811,12 @@ static void xen_evtchn_close(evtchn_port_t port)
 		BUG();
 }
 
+static void event_handler_exit(struct irq_info *info)
+{
+	smp_store_release(&info->is_active, 0);
+	clear_evtchn(info->evtchn);
+}
+
 static void pirq_query_unmask(int irq)
 {
 	struct physdev_irq_status_query irq_status;
@@ -828,14 +835,15 @@ static void pirq_query_unmask(int irq)
 
 static void eoi_pirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	clear_evtchn(evtchn);
+	event_handler_exit(info);
 
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
@@ -1666,6 +1674,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	}
 
 	info = info_for_irq(irq);
+	if (xchg_acquire(&info->is_active, 1))
+		return;
 
 	dev = (info->type == IRQT_EVTCHN) ? info->u.interdomain : NULL;
 	if (dev)
@@ -1853,12 +1863,11 @@ static void disable_dynirq(struct irq_data *data)
 
 static void ack_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-
-	if (!VALID_EVTCHN(evtchn))
-		return;
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	clear_evtchn(evtchn);
+	if (VALID_EVTCHN(evtchn))
+		event_handler_exit(info);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
@@ -1874,7 +1883,7 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 	}
 }
 
@@ -1885,7 +1894,7 @@ static void lateeoi_mask_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EXPLICIT);
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 	}
 }
 
@@ -1998,10 +2007,11 @@ static void restore_cpu_ipis(unsigned int cpu)
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(irq);
+	struct irq_info *info = info_for_irq(irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 void xen_set_irq_pending(int irq)
-- 
2.26.2
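For illustration only (not part of the patches above): a minimal sketch of
the "event active" handshake introduced in the last patch, with a
hypothetical run_handler() standing in for the real event processing. The
actual code uses the same xchg_acquire()/smp_store_release() pairing in
handle_irq_for_port() and event_handler_exit().

static u8 is_active;

static void run_handler(void);

static void sketch_handle_event(void)
{
	/* Only one cpu wins the flag; a concurrent handler drops the event. */
	if (xchg_acquire(&is_active, 1))
		return;

	run_handler();

	/* Pairs with the xchg_acquire() above; allows the event to be handled again. */
	smp_store_release(&is_active, 0);
}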