From nobody Mon Feb  9 09:16:17 2026
From: Juergen Gross
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, stable@vger.kernel.org, Julien Grall
Subject: [PATCH v3 3/8] xen/events: avoid handling the same event on two cpus at the same time
Date: Fri, 19 Feb 2021 16:40:25 +0100
Message-Id: <20210219154030.10892-4-jgross@suse.com>
In-Reply-To: <20210219154030.10892-1-jgross@suse.com>
References: <20210219154030.10892-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

When changing the cpu affinity of an event it can happen today that
(with some unlucky timing) the same event will be handled on the old
and the new cpu at the same time.

Avoid that by adding an "event active" flag to the per-event data and
calling the handler only if this flag isn't set.

Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
---
V2:
- new patch
V3:
- use common helper for end of handler action (Julien Grall)
- move setting is_active to 0 for lateeoi (Boris Ostrovsky)
---
 drivers/xen/events/events_base.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index e157e7506830..9d7ba7623510 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -102,6 +102,7 @@ struct irq_info {
 #define EVT_MASK_REASON_EXPLICIT	0x01
 #define EVT_MASK_REASON_TEMPORARY	0x02
 #define EVT_MASK_REASON_EOI_PENDING	0x04
+	u8 is_active;		/* Is event just being handled? */
 	unsigned irq;
 	evtchn_port_t evtchn;	/* event channel */
 	unsigned short cpu;	/* cpu bound */
@@ -791,6 +792,12 @@ static void xen_evtchn_close(evtchn_port_t port)
 		BUG();
 }
 
+static void event_handler_exit(struct irq_info *info)
+{
+	smp_store_release(&info->is_active, 0);
+	clear_evtchn(info->evtchn);
+}
+
 static void pirq_query_unmask(int irq)
 {
 	struct physdev_irq_status_query irq_status;
@@ -809,14 +816,15 @@ static void pirq_query_unmask(int irq)
 
 static void eoi_pirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	clear_evtchn(evtchn);
+	event_handler_exit(info);
 
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
@@ -1640,6 +1648,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	}
 
 	info = info_for_irq(irq);
+	if (xchg_acquire(&info->is_active, 1))
+		return;
 
 	if (ctrl->defer_eoi) {
 		info->eoi_cpu = smp_processor_id();
@@ -1823,12 +1833,11 @@ static void disable_dynirq(struct irq_data *data)
 
 static void ack_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-
-	if (!VALID_EVTCHN(evtchn))
-		return;
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	clear_evtchn(evtchn);
+	if (VALID_EVTCHN(evtchn))
+		event_handler_exit(info);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
@@ -1844,7 +1853,7 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 	}
 }
 
@@ -1856,7 +1865,7 @@ static void lateeoi_mask_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EXPLICIT |
			      EVT_MASK_REASON_EOI_PENDING);
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 	}
 }
 
@@ -1969,10 +1978,11 @@ static void restore_cpu_ipis(unsigned int cpu)
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(irq);
+	struct irq_info *info = info_for_irq(irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 
 void xen_set_irq_pending(int irq)
-- 
2.26.2