From: Juergen Gross
To: xen-devel@lists.xenproject.org
Subject: [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling
Date: Thu, 30 Apr 2020 17:15:57 +0200
Message-Id: <20200430151559.1464-2-jgross@suse.com>
In-Reply-To: <20200430151559.1464-1-jgross@suse.com>
References: <20200430151559.1464-1-jgross@suse.com>
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap, Dario Faggioli, Jan Beulich

With RCU barriers moved from tasklets to normal RCU processing, cpu
offlining in core scheduling might deadlock, as RCU processing and core
scheduling both require cpu synchronization concurrently. Fix that by
bailing out of the core scheduling synchronization in case of pending
RCU work.

Additionally, the RCU softirq now needs to have a higher priority than
the scheduling softirqs in order for RCU processing to happen before the
scheduler is entered again: bailing out of the core scheduling
synchronization raises another softirq, SCHED_SLAVE, which would
otherwise bypass RCU processing again.
Reported-by: Sergey Dyasli
Tested-by: Sergey Dyasli
Signed-off-by: Juergen Gross
Acked-by: Dario Faggioli
---
 xen/common/sched/core.c   | 10 +++++++---
 xen/include/xen/softirq.h |  2 +-
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d94b95285f..a099e37b0f 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2457,13 +2457,17 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
             v = unit2vcpu_cpu(prev, cpu);
         }
         /*
-         * Coming from idle might need to do tasklet work.
+         * Check for any work to be done which might need cpu synchronization.
+         * This is either pending RCU work, or tasklet work when coming from
+         * idle.
          * In order to avoid deadlocks we can't do that here, but have to
-         * continue the idle loop.
+         * schedule the previous vcpu again, which will lead to the desired
+         * processing to be done.
          * Undo the rendezvous_in_cnt decrement and schedule another call of
          * sched_slave().
          */
-        if ( is_idle_unit(prev) && sched_tasklet_check_cpu(cpu) )
+        if ( rcu_pending(cpu) ||
+             (is_idle_unit(prev) && sched_tasklet_check_cpu(cpu)) )
         {
             struct vcpu *vprev = current;

diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index b4724f5c8b..1f6c4783da 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -4,10 +4,10 @@
 /* Low-latency softirqs come first in the following list. */
 enum {
     TIMER_SOFTIRQ = 0,
+    RCU_SOFTIRQ,
     SCHED_SLAVE_SOFTIRQ,
     SCHEDULE_SOFTIRQ,
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
-    RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
--
2.16.4