From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Sergey Dyasli, Stefano Stabellini, Julien Grall,
    Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap, Dario Faggioli,
    Jan Beulich
Subject: [PATCH v3 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling
Date: Thu, 14 May 2020 17:36:12 +0200
Message-Id: <20200514153614.2240-2-jgross@suse.com>
In-Reply-To: <20200514153614.2240-1-jgross@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>

With RCU barriers moved from tasklets to normal RCU processing, cpu
offlining in core scheduling might deadlock, as cpu synchronization is
required by RCU processing and core scheduling concurrently.

Fix that by bailing out of the core scheduling synchronization in case
of pending RCU work. Additionally the RCU softirq is now required to
have a higher priority than the scheduling softirqs, in order to do RCU
processing before entering the scheduler again: bailing out of the core
scheduling synchronization requires raising another softirq
(SCHED_SLAVE), which would otherwise bypass the RCU processing again.

Reported-by: Sergey Dyasli
Tested-by: Sergey Dyasli
Signed-off-by: Juergen Gross
Acked-by: Dario Faggioli
---
V2:
- add BUILD_BUG_ON() and comment (Dario Faggioli)
---
 xen/common/sched/core.c   | 13 ++++++++++---
 xen/include/xen/softirq.h |  2 +-
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d94b95285f..5df66cbf9b 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2457,13 +2457,20 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
             v = unit2vcpu_cpu(prev, cpu);
         }
         /*
-         * Coming from idle might need to do tasklet work.
+         * Check for any work to be done which might need cpu synchronization.
+         * This is either pending RCU work, or tasklet work when coming from
+         * idle. It is mandatory that RCU softirqs are of higher priority
+         * than scheduling ones as otherwise a deadlock might occur.
          * In order to avoid deadlocks we can't do that here, but have to
-         * continue the idle loop.
+         * schedule the previous vcpu again, which will lead to the desired
+         * processing to be done.
          * Undo the rendezvous_in_cnt decrement and schedule another call of
          * sched_slave().
          */
-        if ( is_idle_unit(prev) && sched_tasklet_check_cpu(cpu) )
+        BUILD_BUG_ON(RCU_SOFTIRQ > SCHED_SLAVE_SOFTIRQ ||
+                     RCU_SOFTIRQ > SCHEDULE_SOFTIRQ);
+        if ( rcu_pending(cpu) ||
+             (is_idle_unit(prev) && sched_tasklet_check_cpu(cpu)) )
         {
             struct vcpu *vprev = current;
 
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index b4724f5c8b..1f6c4783da 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -4,10 +4,10 @@
 /* Low-latency softirqs come first in the following list. */
 enum {
     TIMER_SOFTIRQ = 0,
+    RCU_SOFTIRQ,
     SCHED_SLAVE_SOFTIRQ,
     SCHEDULE_SOFTIRQ,
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
-    RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
-- 
2.26.1
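
The enum reordering works because softirq priority in Xen is implicit in
the enum order: pending softirqs are serviced lowest-numbered bit first.
Below is a minimal sketch of that dispatch pattern, for illustration only;
it is not Xen's actual do_softirq(), and the pending bitmap, the
handlers[] table and the use of ffs() are assumptions of the sketch.

/* Sketch of priority-by-enum-order softirq dispatch; illustration only,
 * not Xen's actual do_softirq().  'pending' and 'handlers' are invented
 * for this example. */
#include <strings.h>                    /* ffs() */

#define NR_SOFTIRQS_SKETCH 6

static unsigned int pending;            /* one bit per softirq */
static void (*handlers[NR_SOFTIRQS_SKETCH])(void);

static void do_softirq_sketch(void)
{
    int bit;

    /* ffs() returns the lowest set bit, so a softirq declared earlier
     * in the enum is always serviced before a later one. */
    while ( (bit = ffs((int)pending)) != 0 )
    {
        pending &= ~(1u << (bit - 1));
        if ( handlers[bit - 1] )
            handlers[bit - 1]();
    }
}

With RCU_SOFTIRQ numbered below SCHED_SLAVE_SOFTIRQ, a loop of this shape
always drains RCU work before the SCHED_SLAVE handler can re-enter the
scheduler, which is exactly the property the BUILD_BUG_ON() pins down.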

From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Jan Beulich, Dario Faggioli
Subject: [PATCH v3 2/3] xen/sched: don't call sync_vcpu_execstate() in sched_unit_migrate_finish()
Date: Thu, 14 May 2020 17:36:13 +0200
Message-Id: <20200514153614.2240-3-jgross@suse.com>
In-Reply-To: <20200514153614.2240-1-jgross@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>

With support of core scheduling, sched_unit_migrate_finish() gained a
call of sync_vcpu_execstate(), as it was believed to be called as a
result of vcpu migration in any case.

In case of migrating a vcpu away from a physical cpu for a short period
of time only, without the vcpu ever being scheduled on the selected new
cpu, this might not be true, so drop the call and let the lazy state
syncing do its job.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V2:
- new patch
---
 xen/common/sched/core.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 5df66cbf9b..cb49a8bc02 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1078,12 +1078,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
     sched_spin_unlock_double(old_lock, new_lock, flags);
 
     if ( old_cpu != new_cpu )
-    {
-        /* Vcpus are moved to other pcpus, commit their states to memory. */
-        for_each_sched_unit_vcpu ( unit, v )
-            sync_vcpu_execstate(v);
         sched_move_irqs(unit);
-    }
 
     /* Wake on new CPU. */
     for_each_sched_unit_vcpu ( unit, v )
-- 
2.26.1
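
Dropping the eager sync relies on lazy state syncing: a pcpu keeps the
register state of the vcpu that last ran on it until that state is
actually needed, either because a different vcpu is scheduled there or
because another cpu requests the state via IPI. A rough sketch of that
idea follows, under invented names (lazy_owner[], sync_cpu_state(),
schedule_on() and CPU_CLEAN are not Xen APIs).

/* Sketch of lazy state syncing; illustration only, all names invented. */
#include <stddef.h>

#define NR_CPUS_SKETCH  4
#define CPU_CLEAN       (-1)

struct vcpu_sketch {
    int dirty_cpu;                      /* cpu still holding our state */
};

/* State a pcpu still holds for the vcpu that last ran there. */
static struct vcpu_sketch *lazy_owner[NR_CPUS_SKETCH];

/* Commit the state left behind on @cpu (stand-in for the flush IPI). */
static void sync_cpu_state(int cpu)
{
    if ( lazy_owner[cpu] )
    {
        lazy_owner[cpu]->dirty_cpu = CPU_CLEAN;
        lazy_owner[cpu] = NULL;
    }
}

/* Only when @v actually runs on @cpu does stale state need flushing;
 * migrating @v without ever running it there requires no sync at all. */
static void schedule_on(struct vcpu_sketch *v, int cpu)
{
    if ( lazy_owner[cpu] && lazy_owner[cpu] != v )
        sync_cpu_state(cpu);
    lazy_owner[cpu] = v;
    v->dirty_cpu = cpu;
}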

From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
    Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich,
    Roger Pau Monné
Subject: [PATCH v3 3/3] xen/sched: fix latent races accessing vcpu->dirty_cpu
Date: Thu, 14 May 2020 17:36:14 +0200
Message-Id: <20200514153614.2240-4-jgross@suse.com>
In-Reply-To: <20200514153614.2240-1-jgross@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>

The dirty_cpu field of struct vcpu denotes which cpu still holds data
of a vcpu. All accesses to this field should be atomic in case the vcpu
could just be running, as it is accessed without any lock held in most
cases. Especially sync_local_execstate() and context_switch() running
concurrently for the same vcpu risk failing.

There are some instances where accesses are not done atomically, and
even worse ones where multiple accesses are done when a single one
would be mandated. Correct that in order to avoid potential problems.

Add some assertions to verify dirty_cpu is handled properly.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
V2:
- convert all accesses to v->dirty_cpu to atomic ones (Jan Beulich)
- drop cast (Julien Grall)
V3:
- drop atomic access in vcpu_create() (Jan Beulich)
---
 xen/arch/x86/domain.c   | 16 +++++++++++-----
 xen/common/keyhandler.c |  2 +-
 xen/include/xen/sched.h |  2 +-
 3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a4428190d5..2e5717b983 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -183,7 +183,7 @@ void startup_cpu_idle_loop(void)
 
     ASSERT(is_idle_vcpu(v));
     cpumask_set_cpu(v->processor, v->domain->dirty_cpumask);
-    v->dirty_cpu = v->processor;
+    write_atomic(&v->dirty_cpu, v->processor);
 
     reset_stack_and_jump(idle_loop);
 }
@@ -1769,6 +1769,7 @@ static void __context_switch(void)
 
     if ( !is_idle_domain(pd) )
     {
+        ASSERT(read_atomic(&p->dirty_cpu) == cpu);
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
         pd->arch.ctxt_switch->from(p);
@@ -1832,7 +1833,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 {
     unsigned int cpu = smp_processor_id();
     const struct domain *prevd = prev->domain, *nextd = next->domain;
-    unsigned int dirty_cpu = next->dirty_cpu;
+    unsigned int dirty_cpu = read_atomic(&next->dirty_cpu);
 
     ASSERT(prev != next);
     ASSERT(local_irq_is_enabled());
@@ -1844,6 +1845,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
         flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
+        ASSERT(!vcpu_cpu_dirty(next));
     }
 
     _update_runstate_area(prev);
@@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
 
 void sync_vcpu_execstate(struct vcpu *v)
 {
-    if ( v->dirty_cpu == smp_processor_id() )
+    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
+
+    if ( dirty_cpu == smp_processor_id() )
         sync_local_execstate();
-    else if ( vcpu_cpu_dirty(v) )
+    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
-        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
+        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
     }
+    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
+           read_atomic(&v->dirty_cpu) != dirty_cpu);
 }
 
 static int relinquish_memory(
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 87bd145374..68364e987d 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -316,7 +316,7 @@ static void dump_domains(unsigned char key)
                    vcpu_info(v, evtchn_upcall_pending),
                    !vcpu_event_delivery_is_enabled(v));
         if ( vcpu_cpu_dirty(v) )
-            printk("dirty_cpu=%u", v->dirty_cpu);
+            printk("dirty_cpu=%u", read_atomic(&v->dirty_cpu));
         printk("\n");
         printk("    pause_count=%d pause_flags=%lx\n",
                atomic_read(&v->pause_count), v->pause_flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 6101761d25..ac53519d7f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
 
 static inline bool vcpu_cpu_dirty(const struct vcpu *v)
 {
-    return is_vcpu_dirty_cpu(v->dirty_cpu);
+    return is_vcpu_dirty_cpu(read_atomic(&v->dirty_cpu));
 }
 
 void vcpu_block(void);
-- 
2.26.1
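
The core of the fix in sync_vcpu_execstate() is the read-once pattern:
snapshot the racy field into a local and base every subsequent decision
on that snapshot, so a concurrently running vcpu clearing dirty_cpu
cannot make two reads disagree. A compact sketch of the pattern, with
C11 atomics standing in for Xen's read_atomic(); all names here are
invented for illustration.

/* Sketch of the read-once pattern applied in sync_vcpu_execstate();
 * illustration only, not Xen code. */
#include <stdatomic.h>

#define DIRTY_CPU_CLEAN  (~0u)

struct vcpu_sketch {
    _Atomic unsigned int dirty_cpu;
};

static void sync_vcpu_sketch(struct vcpu_sketch *v, unsigned int this_cpu)
{
    /* Snapshot the racy field exactly once: with two separate reads
     * (one for the comparison, one for the flush), a concurrently
     * running vcpu could clear dirty_cpu in between, and the two reads
     * would observe different values. */
    unsigned int dirty_cpu = atomic_load_explicit(&v->dirty_cpu,
                                                  memory_order_relaxed);

    if ( dirty_cpu == this_cpu )
        ;                   /* sync locally */
    else if ( dirty_cpu != DIRTY_CPU_CLEAN )
        ;                   /* IPI 'dirty_cpu', using the snapshot only */
}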