From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Fri, 4 Oct 2019 08:40:10 +0200
Message-Id: <20191004064010.25646-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH] xen/sched: fix locking in sched_tick_[suspend|resume]()

sched_tick_suspend() and sched_tick_resume() should not call the
scheduler specific timer handlers in case the cpu they are running on
is just being moved to or from a cpupool.

Use a new percpu lock for that purpose.

Reported-by: Sergey Dyasli
Signed-off-by: Juergen Gross
---
To be applied on top of my core scheduling series.
---
 xen/common/schedule.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 217fcb09ce..744f8cb5db 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -68,6 +68,9 @@ cpumask_t sched_res_mask;
 /* Common lock for free cpus. */
 static DEFINE_SPINLOCK(sched_free_cpu_lock);
 
+/* Lock for guarding per-scheduler calls against scheduler changes on a cpu. */
+static DEFINE_PER_CPU(spinlock_t, sched_cpu_lock);
+
 /* Various timer handlers. */
 static void s_timer_fn(void *unused);
 static void vcpu_periodic_timer_fn(void *data);
@@ -2472,6 +2475,8 @@ static int cpu_schedule_up(unsigned int cpu)
     if ( sr == NULL )
         return -ENOMEM;
 
+    spin_lock_init(&per_cpu(sched_cpu_lock, cpu));
+
     sr->master_cpu = cpu;
     cpumask_copy(sr->cpus, cpumask_of(cpu));
     set_sched_res(cpu, sr);
@@ -2763,11 +2768,14 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     struct scheduler *new_ops = c->sched;
     struct sched_resource *sr;
     spinlock_t *old_lock, *new_lock;
+    spinlock_t *cpu_lock = &per_cpu(sched_cpu_lock, cpu);
     unsigned long flags;
     int ret = 0;
 
     rcu_read_lock(&sched_res_rculock);
 
+    spin_lock(cpu_lock);
+
     sr = get_sched_res(cpu);
 
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
@@ -2879,6 +2887,8 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
 
 out:
+    spin_unlock(cpu_lock);
+
     rcu_read_unlock(&sched_res_rculock);
 
     return ret;
@@ -2897,12 +2907,15 @@ int schedule_cpu_rm(unsigned int cpu)
     struct sched_unit *unit;
     struct scheduler *old_ops;
     spinlock_t *old_lock;
+    spinlock_t *cpu_lock = &per_cpu(sched_cpu_lock, cpu);
     unsigned long flags;
     int idx, ret = -ENOMEM;
     unsigned int cpu_iter;
 
     rcu_read_lock(&sched_res_rculock);
 
+    spin_lock(cpu_lock);
+
     sr = get_sched_res(cpu);
     old_ops = sr->scheduler;
 
@@ -3004,6 +3017,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sr->cpupool = NULL;
 
 out:
+    spin_unlock(cpu_lock);
+
     rcu_read_unlock(&sched_res_rculock);
     xfree(sr_new);
 
@@ -3084,11 +3099,17 @@ void sched_tick_suspend(void)
 {
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();
+    spinlock_t *lock = &per_cpu(sched_cpu_lock, cpu);
 
     rcu_read_lock(&sched_res_rculock);
 
+    spin_lock(lock);
+
     sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_suspend(sched, cpu);
+
+    spin_unlock(lock);
+
     rcu_idle_enter(cpu);
     rcu_idle_timer_start();
 
@@ -3099,14 +3120,20 @@ void sched_tick_resume(void)
 {
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();
+    spinlock_t *lock = &per_cpu(sched_cpu_lock, cpu);
 
     rcu_read_lock(&sched_res_rculock);
 
     rcu_idle_timer_stop();
     rcu_idle_exit(cpu);
+
+    spin_lock(lock);
+
     sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_resume(sched, cpu);
 
+    spin_unlock(lock);
+
     rcu_read_unlock(&sched_res_rculock);
 }
 
-- 
2.16.4