From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Tue, 28 May 2019 12:33:04 +0200
Message-Id: <20190528103313.1343-52-jgross@suse.com>
In-Reply-To: <20190528103313.1343-1-jgross@suse.com>
References: <20190528103313.1343-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 51/60] xen/sched: use one schedule lock for all free cpus

In order to prepare for always using cpu scheduling for free cpus,
regardless of the scheduling granularity of other cpupools, use a single
fixed lock for all free cpus, shared by all schedulers. This will allow
moving any number of free cpus to a cpupool while guarded by only one
lock.
This requires dropping the ASSERTs regarding the lock in some schedulers.

Signed-off-by: Juergen Gross
---
 xen/common/sched_credit.c | 9 ---------
 xen/common/sched_null.c   | 7 -------
 xen/common/schedule.c     | 7 +++++--
 3 files changed, 5 insertions(+), 18 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 969ac4cc20..0b14fa9e11 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -616,15 +616,6 @@ csched_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     unsigned long flags;
     struct csched_private *prv = CSCHED_PRIV(ops);
-    struct sched_resource *sd = get_sched_res(cpu);
-
-    /*
-     * This is called either during during boot, resume or hotplug, in
-     * case Credit1 is the scheduler chosen at boot. In such cases, the
-     * scheduler lock for cpu is already pointing to the default per-cpu
-     * spinlock, as Credit1 needs it, so there is no remapping to be done.
-     */
-    ASSERT(sd->schedule_lock == &sd->_lock && !spin_is_locked(&sd->_lock));
 
     spin_lock_irqsave(&prv->lock, flags);
     init_pdata(prv, pdata, cpu);
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index e9336a2948..1499c82422 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -169,17 +169,10 @@ static void init_pdata(struct null_private *prv, unsigned int cpu)
 
 static void null_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     struct null_private *prv = null_priv(ops);
-    struct sched_resource *sd = get_sched_res(cpu);
 
     /* alloc_pdata is not implemented, so we want this to be NULL. */
     ASSERT(!pdata);
 
-    /*
-     * The scheduler lock points already to the default per-cpu spinlock,
-     * so there is no remapping to be done.
-     */
-    ASSERT(sd->schedule_lock == &sd->_lock && !spin_is_locked(&sd->_lock));
-
     init_pdata(prv, cpu);
 }
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 7fd83ffd4e..44364ff4d2 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -61,6 +61,9 @@ unsigned int sched_granularity = 1;
 bool sched_disable_smt_switching;
 const cpumask_t *sched_res_mask = &cpumask_all;
 
+/* Common lock for free cpus. */
+static DEFINE_SPINLOCK(sched_free_cpu_lock);
+
 /* Various timer handlers. */
 static void s_timer_fn(void *unused);
 static void vcpu_periodic_timer_fn(void *data);
@@ -2149,7 +2152,7 @@ static int cpu_schedule_up(unsigned int cpu)
 
     sd->scheduler = &ops;
     spin_lock_init(&sd->_lock);
-    sd->schedule_lock = &sd->_lock;
+    sd->schedule_lock = &sched_free_cpu_lock;
     init_timer(&sd->s_timer, s_timer_fn, NULL, cpu);
     atomic_set(&sd->urgent_count, 0);
 
@@ -2488,7 +2491,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
      * taking it, finds all the initializations we've done above in place.
      */
     smp_mb();
-    sd->schedule_lock = new_lock;
+    sd->schedule_lock = c ? new_lock : &sched_free_cpu_lock;
 
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock_irq(old_lock);
-- 
2.16.4
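
For context (not part of the patch): repointing sd->schedule_lock is safe
because every lock taker goes through the pointer and re-checks it after
acquiring. A minimal sketch of that pattern, assuming Xen's internal
spinlock API and get_sched_res(); the helper name sketch_pcpu_schedule_lock
and the omission of irq handling are simplifications for illustration:

/*
 * Sketch of the acquire-and-recheck pattern used by the pcpu schedule
 * lock helpers; not the real implementation.
 */
static spinlock_t *sketch_pcpu_schedule_lock(unsigned int cpu)
{
    spinlock_t *lock;

    for ( ; ; )
    {
        /* Sample the lock currently assigned to this cpu's sched resource. */
        lock = get_sched_res(cpu)->schedule_lock;

        spin_lock(lock);

        /*
         * The lock pointer may have been changed (e.g. to
         * sched_free_cpu_lock) while we were spinning; if so, drop the
         * stale lock and retry with the new pointer.
         */
        if ( lock == get_sched_res(cpu)->schedule_lock )
            return lock;

        spin_unlock(lock);
    }
}

Because takers re-check the pointer after acquiring, the writer only needs
the smp_mb() before the assignment while still holding the old lock, which
is what schedule_cpu_switch() relies on when switching a cpu to or from the
free-cpu lock.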