From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Wed, 8 Jan 2020 16:23:24 +0100
Message-Id: <20200108152328.27194-6-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20200108152328.27194-1-jgross@suse.com>
References: <20200108152328.27194-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack
Cc: Juergen Gross, George Dunlap, Meng Xu, Dario Faggioli

In the rt scheduler there are three instances of cpumasks allocated on
the stack. Replace them by using cpumask_scratch instead.
Signed-off-by: Juergen Gross
Reviewed-by: Meng Xu
---
 xen/common/sched/rt.c | 56 ++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index 8203b63a9d..d26f77f554 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -637,23 +637,38 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
  * and available resources
  */
 static struct sched_resource *
-rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
-    cpumask_t cpus;
+    cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
     cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
-    cpumask_and(&cpus, online, unit->cpu_hard_affinity);
+    cpumask_and(cpus, online, unit->cpu_hard_affinity);
 
-    cpu = cpumask_test_cpu(sched_unit_master(unit), &cpus)
+    cpu = cpumask_test_cpu(sched_unit_master(unit), cpus)
             ? sched_unit_master(unit)
-            : cpumask_cycle(sched_unit_master(unit), &cpus);
-    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+            : cpumask_cycle(sched_unit_master(unit), cpus);
+    ASSERT( !cpumask_empty(cpus) && cpumask_test_cpu(cpu, cpus) );
 
     return get_sched_res(cpu);
 }
 
+/*
+ * Pick a valid resource for the unit vc
+ * Valid resource of a unit is the intersection of the unit's affinity
+ * and available resources
+ */
+static struct sched_resource *
+rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+{
+    struct sched_resource *res;
+
+    res = rt_res_pick_locked(unit, unit->res->master_cpu);
+
+    return res;
+}
+
 /*
  * Init/Free related code
  */
@@ -886,11 +901,14 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     struct rt_unit *svc = rt_unit(unit);
     s_time_t now;
     spinlock_t *lock;
+    unsigned int cpu = smp_processor_id();
 
     BUG_ON( is_idle_unit(unit) );
 
     /* This is safe because unit isn't yet being scheduled */
-    sched_set_res(unit, rt_res_pick(ops, unit));
+    lock = pcpu_schedule_lock_irq(cpu);
+    sched_set_res(unit, rt_res_pick_locked(unit, cpu));
+    pcpu_schedule_unlock_irq(lock, cpu);
 
     lock = unit_schedule_lock_irq(unit);
 
@@ -1003,13 +1021,13 @@ burn_budget(const struct scheduler *ops, struct rt_unit *svc, s_time_t now)
  * lock is grabbed before calling this function
  */
 static struct rt_unit *
-runq_pick(const struct scheduler *ops, const cpumask_t *mask)
+runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
 {
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter;
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
-    cpumask_t cpu_common;
+    cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
     cpumask_t *online;
 
     list_for_each ( iter, runq )
@@ -1018,9 +1036,9 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 
         /* mask cpu_hard_affinity & cpupool & mask */
         online = cpupool_domain_master_cpumask(iter_svc->unit->domain);
-        cpumask_and(&cpu_common, online, iter_svc->unit->cpu_hard_affinity);
-        cpumask_and(&cpu_common, mask, &cpu_common);
-        if ( cpumask_empty(&cpu_common) )
+        cpumask_and(cpu_common, online, iter_svc->unit->cpu_hard_affinity);
+        cpumask_and(cpu_common, mask, cpu_common);
+        if ( cpumask_empty(cpu_common) )
             continue;
 
         ASSERT( iter_svc->cur_budget > 0 );
@@ -1092,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
     }
     else
     {
-        snext = runq_pick(ops, cpumask_of(sched_cpu));
+        snext = runq_pick(ops, cpumask_of(sched_cpu), cur_cpu);
 
         if ( snext == NULL )
             snext = rt_unit(sched_idle_unit(sched_cpu));
@@ -1186,22 +1204,22 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
     struct rt_unit *iter_svc;
     struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
-    cpumask_t not_tickled;
+    cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
     cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
 
     online = cpupool_domain_master_cpumask(new->unit->domain);
-    cpumask_and(&not_tickled, online, new->unit->cpu_hard_affinity);
-    cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
+    cpumask_and(not_tickled, online, new->unit->cpu_hard_affinity);
+    cpumask_andnot(not_tickled, not_tickled, &prv->tickled);
 
     /*
      * 1) If there are any idle CPUs, kick one.
      *    For cache benefit,we first search new->cpu.
      *    The same loop also find the one with lowest priority.
      */
-    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), &not_tickled);
+    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), not_tickled);
     while ( cpu!= nr_cpu_ids )
     {
         iter_unit = curr_on_cpu(cpu);
@@ -1216,8 +1234,8 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
             compare_unit_priority(iter_svc, latest_deadline_unit) < 0 )
             latest_deadline_unit = iter_svc;
 
-        cpumask_clear_cpu(cpu, &not_tickled);
-        cpu = cpumask_cycle(cpu, &not_tickled);
+        cpumask_clear_cpu(cpu, not_tickled);
+        cpu = cpumask_cycle(cpu, not_tickled);
     }
 
     /* 2) candicate has higher priority, kick out lowest priority unit */
-- 
2.16.4
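For readers outside the Xen tree: the patch relies on each physical CPU owning a
pre-allocated scratch cpumask that code may borrow while holding that CPU's
scheduler lock, so functions no longer need a (potentially large) cpumask_t in
their stack frame. The standalone sketch below models that pattern; the names
mirror Xen's cpumask API but the sizes, the `scratch` array, and `pick_cpu` are
simplified illustrations, not Xen's actual implementation.

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

/* With NR_CPUS = 4096, one cpumask_t is 4096/8 = 512 bytes -- too big to
 * allocate casually on the stack inside scheduler hot paths. */
#define NR_CPUS       4096
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define MASK_LONGS    ((NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG)

typedef struct { unsigned long bits[MASK_LONGS]; } cpumask_t;

/* One scratch mask per (model) CPU.  It is safe to borrow while the
 * caller holds that CPU's scheduler lock, exactly as in the patch. */
static cpumask_t scratch[8];
#define cpumask_scratch_cpu(c) (&scratch[c])

static void cpumask_and(cpumask_t *dst, const cpumask_t *a, const cpumask_t *b)
{
    for (size_t i = 0; i < MASK_LONGS; i++)
        dst->bits[i] = a->bits[i] & b->bits[i];
}

static void cpumask_set_cpu(unsigned int cpu, cpumask_t *m)
{
    m->bits[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
}

static int cpumask_test_cpu(unsigned int cpu, const cpumask_t *m)
{
    return (m->bits[cpu / BITS_PER_LONG] >> (cpu % BITS_PER_LONG)) & 1;
}

/* Before the patch: "cpumask_t cpus;" on the stack.  After: borrow the
 * locked CPU's scratch mask and compute online & affinity in place,
 * then return the first usable CPU (-1 if the intersection is empty). */
static int pick_cpu(unsigned int locked_cpu,
                    const cpumask_t *online, const cpumask_t *affinity)
{
    cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);

    cpumask_and(cpus, online, affinity);
    for (unsigned int c = 0; c < NR_CPUS; c++)
        if (cpumask_test_cpu(c, cpus))
            return (int)c;
    return -1;
}
```

The trade-off the patch makes is visible here: the scratch mask costs static
memory per CPU, but it removes a large per-call stack allocation and is race-free
because only the lock holder for that CPU may use it.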