From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli, Julien Grall, Meng Xu, Jan Beulich
Date: Mon, 30 Sep 2019 07:21:26 +0200
Message-Id: <20190930052135.11257-11-jgross@suse.com>
In-Reply-To: <20190930052135.11257-1-jgross@suse.com>
References: <20190930052135.11257-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v5 10/19] xen/sched: move per-cpu variable cpupool to struct sched_resource

Having a pointer to struct cpupool in struct sched_resource instead of
a per-cpu variable is enough.
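As an illustration (a minimal sketch, not part of the patch itself), the
change boils down to replacing the per-cpu lookup with a dereference of
the new field in the per-cpu scheduling resource:

    /* old: read the per-cpu variable */
    struct cpupool *c = per_cpu(cpupool, cpu);

    /* new: go through the scheduling resource instead */
    struct cpupool *c = get_sched_res(cpu)->cpupool;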
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V1: new patch
---
 xen/common/cpupool.c       | 4 +---
 xen/common/sched_credit.c  | 2 +-
 xen/common/sched_rt.c      | 2 +-
 xen/common/schedule.c      | 8 ++++----
 xen/include/xen/sched-if.h | 2 +-
 5 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 441a26f16c..60a85f50e1 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -34,8 +34,6 @@ static cpumask_t cpupool_locked_cpus;
 
 static DEFINE_SPINLOCK(cpupool_lock);
 
-DEFINE_PER_CPU(struct cpupool *, cpupool);
-
 static void free_cpupool_struct(struct cpupool *c)
 {
     if ( c )
@@ -504,7 +502,7 @@ static int cpupool_cpu_add(unsigned int cpu)
      * (or unplugging would have failed) and that is the default behavior
      * anyway.
      */
-    per_cpu(cpupool, cpu) = NULL;
+    get_sched_res(cpu)->cpupool = NULL;
     ret = cpupool_assign_cpu_locked(cpupool0, cpu);
 
     spin_unlock(&cpupool_lock);
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 86603adcb6..31fdcd6a2f 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1681,7 +1681,7 @@ static struct csched_unit *
 csched_load_balance(struct csched_private *prv, int cpu,
     struct csched_unit *snext, bool *stolen)
 {
-    struct cpupool *c = per_cpu(cpupool, cpu);
+    struct cpupool *c = get_sched_res(cpu)->cpupool;
     struct csched_unit *speer;
     cpumask_t workers;
     cpumask_t *online;
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index d21c416cae..6e93e50acb 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -774,7 +774,7 @@ rt_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 
     if ( prv->repl_timer.cpu == cpu )
     {
-        struct cpupool *c = per_cpu(cpupool, cpu);
+        struct cpupool *c = get_sched_res(cpu)->cpupool;
         unsigned int new_cpu = cpumask_cycle(cpu, cpupool_online_cpumask(c));
 
         /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 5e9cee1f82..249ff8a882 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1120,7 +1120,7 @@ int cpu_disable_scheduler(unsigned int cpu)
     cpumask_t online_affinity;
     int ret = 0;
 
-    c = per_cpu(cpupool, cpu);
+    c = get_sched_res(cpu)->cpupool;
     if ( c == NULL )
         return ret;
 
@@ -1189,7 +1189,7 @@ static int cpu_disable_scheduler_check(unsigned int cpu)
     struct vcpu *v;
     struct cpupool *c;
 
-    c = per_cpu(cpupool, cpu);
+    c = get_sched_res(cpu)->cpupool;
     if ( c == NULL )
         return 0;
 
@@ -2554,8 +2554,8 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
     struct scheduler *old_ops = get_sched_res(cpu)->scheduler;
     struct scheduler *new_ops = (c == NULL) ? &sched_idle_ops : c->sched;
-    struct cpupool *old_pool = per_cpu(cpupool, cpu);
     struct sched_resource *sd = get_sched_res(cpu);
+    struct cpupool *old_pool = sd->cpupool;
     spinlock_t *old_lock, *new_lock;
     unsigned long flags;
 
@@ -2637,7 +2637,7 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     sched_free_udata(old_ops, vpriv_old);
     sched_free_pdata(old_ops, ppriv_old, cpu);
 
-    per_cpu(cpupool, cpu) = c;
+    get_sched_res(cpu)->cpupool = c;
     /* When a cpu is added to a pool, trigger it to go pick up some work */
     if ( c != NULL )
         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 01821b3e5b..e675061290 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -37,6 +37,7 @@ extern const cpumask_t *sched_res_mask;
  * one it wants (This may be the one right in front of it).*/
 struct sched_resource {
     struct scheduler   *scheduler;
+    struct cpupool     *cpupool;
     spinlock_t         *schedule_lock,
                        _lock;
     struct sched_unit  *curr;
@@ -50,7 +51,6 @@ struct sched_resource {
     const cpumask_t    *cpus;           /* cpus covered by this struct     */
 };
 
-DECLARE_PER_CPU(struct cpupool *, cpupool);
 DECLARE_PER_CPU(struct sched_resource *, sched_res);
 
 static inline struct sched_resource *get_sched_res(unsigned int cpu)
-- 
2.16.4