From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Subject: [Xen-devel] [PATCH v2 43/48] xen/sched: protect scheduling resource via rcu
Date: Fri, 9 Aug 2019 16:58:28 +0200
Message-Id: <20190809145833.1020-44-jgross@suse.com>
In-Reply-To: <20190809145833.1020-1-jgross@suse.com>
References: <20190809145833.1020-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4

In order to be able to move cpus to cpupools with core scheduling
active, it is mandatory to merge multiple cpus into one scheduling resource
or to split a scheduling resource with multiple cpus in it into
multiple scheduling resources. This in turn requires modifying the
cpu <-> scheduling resource relation. In order to be able to free
unused resources, protect struct sched_resource via RCU. This ensures
there are no users left when freeing such a resource.

Signed-off-by: Juergen Gross
---
V1: new patch
---
 xen/common/cpupool.c       |   4 +
 xen/common/schedule.c      | 201 ++++++++++++++++++++++++++++++++++++++++-----
 xen/include/xen/sched-if.h |   7 +-
 3 files changed, 191 insertions(+), 21 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 4749ead846..5d5c8d5430 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -510,8 +510,10 @@ static int cpupool_cpu_add(unsigned int cpu)
      * (or unplugging would have failed) and that is the default behavior
      * anyway.
      */
+    rcu_read_lock(&sched_res_rculock);
     get_sched_res(cpu)->cpupool = NULL;
     ret = cpupool_assign_cpu_locked(cpupool0, cpu);
+    rcu_read_unlock(&sched_res_rculock);

     spin_unlock(&cpupool_lock);

@@ -596,7 +598,9 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
         }
     }

+    rcu_read_lock(&sched_res_rculock);
     sched_rm_cpu(cpu);
+    rcu_read_unlock(&sched_res_rculock);
 }

 /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 999f6e347b..f95d346330 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -73,6 +73,7 @@ static void poll_timer_fn(void *data);
 /* This is global for now so that private implementations can reach it */
 DEFINE_PER_CPU_READ_MOSTLY(struct sched_resource *, sched_res);
 static DEFINE_PER_CPU_READ_MOSTLY(unsigned int, sched_res_idx);
+DEFINE_RCU_READ_LOCK(sched_res_rculock);

 /* Scratch space for cpumasks. */
 DEFINE_PER_CPU(cpumask_t, cpumask_scratch);
@@ -276,17 +277,25 @@ static inline void vcpu_runstate_change(

 void sched_guest_idle(void (*idle) (void), unsigned int cpu)
 {
+    rcu_read_lock(&sched_res_rculock);
     atomic_inc(&get_sched_res(cpu)->urgent_count);
+    rcu_read_unlock(&sched_res_rculock);
+
     idle();
+
+    rcu_read_lock(&sched_res_rculock);
     atomic_dec(&get_sched_res(cpu)->urgent_count);
+    rcu_read_unlock(&sched_res_rculock);
 }

 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 {
-    spinlock_t *lock = likely(v == current)
-                       ? NULL : unit_schedule_lock_irq(v->sched_unit);
+    spinlock_t *lock;
     s_time_t delta;

+    rcu_read_lock(&sched_res_rculock);
+
+    lock = likely(v == current) ? NULL : unit_schedule_lock_irq(v->sched_unit);
     memcpy(runstate, &v->runstate, sizeof(*runstate));
     delta = NOW() - runstate->state_entry_time;
     if ( delta > 0 )
@@ -294,6 +303,8 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)

     if ( unlikely(lock != NULL) )
         unit_schedule_unlock_irq(lock, v->sched_unit);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 uint64_t get_cpu_idle_time(unsigned int cpu)
@@ -497,6 +508,8 @@ int sched_init_vcpu(struct vcpu *v)
         return 0;
     }

+    rcu_read_lock(&sched_res_rculock);
+
     /* The first vcpu of an unit can be set via sched_set_res(). */
     sched_set_res(unit, get_sched_res(processor));

@@ -504,6 +517,7 @@ int sched_init_vcpu(struct vcpu *v)
     if ( unit->priv == NULL )
     {
         sched_free_unit(unit, v);
+        rcu_read_unlock(&sched_res_rculock);
         return 1;
     }

@@ -530,6 +544,8 @@ int sched_init_vcpu(struct vcpu *v)
         sched_insert_unit(dom_scheduler(d), unit);
     }

+    rcu_read_unlock(&sched_res_rculock);
+
     return 0;
 }

@@ -557,6 +573,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
     void *unitdata;
     struct scheduler *old_ops;
     void *old_domdata;
+    int ret = 0;

     for_each_vcpu ( d, v )
     {
@@ -564,15 +581,21 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
             return -EBUSY;
     }

+    rcu_read_lock(&sched_res_rculock);
+
     domdata = sched_alloc_domdata(c->sched, d);
     if ( IS_ERR(domdata) )
-        return PTR_ERR(domdata);
+    {
+        ret = PTR_ERR(domdata);
+        goto out;
+    }

     unit_priv = xzalloc_array(void *, d->max_vcpus);
     if ( unit_priv == NULL )
     {
         sched_free_domdata(c->sched, domdata);
-        return -ENOMEM;
+        ret = -ENOMEM;
+        goto out;
     }

     for_each_sched_unit ( d, unit )
@@ -584,7 +607,8 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
                 xfree(unit_priv[unit->unit_id]);
             xfree(unit_priv);
             sched_free_domdata(c->sched, domdata);
-            return -ENOMEM;
+            ret = -ENOMEM;
+            goto out;
         }
     }

@@ -646,7 +670,10 @@ int sched_move_domain(struct domain *d, struct cpupool *c)

     xfree(unit_priv);

-    return 0;
+out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return ret;
 }

 void sched_destroy_vcpu(struct vcpu *v)
@@ -664,9 +691,13 @@ void sched_destroy_vcpu(struct vcpu *v)
      */
     if ( unit->vcpu_list == v )
     {
+        rcu_read_lock(&sched_res_rculock);
+
         sched_remove_unit(vcpu_scheduler(v), unit);
         sched_free_vdata(vcpu_scheduler(v), unit->priv);
         sched_free_unit(unit, v);
+
+        rcu_read_unlock(&sched_res_rculock);
     }
 }

@@ -684,7 +715,12 @@ int sched_init_domain(struct domain *d, int poolid)
     SCHED_STAT_CRANK(dom_init);
     TRACE_1D(TRC_SCHED_DOM_ADD, d->domain_id);

+    rcu_read_lock(&sched_res_rculock);
+
     sdom = sched_alloc_domdata(dom_scheduler(d), d);
+
+    rcu_read_unlock(&sched_res_rculock);
+
     if ( IS_ERR(sdom) )
         return PTR_ERR(sdom);

@@ -702,9 +738,13 @@ void sched_destroy_domain(struct domain *d)
         SCHED_STAT_CRANK(dom_destroy);
         TRACE_1D(TRC_SCHED_DOM_REM, d->domain_id);

+        rcu_read_lock(&sched_res_rculock);
+
         sched_free_domdata(dom_scheduler(d), d->sched_priv);
         d->sched_priv = NULL;

+        rcu_read_unlock(&sched_res_rculock);
+
         cpupool_rm_domain(d);
     }
 }
@@ -738,11 +778,15 @@ void vcpu_sleep_nosync(struct vcpu *v)

     TRACE_2D(TRC_SCHED_SLEEP, v->domain->domain_id, v->vcpu_id);

+    rcu_read_lock(&sched_res_rculock);
+
     lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);

     vcpu_sleep_nosync_locked(v);

     unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void vcpu_sleep_sync(struct vcpu *v)
@@ -763,6 +807,8 @@ void vcpu_wake(struct vcpu *v)

     TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);

+    rcu_read_lock(&sched_res_rculock);
+
     lock = unit_schedule_lock_irqsave(unit, &flags);

     if ( likely(vcpu_runnable(v)) )
@@ -783,6 +829,8 @@ void vcpu_wake(struct vcpu *v)
     }

     unit_schedule_unlock_irqrestore(lock, flags, unit);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void vcpu_unblock(struct vcpu *v)
@@ -816,6 +864,8 @@ static void sched_unit_move_locked(struct sched_unit *unit,
     unsigned int old_cpu = unit->res->processor;
     struct vcpu *v;

+    rcu_read_lock(&sched_res_rculock);
+
     /*
      * Transfer urgency status to new CPU before switching CPUs, as
      * once the switch occurs, v->is_urgent is no longer protected by
@@ -835,6 +885,8 @@ static void sched_unit_move_locked(struct sched_unit *unit,
      * pointer can't change while the current lock is held.
      */
     sched_migrate(unit_scheduler(unit), unit, new_cpu);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 /*
@@ -1019,6 +1071,8 @@ void restore_vcpu_affinity(struct domain *d)

     ASSERT(system_state == SYS_STATE_resume);

+    rcu_read_lock(&sched_res_rculock);
+
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
@@ -1075,6 +1129,8 @@ void restore_vcpu_affinity(struct domain *d)
             sched_move_irqs(unit);
     }

+    rcu_read_unlock(&sched_res_rculock);
+
     domain_update_node_affinity(d);
 }

@@ -1090,9 +1146,11 @@ int cpu_disable_scheduler(unsigned int cpu)
     cpumask_t online_affinity;
     int ret = 0;

+    rcu_read_lock(&sched_res_rculock);
+
     c = get_sched_res(cpu)->cpupool;
     if ( c == NULL )
-        return ret;
+        goto out;

     for_each_domain_in_cpupool ( d, c )
     {
@@ -1150,6 +1208,9 @@ int cpu_disable_scheduler(unsigned int cpu)
         }
     }

+out:
+    rcu_read_unlock(&sched_res_rculock);
+
     return ret;
 }

@@ -1183,7 +1244,9 @@ void sched_set_affinity(
 {
     struct sched_unit *unit = v->sched_unit;

+    rcu_read_lock(&sched_res_rculock);
     sched_adjust_affinity(dom_scheduler(unit->domain), unit, hard, soft);
+    rcu_read_unlock(&sched_res_rculock);

     if ( hard )
         cpumask_copy(unit->cpu_hard_affinity, hard);
@@ -1203,6 +1266,8 @@ static int vcpu_set_affinity(
     spinlock_t *lock;
     int ret = 0;

+    rcu_read_lock(&sched_res_rculock);
+
     lock = unit_schedule_lock_irq(unit);

     if ( v->affinity_broken )
@@ -1231,6 +1296,8 @@ static int vcpu_set_affinity(

     sched_unit_migrate_finish(unit);

+    rcu_read_unlock(&sched_res_rculock);
+
     return ret;
 }

@@ -1357,11 +1424,16 @@ static long do_poll(struct sched_poll *sched_poll)
 long vcpu_yield(void)
 {
     struct vcpu * v=current;
-    spinlock_t *lock = unit_schedule_lock_irq(v->sched_unit);
+    spinlock_t *lock;

+    rcu_read_lock(&sched_res_rculock);
+
+    lock = unit_schedule_lock_irq(v->sched_unit);
     sched_yield(vcpu_scheduler(v), v->sched_unit);
     unit_schedule_unlock_irq(lock, v->sched_unit);

+    rcu_read_unlock(&sched_res_rculock);
+
     SCHED_STAT_CRANK(vcpu_yield);

     TRACE_2D(TRC_SCHED_YIELD, current->domain->domain_id, current->vcpu_id);
@@ -1458,6 +1530,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     int ret = -EINVAL;
     bool migrate;

+    rcu_read_lock(&sched_res_rculock);
+
     lock = unit_schedule_lock_irq(unit);

     if ( cpu == NR_CPUS )
@@ -1497,6 +1571,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     if ( migrate )
         sched_unit_migrate_finish(unit);

+    rcu_read_unlock(&sched_res_rculock);
+
     return ret;
 }

@@ -1708,9 +1784,13 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)

     /* NB: the pluggable scheduler code needs to take care
      * of locking by itself. */
+    rcu_read_lock(&sched_res_rculock);
+
     if ( (ret = sched_adjust_dom(dom_scheduler(d), d, op)) == 0 )
         TRACE_1D(TRC_SCHED_ADJDOM, d->domain_id);

+    rcu_read_unlock(&sched_res_rculock);
+
     return ret;
 }

@@ -1731,9 +1811,13 @@ long sched_adjust_global(struct xen_sysctl_scheduler_op *op)
     if ( pool == NULL )
         return -ESRCH;

+    rcu_read_lock(&sched_res_rculock);
+
     rc = ((op->sched_id == pool->sched->sched_id) ?
           sched_adjust_cpupool(pool->sched, op) : -EINVAL);

+    rcu_read_unlock(&sched_res_rculock);
+
     cpupool_put(pool);

     return rc;
@@ -1937,7 +2021,11 @@ static void context_saved(struct sched_resource *sd, struct vcpu *vprev,
 void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)
 {
     struct sched_unit *next = vnext->sched_unit;
-    struct sched_resource *sd = get_sched_res(smp_processor_id());
+    struct sched_resource *sd;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sd = get_sched_res(smp_processor_id());

     if ( atomic_read(&next->rendezvous_out_cnt) )
     {
@@ -1958,6 +2046,8 @@ void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)

     if ( is_idle_vcpu(vprev) && vprev != vnext )
         vprev->sched_unit = sd->sched_unit_idle;
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 static void sched_context_switch(struct vcpu *vprev, struct vcpu *vnext,
@@ -1975,6 +2065,8 @@ static void sched_context_switch(struct vcpu *vprev, struct vcpu *vnext,
             vnext->sched_unit =
                 get_sched_res(smp_processor_id())->sched_unit_idle;

+        rcu_read_unlock(&sched_res_rculock);
+
         trace_continue_running(vnext);
         return continue_running(vprev);
     }
@@ -1988,6 +2080,8 @@ static void sched_context_switch(struct vcpu *vprev, struct vcpu *vnext,

     vcpu_periodic_timer_work(vnext);

+    rcu_read_unlock(&sched_res_rculock);
+
     context_switch(vprev, vnext);
 }

@@ -2135,6 +2229,8 @@ static void sched_slave(void)

     ASSERT_NOT_IN_ATOMIC();

+    rcu_read_lock(&sched_res_rculock);
+
     lock = pcpu_schedule_lock_irq(cpu);

     now = NOW();
@@ -2158,6 +2254,8 @@ static void sched_slave(void)
     {
         pcpu_schedule_unlock_irq(lock, cpu);

+        rcu_read_unlock(&sched_res_rculock);
+
         /* Check for failed forced context switch. */
         if ( do_softirq )
             raise_softirq(SCHEDULE_SOFTIRQ);
@@ -2188,13 +2286,16 @@ static void schedule(void)
     struct sched_resource *sd;
     spinlock_t *lock;
     int cpu = smp_processor_id();
-    unsigned int gran = get_sched_res(cpu)->granularity;
+    unsigned int gran;

     ASSERT_NOT_IN_ATOMIC();

     SCHED_STAT_CRANK(sched_run);

+    rcu_read_lock(&sched_res_rculock);
+
     sd = get_sched_res(cpu);
+    gran = sd->granularity;

     lock = pcpu_schedule_lock_irq(cpu);

@@ -2206,6 +2307,8 @@ static void schedule(void)
          */
         pcpu_schedule_unlock_irq(lock, cpu);

+        rcu_read_unlock(&sched_res_rculock);
+
         raise_softirq(SCHEDULE_SOFTIRQ);
         return sched_slave();
     }
@@ -2315,14 +2418,27 @@ static int cpu_schedule_up(unsigned int cpu)
     return 0;
 }

+static void sched_res_free(struct rcu_head *head)
+{
+    struct sched_resource *sd = container_of(head, struct sched_resource, rcu);
+
+    xfree(sd);
+}
+
 static void cpu_schedule_down(unsigned int cpu)
 {
-    struct sched_resource *sd = get_sched_res(cpu);
+    struct sched_resource *sd;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sd = get_sched_res(cpu);

     kill_timer(&sd->s_timer);

     set_sched_res(cpu, NULL);
-    xfree(sd);
+    call_rcu(&sd->rcu, sched_res_free);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void sched_rm_cpu(unsigned int cpu)
@@ -2342,6 +2458,8 @@ static int cpu_schedule_callback(
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;

+    rcu_read_lock(&sched_res_rculock);
+
     /*
      * From the scheduler perspective, bringing up a pCPU requires
      * allocating and initializing the per-pCPU scheduler specific data,
@@ -2388,6 +2506,8 @@ static int cpu_schedule_callback(
         break;
     }

+    rcu_read_unlock(&sched_res_rculock);
+
     return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
 }

@@ -2477,8 +2597,13 @@ void __init scheduler_init(void)
     idle_domain->max_vcpus = nr_cpu_ids;
     if ( vcpu_create(idle_domain, 0) == NULL )
         BUG();
+
+    rcu_read_lock(&sched_res_rculock);
+
     get_sched_res(0)->curr = idle_vcpu[0]->sched_unit;
     get_sched_res(0)->sched_unit_idle = idle_vcpu[0]->sched_unit;
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 /*
@@ -2491,9 +2616,14 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     struct vcpu *idle;
     void *ppriv, *vpriv;
     struct scheduler *new_ops = c->sched;
-    struct sched_resource *sd = get_sched_res(cpu);
+    struct sched_resource *sd;
     spinlock_t *old_lock, *new_lock;
     unsigned long flags;
+    int ret = 0;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sd = get_sched_res(cpu);

     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, c->cpu_valid));
@@ -2513,13 +2643,18 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     idle = idle_vcpu[cpu];
     ppriv = sched_alloc_pdata(new_ops, cpu);
     if ( IS_ERR(ppriv) )
-        return PTR_ERR(ppriv);
+    {
+        ret = PTR_ERR(ppriv);
+        goto out;
+    }
+
     vpriv = sched_alloc_vdata(new_ops, idle->sched_unit,
                               idle->domain->sched_priv);
     if ( vpriv == NULL )
     {
         sched_free_pdata(new_ops, ppriv, cpu);
-        return -ENOMEM;
+        ret = -ENOMEM;
+        goto out;
     }

     /*
@@ -2558,7 +2693,10 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     /* The cpu is added to a pool, trigger it to go pick up some work */
     cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);

-    return 0;
+out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return ret;
 }

 /*
@@ -2571,11 +2709,16 @@ int schedule_cpu_rm(unsigned int cpu)
 {
     struct vcpu *idle;
     void *ppriv_old, *vpriv_old;
-    struct sched_resource *sd = get_sched_res(cpu);
-    struct scheduler *old_ops = sd->scheduler;
+    struct sched_resource *sd;
+    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;

+    rcu_read_lock(&sched_res_rculock);
+
+    sd = get_sched_res(cpu);
+    old_ops = sd->scheduler;
+
     ASSERT(sd->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sd->cpupool->cpu_valid));
@@ -2608,6 +2751,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sd->granularity = 1;
     sd->cpupool = NULL;

+    rcu_read_unlock(&sched_res_rculock);
+
     return 0;
 }

@@ -2656,6 +2801,8 @@ void schedule_dump(struct cpupool *c)

     /* Locking, if necessary, must be handled withing each scheduler */

+    rcu_read_lock(&sched_res_rculock);
+
     if ( c != NULL )
     {
         sched = c->sched;
@@ -2675,6 +2822,8 @@ void schedule_dump(struct cpupool *c)
         for_each_cpu (i, cpus)
             sched_dump_cpu_state(sched, i);
     }
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void sched_tick_suspend(void)
@@ -2682,10 +2831,14 @@ void sched_tick_suspend(void)
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();

+    rcu_read_lock(&sched_res_rculock);
+
     sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_suspend(sched, cpu);
     rcu_idle_enter(cpu);
     rcu_idle_timer_start();
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void sched_tick_resume(void)
@@ -2693,10 +2846,14 @@ void sched_tick_resume(void)
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();

+    rcu_read_lock(&sched_res_rculock);
+
     rcu_idle_timer_stop();
     rcu_idle_exit(cpu);
     sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_resume(sched, cpu);
+
+    rcu_read_unlock(&sched_res_rculock);
 }

 void wait(void)
@@ -2711,7 +2868,13 @@ void wait(void)
  */
 bool sched_has_urgent_vcpu(void)
 {
-    return atomic_read(&get_sched_res(smp_processor_id())->urgent_count);
+    int val;
+
+    rcu_read_lock(&sched_res_rculock);
+    val = atomic_read(&get_sched_res(smp_processor_id())->urgent_count);
+    rcu_read_unlock(&sched_res_rculock);
+
+    return val;
 }

 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 606a0d4a25..de50b4ebca 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -10,6 +10,7 @@

 #include
 #include
+#include <xen/rcupdate.h>

 /* A global pointer to the initial cpupool (POOL0). */
 extern struct cpupool *cpupool0;
@@ -58,20 +59,22 @@ struct sched_resource {
     unsigned int        processor;
     unsigned int        granularity;
     const cpumask_t    *cpus;           /* cpus covered by this struct     */
+    struct rcu_head     rcu;
 };

 #define curr_on_cpu(c)      (get_sched_res(c)->curr)

 DECLARE_PER_CPU(struct sched_resource *, sched_res);
+extern rcu_read_lock_t sched_res_rculock;

 static inline struct sched_resource *get_sched_res(unsigned int cpu)
 {
-    return per_cpu(sched_res, cpu);
+    return rcu_dereference(per_cpu(sched_res, cpu));
 }

 static inline void set_sched_res(unsigned int cpu, struct sched_resource *res)
 {
-    per_cpu(sched_res, cpu) = res;
+    rcu_assign_pointer(per_cpu(sched_res, cpu), res);
 }

 static inline bool is_idle_unit(const struct sched_unit *unit)
-- 
2.16.4
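
As a usage note (not part of the patch itself): the convention the change establishes
is that any dereference of the per-cpu sched_res pointer, i.e. any use of
get_sched_res(), happens inside an RCU read-side section on sched_res_rculock, so
that a concurrent cpu_schedule_down() or schedule_cpu_rm(), which hands the structure
to call_rcu(), cannot free it while it is still in use. A minimal sketch of a reader,
with a hypothetical helper name chosen purely for illustration, could look like this:

/* Illustrative only -- hypothetical helper, not contained in the patch. */
static unsigned int sched_res_granularity(unsigned int cpu)
{
    unsigned int gran;

    /*
     * Enter an RCU read-side section: a concurrent call_rcu() free of the
     * scheduling resource is deferred until the section is left again.
     */
    rcu_read_lock(&sched_res_rculock);
    gran = get_sched_res(cpu)->granularity;
    rcu_read_unlock(&sched_res_rculock);

    /* The value may already be stale once the read-side section is left. */
    return gran;
}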