From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Meng Xu, Jan Beulich
Date: Fri, 27 Sep 2019 09:00:15 +0200
Message-Id: <20190927070050.12405-12-jgross@suse.com>
In-Reply-To: <20190927070050.12405-1-jgross@suse.com>
References: <20190927070050.12405-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 11/46] xen/sched: rename scheduler related perf counters

Rename the scheduler related perf counters from vcpu* to unit* where
appropriate.
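The counters touched below follow the usual two-part pattern: each name is
declared once via PERFCOUNTER() in xen/include/xen/perfc_defn.h and bumped at
its call sites via SCHED_STAT_CRANK(), so the declaration and all users have
to be renamed together. A minimal sketch of that pattern, for illustration
only (not part of the change; the function body is elided):

    /* xen/include/xen/perfc_defn.h: declare the counter under its new name */
    PERFCOUNTER(unit_sleep,             "sched: unit_sleep")

    /* xen/common/sched_credit.c: crank it where a unit is put to sleep */
    static void
    csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
    {
        SCHED_STAT_CRANK(unit_sleep);   /* was: SCHED_STAT_CRANK(vcpu_sleep) */
        /* ... */
    }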
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/sched_credit.c    | 32 ++++++++++++++++----------------
 xen/common/sched_credit2.c   | 18 +++++++++---------
 xen/common/sched_null.c      | 18 +++++++++---------
 xen/common/sched_rt.c        | 16 ++++++++--------
 xen/include/xen/perfc_defn.h | 30 +++++++++++++++---------------
 5 files changed, 57 insertions(+), 57 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 56e47d5e54..350f9636fa 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -668,7 +668,7 @@ __csched_vcpu_check(struct vcpu *vc)
         BUG_ON( !is_idle_vcpu(vc) );
     }
 
-    SCHED_STAT_CRANK(vcpu_check);
+    SCHED_STAT_CRANK(unit_check);
 }
 #define CSCHED_VCPU_CHECK(_vc)  (__csched_vcpu_check(_vc))
 #else
@@ -692,7 +692,7 @@ __csched_vcpu_is_cache_hot(const struct csched_private *prv,
                (NOW() - svc->last_sched_time) < prv->vcpu_migr_delay;
 
     if ( hot )
-        SCHED_STAT_CRANK(vcpu_hot);
+        SCHED_STAT_CRANK(unit_hot);
 
     return hot;
 }
@@ -881,7 +881,7 @@ __csched_vcpu_acct_start(struct csched_private *prv, struct csched_unit *svc)
     if ( list_empty(&svc->active_vcpu_elem) )
     {
         SCHED_VCPU_STAT_CRANK(svc, state_active);
-        SCHED_STAT_CRANK(acct_vcpu_active);
+        SCHED_STAT_CRANK(acct_unit_active);
 
         sdom->active_vcpu_count++;
         list_add(&svc->active_vcpu_elem, &sdom->active_vcpu);
@@ -908,7 +908,7 @@ __csched_vcpu_acct_stop_locked(struct csched_private *prv,
     BUG_ON( list_empty(&svc->active_vcpu_elem) );
 
     SCHED_VCPU_STAT_CRANK(svc, state_idle);
-    SCHED_STAT_CRANK(acct_vcpu_idle);
+    SCHED_STAT_CRANK(acct_unit_idle);
 
     BUG_ON( prv->weight < sdom->weight );
     sdom->active_vcpu_count--;
@@ -1010,7 +1010,7 @@ csched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
     svc->pri = is_idle_domain(vc->domain) ?
         CSCHED_PRI_IDLE : CSCHED_PRI_TS_UNDER;
     SCHED_VCPU_STATS_RESET(svc);
-    SCHED_STAT_CRANK(vcpu_alloc);
+    SCHED_STAT_CRANK(unit_alloc);
     return svc;
 }
 
@@ -1038,7 +1038,7 @@ csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 
     unit_schedule_unlock_irq(lock, unit);
 
-    SCHED_STAT_CRANK(vcpu_insert);
+    SCHED_STAT_CRANK(unit_insert);
 }
 
 static void
@@ -1058,13 +1058,13 @@ csched_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
     struct csched_unit * const svc = CSCHED_UNIT(unit);
     struct csched_dom * const sdom = svc->sdom;
 
-    SCHED_STAT_CRANK(vcpu_remove);
+    SCHED_STAT_CRANK(unit_remove);
 
     ASSERT(!__vcpu_on_runq(svc));
 
     if ( test_and_clear_bit(CSCHED_FLAG_VCPU_PARKED, &svc->flags) )
     {
-        SCHED_STAT_CRANK(vcpu_unpark);
+        SCHED_STAT_CRANK(unit_unpark);
         vcpu_unpause(svc->vcpu);
     }
 
@@ -1085,7 +1085,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
     struct csched_unit * const svc = CSCHED_UNIT(unit);
     unsigned int cpu = vc->processor;
 
-    SCHED_STAT_CRANK(vcpu_sleep);
+    SCHED_STAT_CRANK(unit_sleep);
 
     BUG_ON( is_idle_vcpu(vc) );
 
@@ -1114,19 +1114,19 @@ csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 
     if ( unlikely(curr_on_cpu(vc->processor) == unit) )
     {
-        SCHED_STAT_CRANK(vcpu_wake_running);
+        SCHED_STAT_CRANK(unit_wake_running);
         return;
     }
     if ( unlikely(__vcpu_on_runq(svc)) )
     {
-        SCHED_STAT_CRANK(vcpu_wake_onrunq);
+        SCHED_STAT_CRANK(unit_wake_onrunq);
         return;
     }
 
     if ( likely(vcpu_runnable(vc)) )
-        SCHED_STAT_CRANK(vcpu_wake_runnable);
+        SCHED_STAT_CRANK(unit_wake_runnable);
     else
-        SCHED_STAT_CRANK(vcpu_wake_not_runnable);
+        SCHED_STAT_CRANK(unit_wake_not_runnable);
 
     /*
      * We temporarly boost the priority of awaking VCPUs!
@@ -1156,7 +1156,7 @@ csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
          !test_bit(CSCHED_FLAG_VCPU_PARKED, &svc->flags) )
     {
         TRACE_2D(TRC_CSCHED_BOOST_START, vc->domain->domain_id, vc->vcpu_id);
-        SCHED_STAT_CRANK(vcpu_boost);
+        SCHED_STAT_CRANK(unit_boost);
         svc->pri = CSCHED_PRI_TS_BOOST;
     }
 
@@ -1515,7 +1515,7 @@ csched_acct(void* dummy)
                      credit < -credit_cap &&
                      !test_and_set_bit(CSCHED_FLAG_VCPU_PARKED, &svc->flags) )
                 {
-                    SCHED_STAT_CRANK(vcpu_park);
+                    SCHED_STAT_CRANK(unit_park);
                     vcpu_pause_nosync(svc->vcpu);
                 }
 
@@ -1539,7 +1539,7 @@ csched_acct(void* dummy)
                      * call to make sure the VCPU's priority is not boosted
                      * if it is woken up here.
                      */
-                    SCHED_STAT_CRANK(vcpu_unpark);
+                    SCHED_STAT_CRANK(unit_unpark);
                     vcpu_unpause(svc->vcpu);
                     clear_bit(CSCHED_FLAG_VCPU_PARKED, &svc->flags);
                 }
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 4c0f31733d..7b0872eba5 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2020,7 +2020,7 @@ csched2_vcpu_check(struct vcpu *vc)
     {
         BUG_ON( !is_idle_vcpu(vc) );
     }
-    SCHED_STAT_CRANK(vcpu_check);
+    SCHED_STAT_CRANK(unit_check);
 }
 #define CSCHED2_VCPU_CHECK(_vc)  (csched2_vcpu_check(_vc))
 #else
@@ -2067,7 +2067,7 @@ csched2_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
     svc->budget_quota = 0;
     INIT_LIST_HEAD(&svc->parked_elem);
 
-    SCHED_STAT_CRANK(vcpu_alloc);
+    SCHED_STAT_CRANK(unit_alloc);
 
     return svc;
 }
@@ -2079,7 +2079,7 @@ csched2_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
     struct csched2_unit * const svc = csched2_unit(unit);
 
     ASSERT(!is_idle_vcpu(vc));
-    SCHED_STAT_CRANK(vcpu_sleep);
+    SCHED_STAT_CRANK(unit_sleep);
 
     if ( curr_on_cpu(vc->processor) == unit )
     {
@@ -2109,20 +2109,20 @@ csched2_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 
     if ( unlikely(curr_on_cpu(cpu) == unit) )
     {
-        SCHED_STAT_CRANK(vcpu_wake_running);
+        SCHED_STAT_CRANK(unit_wake_running);
         goto out;
     }
 
     if ( unlikely(vcpu_on_runq(svc)) )
     {
-        SCHED_STAT_CRANK(vcpu_wake_onrunq);
+        SCHED_STAT_CRANK(unit_wake_onrunq);
         goto out;
     }
 
     if ( likely(vcpu_runnable(vc)) )
-        SCHED_STAT_CRANK(vcpu_wake_runnable);
+        SCHED_STAT_CRANK(unit_wake_runnable);
     else
-        SCHED_STAT_CRANK(vcpu_wake_not_runnable);
+        SCHED_STAT_CRANK(unit_wake_not_runnable);
 
     /* If the context hasn't been saved for this vcpu yet, we can't put it on
      * another runqueue.  Instead, we set a flag so that it will be put on the runqueue
@@ -3138,7 +3138,7 @@ csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 
     sdom->nr_vcpus++;
 
-    SCHED_STAT_CRANK(vcpu_insert);
+    SCHED_STAT_CRANK(unit_insert);
 
     CSCHED2_VCPU_CHECK(vc);
 }
@@ -3161,7 +3161,7 @@ csched2_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
     ASSERT(!is_idle_vcpu(vc));
     ASSERT(list_empty(&svc->runq_elem));
 
-    SCHED_STAT_CRANK(vcpu_remove);
+    SCHED_STAT_CRANK(unit_remove);
 
     /* Remove from runqueue */
     lock = unit_schedule_lock_irq(unit);
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 23e029a4dd..06acaf9f90 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -199,7 +199,7 @@ static void *null_alloc_udata(const struct scheduler *ops,
     INIT_LIST_HEAD(&nvc->waitq_elem);
     nvc->vcpu = v;
 
-    SCHED_STAT_CRANK(vcpu_alloc);
+    SCHED_STAT_CRANK(unit_alloc);
 
     return nvc;
 }
@@ -502,7 +502,7 @@ static void null_unit_insert(const struct scheduler *ops,
     }
     spin_unlock_irq(lock);
 
-    SCHED_STAT_CRANK(vcpu_insert);
+    SCHED_STAT_CRANK(unit_insert);
 }
 
 static void null_unit_remove(const struct scheduler *ops,
@@ -540,7 +540,7 @@ static void null_unit_remove(const struct scheduler *ops,
  out:
     unit_schedule_unlock_irq(lock, unit);
 
-    SCHED_STAT_CRANK(vcpu_remove);
+    SCHED_STAT_CRANK(unit_remove);
 }
 
 static void null_unit_wake(const struct scheduler *ops,
@@ -555,21 +555,21 @@ static void null_unit_wake(const struct scheduler *ops,
 
     if ( unlikely(curr_on_cpu(cpu) == unit) )
     {
-        SCHED_STAT_CRANK(vcpu_wake_running);
+        SCHED_STAT_CRANK(unit_wake_running);
         return;
     }
 
     if ( unlikely(!list_empty(&nvc->waitq_elem)) )
     {
         /* Not exactly "on runq", but close enough for reusing the counter */
-        SCHED_STAT_CRANK(vcpu_wake_onrunq);
+        SCHED_STAT_CRANK(unit_wake_onrunq);
         return;
     }
 
     if ( likely(vcpu_runnable(v)) )
-        SCHED_STAT_CRANK(vcpu_wake_runnable);
+        SCHED_STAT_CRANK(unit_wake_runnable);
     else
-        SCHED_STAT_CRANK(vcpu_wake_not_runnable);
+        SCHED_STAT_CRANK(unit_wake_not_runnable);
 
     /*
      * If a vcpu is neither on a pCPU nor in the waitqueue, it means it was
@@ -649,7 +649,7 @@ static void null_unit_sleep(const struct scheduler *ops,
     if ( likely(!tickled && curr_on_cpu(cpu) == unit) )
         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
 
-    SCHED_STAT_CRANK(vcpu_sleep);
+    SCHED_STAT_CRANK(unit_sleep);
 }
 
 static struct sched_resource *
@@ -770,7 +770,7 @@ static inline void null_vcpu_check(struct vcpu *v)
     else
         BUG_ON(!is_idle_vcpu(v));
 
-    SCHED_STAT_CRANK(vcpu_check);
+    SCHED_STAT_CRANK(unit_check);
 }
 #define NULL_VCPU_CHECK(v)  (null_vcpu_check(v))
 #else
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index db24a70a91..3fbe8dad8d 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -861,7 +861,7 @@ rt_alloc_udata(const struct scheduler *ops, struct sched_unit *unit, void *dd)
     if ( !is_idle_vcpu(vc) )
         svc->budget = RTDS_DEFAULT_BUDGET;
 
-    SCHED_STAT_CRANK(vcpu_alloc);
+    SCHED_STAT_CRANK(unit_alloc);
 
     return svc;
 }
@@ -910,7 +910,7 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     }
     unit_schedule_unlock_irq(lock, unit);
 
-    SCHED_STAT_CRANK(vcpu_insert);
+    SCHED_STAT_CRANK(unit_insert);
 }
 
 /*
@@ -923,7 +923,7 @@ rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
     struct rt_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
-    SCHED_STAT_CRANK(vcpu_remove);
+    SCHED_STAT_CRANK(unit_remove);
 
     BUG_ON( sdom == NULL );
 
@@ -1145,7 +1145,7 @@ rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
     struct rt_unit * const svc = rt_unit(unit);
 
     BUG_ON( is_idle_vcpu(vc) );
-    SCHED_STAT_CRANK(vcpu_sleep);
+    SCHED_STAT_CRANK(unit_sleep);
 
     if ( curr_on_cpu(vc->processor) == unit )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
@@ -1266,21 +1266,21 @@ rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 
     if ( unlikely(curr_on_cpu(vc->processor) == unit) )
     {
-        SCHED_STAT_CRANK(vcpu_wake_running);
+        SCHED_STAT_CRANK(unit_wake_running);
         return;
     }
 
     /* on RunQ/DepletedQ, just update info is ok */
     if ( unlikely(vcpu_on_q(svc)) )
     {
-        SCHED_STAT_CRANK(vcpu_wake_onrunq);
+        SCHED_STAT_CRANK(unit_wake_onrunq);
         return;
     }
 
     if ( likely(vcpu_runnable(vc)) )
-        SCHED_STAT_CRANK(vcpu_wake_runnable);
+        SCHED_STAT_CRANK(unit_wake_runnable);
     else
-        SCHED_STAT_CRANK(vcpu_wake_not_runnable);
+        SCHED_STAT_CRANK(unit_wake_not_runnable);
 
     /*
      * If a deadline passed while svc was asleep/blocked, we need new
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
index 1ad4384080..08b182ccd9 100644
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -21,20 +21,20 @@ PERFCOUNTER(sched_ctx,              "sched: context switches")
 PERFCOUNTER(schedule,               "sched: specific scheduler")
 PERFCOUNTER(dom_init,               "sched: dom_init")
 PERFCOUNTER(dom_destroy,            "sched: dom_destroy")
-PERFCOUNTER(vcpu_alloc,             "sched: vcpu_alloc")
-PERFCOUNTER(vcpu_insert,            "sched: vcpu_insert")
-PERFCOUNTER(vcpu_remove,            "sched: vcpu_remove")
-PERFCOUNTER(vcpu_sleep,             "sched: vcpu_sleep")
 PERFCOUNTER(vcpu_yield,             "sched: vcpu_yield")
-PERFCOUNTER(vcpu_wake_running,      "sched: vcpu_wake_running")
-PERFCOUNTER(vcpu_wake_onrunq,       "sched: vcpu_wake_onrunq")
-PERFCOUNTER(vcpu_wake_runnable,     "sched: vcpu_wake_runnable")
-PERFCOUNTER(vcpu_wake_not_runnable, "sched: vcpu_wake_not_runnable")
+PERFCOUNTER(unit_alloc,             "sched: unit_alloc")
+PERFCOUNTER(unit_insert,            "sched: unit_insert")
+PERFCOUNTER(unit_remove,            "sched: unit_remove")
+PERFCOUNTER(unit_sleep,             "sched: unit_sleep")
+PERFCOUNTER(unit_wake_running,      "sched: unit_wake_running")
+PERFCOUNTER(unit_wake_onrunq,       "sched: unit_wake_onrunq")
+PERFCOUNTER(unit_wake_runnable,     "sched: unit_wake_runnable")
+PERFCOUNTER(unit_wake_not_runnable, "sched: unit_wake_not_runnable")
 PERFCOUNTER(tickled_no_cpu,         "sched: tickled_no_cpu")
 PERFCOUNTER(tickled_idle_cpu,       "sched: tickled_idle_cpu")
 PERFCOUNTER(tickled_idle_cpu_excl,  "sched: tickled_idle_cpu_exclusive")
 PERFCOUNTER(tickled_busy_cpu,       "sched: tickled_busy_cpu")
-PERFCOUNTER(vcpu_check,             "sched: vcpu_check")
+PERFCOUNTER(unit_check,             "sched: unit_check")
 
 /* credit specific counters */
 PERFCOUNTER(delay_ms,               "csched: delay")
@@ -43,11 +43,11 @@ PERFCOUNTER(acct_no_work,           "csched: acct_no_work")
 PERFCOUNTER(acct_balance,           "csched: acct_balance")
 PERFCOUNTER(acct_reorder,           "csched: acct_reorder")
 PERFCOUNTER(acct_min_credit,        "csched: acct_min_credit")
-PERFCOUNTER(acct_vcpu_active,       "csched: acct_vcpu_active")
-PERFCOUNTER(acct_vcpu_idle,         "csched: acct_vcpu_idle")
-PERFCOUNTER(vcpu_boost,             "csched: vcpu_boost")
-PERFCOUNTER(vcpu_park,              "csched: vcpu_park")
-PERFCOUNTER(vcpu_unpark,            "csched: vcpu_unpark")
+PERFCOUNTER(acct_unit_active,       "csched: acct_unit_active")
+PERFCOUNTER(acct_unit_idle,         "csched: acct_unit_idle")
+PERFCOUNTER(unit_boost,             "csched: unit_boost")
+PERFCOUNTER(unit_park,              "csched: unit_park")
+PERFCOUNTER(unit_unpark,            "csched: unit_unpark")
 PERFCOUNTER(load_balance_idle,      "csched: load_balance_idle")
 PERFCOUNTER(load_balance_over,      "csched: load_balance_over")
 PERFCOUNTER(load_balance_other,     "csched: load_balance_other")
@@ -57,7 +57,7 @@ PERFCOUNTER(steal_peer_idle,        "csched: steal_peer_idle")
 PERFCOUNTER(migrate_queued,         "csched: migrate_queued")
 PERFCOUNTER(migrate_running,        "csched: migrate_running")
 PERFCOUNTER(migrate_kicked_away,    "csched: migrate_kicked_away")
-PERFCOUNTER(vcpu_hot,               "csched: vcpu_hot")
+PERFCOUNTER(unit_hot,               "csched: unit_hot")
 
 /* credit2 specific counters */
 PERFCOUNTER(burn_credits_t2c,       "csched2: burn_credits_t2c")
-- 
2.16.4