From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
    Jan Beulich, Dario Faggioli, Roger Pau Monné
Date: Mon, 6 May 2019 08:56:23 +0200
Message-Id: <20190506065644.7415-25-jgross@suse.com>
In-Reply-To: <20190506065644.7415-1-jgross@suse.com>
References: <20190506065644.7415-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC V2 24/45] xen: let vcpu_create() select processor

Today there are two distinct scenarios for vcpu_create(): creation of
idle-domain vcpus, where vcpuid == processor, and creation of "normal"
domain vcpus (including dom0), where the caller selects the initial
processor in a round-robin fashion over the allowed processors
("allowed" being determined by cpupool and affinities).

Instead of passing the initial processor to vcpu_create() and on to
sched_init_vcpu(), let sched_init_vcpu() do the processor selection
itself. To support dom0 vcpu creation, use the domain's node_affinity
as the basis for selecting the processors. User domains initially have
all nodes set, so their behavior doesn't change compared to today.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Andrew Cooper
---
RFC V2: add ASSERT(), modify error message (Andrew Cooper)
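For illustration only (not part of the patch): the selection scheme
described above, condensed into a small standalone C model. The cpumask
handling is reduced to a plain 64-bit mask, there is no locking, and the
names select_initial_cpu(), mask_cycle() and struct domain_model are
invented stand-ins that only mirror sched_select_initial_cpu() and
cpumask_cycle() from the schedule.c hunk below.

/* Standalone model of the initial pcpu selection introduced below. */
#include <stdint.h>
#include <stdio.h>

#define MAX_CPUS 64

struct domain_model {
    uint64_t node_cpus;    /* CPUs covered by the domain's node_affinity */
    uint64_t pool_cpus;    /* CPUs in the domain's cpupool */
    int prev_vcpu_cpu;     /* processor chosen for the previous vcpu */
};

/* First set bit strictly after 'start', wrapping around (cf. cpumask_cycle). */
static int mask_cycle(uint64_t mask, int start)
{
    for ( int i = 1; i <= MAX_CPUS; i++ )
    {
        int cpu = (start + i) % MAX_CPUS;

        if ( mask & (UINT64_C(1) << cpu) )
            return cpu;
    }
    return -1; /* empty mask */
}

static int select_initial_cpu(const struct domain_model *d, int vcpu_id)
{
    /* Prefer CPUs matching the node affinity, restricted to the cpupool. */
    uint64_t cpus = d->node_cpus & d->pool_cpus;

    /* Fall back to the whole cpupool if the intersection is empty. */
    if ( !cpus )
        cpus = d->pool_cpus;

    /* vcpu 0 gets the first allowed CPU ... */
    if ( vcpu_id == 0 )
        return mask_cycle(cpus, MAX_CPUS - 1);

    /* ... later vcpus continue round-robin from their predecessor. */
    return mask_cycle(cpus, d->prev_vcpu_cpu);
}

int main(void)
{
    /* Node affinity covers CPUs 4-7; the cpupool covers CPUs 0-7. */
    struct domain_model d = { 0xf0, 0xff, 0 };

    for ( int i = 0; i < 6; i++ )
    {
        d.prev_vcpu_cpu = select_initial_cpu(&d, i);
        printf("vcpu%d -> pcpu %d\n", i, d.prev_vcpu_cpu);
    }
    return 0; /* prints 4 5 6 7 4 5: placement wraps within the affine set */
}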
---
 xen/arch/arm/domain_build.c      | 13 ++++++-------
 xen/arch/x86/dom0_build.c        | 10 +++-------
 xen/arch/x86/hvm/dom0_build.c    |  9 ++-------
 xen/arch/x86/pv/dom0_build.c     | 10 ++--------
 xen/common/domain.c              |  5 ++---
 xen/common/domctl.c              | 10 ++--------
 xen/common/schedule.c            | 34 +++++++++++++++++++++++++++++++---
 xen/include/asm-x86/dom0_build.h |  3 +--
 xen/include/xen/domain.h         |  3 +--
 xen/include/xen/sched.h          |  2 +-
 10 files changed, 51 insertions(+), 48 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d9836779d1..86a6e4bf7b 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -80,7 +80,7 @@ unsigned int __init dom0_max_vcpus(void)
 
 struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
 {
-    return vcpu_create(dom0, 0, 0);
+    return vcpu_create(dom0, 0);
 }
 
 static unsigned int __init get_allocation_size(paddr_t size)
@@ -1923,7 +1923,7 @@ static void __init find_gnttab_region(struct domain *d,
 
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
-    int i, cpu;
+    int i;
     struct vcpu *v = d->vcpu[0];
     struct cpu_user_regs *regs = &v->arch.cpu_info->guest_cpu_user_regs;
 
@@ -1986,12 +1986,11 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
     }
 #endif
 
-    for ( i = 1, cpu = 0; i < d->max_vcpus; i++ )
+    for ( i = 1; i < d->max_vcpus; i++ )
     {
-        cpu = cpumask_cycle(cpu, &cpu_online_map);
-        if ( vcpu_create(d, i, cpu) == NULL )
+        if ( vcpu_create(d, i) == NULL )
         {
-            printk("Failed to allocate dom0 vcpu %d on pcpu %d\n", i, cpu);
+            printk("Failed to allocate d0v%u\n", i);
             break;
         }
 
@@ -2026,7 +2025,7 @@ static int __init construct_domU(struct domain *d,
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
 
-    if ( vcpu_create(d, 0, 0) == NULL )
+    if ( vcpu_create(d, 0) == NULL )
         return -ENOMEM;
     d->max_pages = ~0U;
 
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 73f5407b0d..e550db8b03 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -198,12 +198,9 @@ custom_param("dom0_nodes", parse_dom0_nodes);
 
 static cpumask_t __initdata dom0_cpus;
 
-struct vcpu *__init dom0_setup_vcpu(struct domain *d,
-                                    unsigned int vcpu_id,
-                                    unsigned int prev_cpu)
+struct vcpu *__init dom0_setup_vcpu(struct domain *d, unsigned int vcpu_id)
 {
-    unsigned int cpu = cpumask_cycle(prev_cpu, &dom0_cpus);
-    struct vcpu *v = vcpu_create(d, vcpu_id, cpu);
+    struct vcpu *v = vcpu_create(d, vcpu_id);
 
     if ( v )
     {
@@ -273,8 +270,7 @@ struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
     dom0->node_affinity = dom0_nodes;
     dom0->auto_node_affinity = !dom0_nr_pxms;
 
-    return dom0_setup_vcpu(dom0, 0,
-                           cpumask_last(&dom0_cpus) /* so it wraps around to first pcpu */);
+    return dom0_setup_vcpu(dom0, 0);
 }
 
 #ifdef CONFIG_SHADOW_PAGING
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index aa599f09ef..15166bbaa9 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -614,7 +614,7 @@ static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
                                  paddr_t start_info)
 {
     struct vcpu *v = d->vcpu[0];
-    unsigned int cpu = v->processor, i;
+    unsigned int i;
     int rc;
     /*
      * This sets the vCPU state according to the state described in
@@ -636,12 +636,7 @@ static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
     };
 
     for ( i = 1; i < d->max_vcpus; i++ )
-    {
-        const struct vcpu *p = dom0_setup_vcpu(d, i, cpu);
-
-        if ( p )
-            cpu = p->processor;
-    }
+        dom0_setup_vcpu(d, i);
 
     domain_update_node_affinity(d);
 
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index cef2d42254..800b3e6b7d 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -285,7 +285,7 @@ int __init dom0_construct_pv(struct domain *d,
                              module_t *initrd,
                              char *cmdline)
 {
-    int i, cpu, rc, compatible, order, machine;
+    int i, rc, compatible, order, machine;
     struct cpu_user_regs *regs;
     unsigned long pfn, mfn;
     unsigned long nr_pages;
@@ -693,14 +693,8 @@ int __init dom0_construct_pv(struct domain *d,
 
     printk("Dom%u has maximum %u VCPUs\n", d->domain_id, d->max_vcpus);
 
-    cpu = v->processor;
     for ( i = 1; i < d->max_vcpus; i++ )
-    {
-        const struct vcpu *p = dom0_setup_vcpu(d, i, cpu);
-
-        if ( p )
-            cpu = p->processor;
-    }
+        dom0_setup_vcpu(d, i);
 
     domain_update_node_affinity(d);
     d->arch.paging.mode = 0;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 1c0abda66f..78a838fab3 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -129,8 +129,7 @@ static void vcpu_destroy(struct vcpu *v)
     free_vcpu_struct(v);
 }
 
-struct vcpu *vcpu_create(
-    struct domain *d, unsigned int vcpu_id, unsigned int cpu_id)
+struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
 {
     struct vcpu *v;
 
@@ -162,7 +161,7 @@ struct vcpu *vcpu_create(
         init_waitqueue_vcpu(v);
     }
 
-    if ( sched_init_vcpu(v, cpu_id) != 0 )
+    if ( sched_init_vcpu(v) != 0 )
         goto fail_wq;
 
     if ( arch_vcpu_create(v) != 0 )
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 8464713d2b..3f86a336cc 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -540,8 +540,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     case XEN_DOMCTL_max_vcpus:
     {
-        unsigned int i, max = op->u.max_vcpus.max, cpu;
-        cpumask_t *online;
+        unsigned int i, max = op->u.max_vcpus.max;
 
         ret = -EINVAL;
         if ( (d == current->domain) || /* no domain_pause() */
@@ -552,18 +551,13 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         domain_pause(d);
 
         ret = -ENOMEM;
-        online = cpupool_domain_cpumask(d);
 
         for ( i = 0; i < max; i++ )
         {
            if ( d->vcpu[i] != NULL )
                continue;
 
-            cpu = (i == 0) ?
-                cpumask_any(online) :
-                cpumask_cycle(d->vcpu[i-1]->processor, online);
-
-            if ( vcpu_create(d, i, cpu) == NULL )
+            if ( vcpu_create(d, i) == NULL )
                 goto maxvcpu_out;
         }
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index dfd261d029..9c54811e86 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -314,14 +314,42 @@ static struct sched_item *sched_alloc_item(struct vcpu *v)
     return NULL;
 }
 
-int sched_init_vcpu(struct vcpu *v, unsigned int processor)
+static unsigned int sched_select_initial_cpu(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    nodeid_t node;
+    cpumask_t cpus;
+
+    cpumask_clear(&cpus);
+    for_each_node_mask ( node, d->node_affinity )
+        cpumask_or(&cpus, &cpus, &node_to_cpumask(node));
+    cpumask_and(&cpus, &cpus, cpupool_domain_cpumask(d));
+    if ( cpumask_empty(&cpus) )
+        cpumask_copy(&cpus, cpupool_domain_cpumask(d));
+
+    if ( v->vcpu_id == 0 )
+        return cpumask_first(&cpus);
+
+    /* We can rely on previous vcpu being available. */
+    ASSERT(!is_idle_domain(d));
+
+    return cpumask_cycle(d->vcpu[v->vcpu_id - 1]->processor, &cpus);
+}
+
+int sched_init_vcpu(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct sched_item *item;
+    unsigned int processor;
 
     if ( (item = sched_alloc_item(v)) == NULL )
         return 1;
 
+    if ( is_idle_domain(d) )
+        processor = v->vcpu_id;
+    else
+        processor = sched_select_initial_cpu(v);
+
     sched_set_res(item, per_cpu(sched_res, processor));
 
     /* Initialise the per-vcpu timers. */
@@ -1673,7 +1701,7 @@ static int cpu_schedule_up(unsigned int cpu)
         return 0;
 
     if ( idle_vcpu[cpu] == NULL )
-        vcpu_create(idle_vcpu[0]->domain, cpu, cpu);
+        vcpu_create(idle_vcpu[0]->domain, cpu);
     else
     {
         struct vcpu *idle = idle_vcpu[cpu];
@@ -1867,7 +1895,7 @@ void __init scheduler_init(void)
     BUG_ON(nr_cpu_ids > ARRAY_SIZE(idle_vcpu));
     idle_domain->vcpu = idle_vcpu;
     idle_domain->max_vcpus = nr_cpu_ids;
-    if ( vcpu_create(idle_domain, 0, 0) == NULL )
+    if ( vcpu_create(idle_domain, 0) == NULL )
         BUG();
     this_cpu(sched_res)->curr = idle_vcpu[0]->sched_item;
     this_cpu(sched_res)->sched_priv = sched_alloc_pdata(&ops, 0);
diff --git a/xen/include/asm-x86/dom0_build.h b/xen/include/asm-x86/dom0_build.h
index 33a5483739..3eb4b036e1 100644
--- a/xen/include/asm-x86/dom0_build.h
+++ b/xen/include/asm-x86/dom0_build.h
@@ -11,8 +11,7 @@ extern unsigned int dom0_memflags;
 unsigned long dom0_compute_nr_pages(struct domain *d,
                                     struct elf_dom_parms *parms,
                                     unsigned long initrd_len);
-struct vcpu *dom0_setup_vcpu(struct domain *d, unsigned int vcpu_id,
-                             unsigned int cpu);
+struct vcpu *dom0_setup_vcpu(struct domain *d, unsigned int vcpu_id);
 int dom0_setup_permissions(struct domain *d);
 
 int dom0_construct_pv(struct domain *d, const module_t *image,
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index d1bfc82f57..a6e929685c 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -13,8 +13,7 @@ typedef union {
     struct compat_vcpu_guest_context *cmp;
 } vcpu_guest_context_u __attribute__((__transparent_union__));
 
-struct vcpu *vcpu_create(
-    struct domain *d, unsigned int vcpu_id, unsigned int cpu_id);
+struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id);
 
 unsigned int dom0_max_vcpus(void);
 struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index da117365af..8052f98780 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -663,7 +663,7 @@ void __domain_crash(struct domain *d);
 void noreturn asm_domain_crash_synchronous(unsigned long addr);
 
 void scheduler_init(void);
-int sched_init_vcpu(struct vcpu *v, unsigned int processor);
+int sched_init_vcpu(struct vcpu *v);
 void sched_destroy_vcpu(struct vcpu *v);
 int sched_init_domain(struct domain *d, int poolid);
 void sched_destroy_domain(struct domain *d);
-- 
2.16.4
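P.S. (editor's illustration, not part of the patch): the caller-visible
effect of the change, with both snippets lifted from the
arm/domain_build.c hunk above.

    /* Before this patch: each creation site picked and cycled a pcpu itself. */
    cpu = cpumask_cycle(cpu, &cpu_online_map);
    if ( vcpu_create(d, i, cpu) == NULL )
        ...

    /* After this patch: placement happens inside sched_init_vcpu(). */
    if ( vcpu_create(d, i) == NULL )
        ...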