From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli, Andrew Cooper, Jan Beulich,
    Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v2 1/3] xen/sched: introduce cpupool_update_node_affinity()
Date: Mon, 15 Aug 2022 13:04:08 +0200
Message-Id: <20220815110410.19872-2-jgross@suse.com>
In-Reply-To: <20220815110410.19872-1-jgross@suse.com>
References: <20220815110410.19872-1-jgross@suse.com>

For updating the node affinities of all domains in a cpupool, add a new
function cpupool_update_node_affinity().

In order to avoid multiple allocations of cpumasks, carve out the memory
allocation and freeing from domain_update_node_affinity() into new helpers,
which can be used by cpupool_update_node_affinity().

Modify domain_update_node_affinity() to take an additional parameter for
passing the allocated memory in, and to allocate and free the memory via
the new helpers in case NULL was passed.

This will help later to pre-allocate the cpumasks in order to avoid
allocations in stop-machine context.
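[Editor's illustration, not part of the patch: a minimal sketch of the two
calling patterns described above, using only names introduced by this patch;
d, d1 and d2 stand for struct domain pointers. Callers updating many domains
pre-allocate the masks once and reuse them, while existing callers keep using
the unchanged domain_update_node_affinity() wrapper, which passes NULL and
lets domain_update_node_aff() allocate and free the masks internally.

    /* Batch update: allocate the cpumasks once, reuse them per domain. */
    struct affinity_masks masks;

    if ( update_node_aff_alloc(&masks) )
    {
        domain_update_node_aff(d1, &masks);    /* no allocation here */
        domain_update_node_aff(d2, &masks);    /* masks are reused */
        update_node_aff_free(&masks);
    }

    /* Single update: the wrapper passes NULL, allocation happens inside. */
    domain_update_node_affinity(d);
]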
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
V2:
- move helpers to core.c (Jan Beulich)
- allocate/free memory in domain_update_node_aff() if NULL was passed in
  (Jan Beulich)
---
 xen/common/sched/core.c    | 54 ++++++++++++++++++++++++++------------
 xen/common/sched/cpupool.c | 39 +++++++++++++++------------
 xen/common/sched/private.h |  7 +++++
 xen/include/xen/sched.h    |  9 ++++++-
 4 files changed, 74 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index ff1ddc7624..085a9dd335 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1824,9 +1824,28 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+bool update_node_aff_alloc(struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    if ( !alloc_cpumask_var(&affinity->hard) )
+        return false;
+    if ( !alloc_cpumask_var(&affinity->soft) )
+    {
+        free_cpumask_var(affinity->hard);
+        return false;
+    }
+
+    return true;
+}
+
+void update_node_aff_free(struct affinity_masks *affinity)
+{
+    free_cpumask_var(affinity->soft);
+    free_cpumask_var(affinity->hard);
+}
+
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity)
+{
+    struct affinity_masks masks = { };
     cpumask_t *dom_affinity;
     const cpumask_t *online;
     struct sched_unit *unit;
@@ -1836,14 +1855,16 @@ void domain_update_node_affinity(struct domain *d)
     if ( !d->vcpu || !d->vcpu[0] )
         return;
 
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
+    if ( !affinity )
     {
-        free_cpumask_var(dom_cpumask);
-        return;
+        affinity = &masks;
+        if ( !update_node_aff_alloc(affinity) )
+            return;
     }
 
+    cpumask_clear(affinity->hard);
+    cpumask_clear(affinity->soft);
+
     online = cpupool_domain_master_cpumask(d);
 
     spin_lock(&d->node_affinity_lock);
@@ -1864,22 +1885,21 @@ void domain_update_node_affinity(struct domain *d)
      */
     for_each_sched_unit ( d, unit )
     {
-        cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-        cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                   unit->cpu_soft_affinity);
+        cpumask_or(affinity->hard, affinity->hard, unit->cpu_hard_affinity);
+        cpumask_or(affinity->soft, affinity->soft, unit->cpu_soft_affinity);
     }
     /* Filter out non-online cpus */
-    cpumask_and(dom_cpumask, dom_cpumask, online);
-    ASSERT(!cpumask_empty(dom_cpumask));
+    cpumask_and(affinity->hard, affinity->hard, online);
+    ASSERT(!cpumask_empty(affinity->hard));
     /* And compute the intersection between hard, online and soft */
-    cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+    cpumask_and(affinity->soft, affinity->soft, affinity->hard);
 
     /*
      * If not empty, the intersection of hard, soft and online is the
      * narrowest set we want. If empty, we fall back to hard&online.
      */
-    dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                   dom_cpumask : dom_cpumask_soft;
+    dom_affinity = cpumask_empty(affinity->soft) ? affinity->hard
+                                                 : affinity->soft;
 
     nodes_clear(d->node_affinity);
     for_each_cpu ( cpu, dom_affinity )
@@ -1888,8 +1908,8 @@ void domain_update_node_affinity(struct domain *d)
 
     spin_unlock(&d->node_affinity_lock);
 
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
+    if ( affinity == &masks )
+        update_node_aff_free(affinity);
 }
 
 typedef long ret_t;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 2afe54f54d..58e082eb4c 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -410,6 +410,25 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static void cpupool_update_node_affinity(const struct cpupool *c)
+{
+    struct affinity_masks masks;
+    struct domain *d;
+
+    if ( !update_node_aff_alloc(&masks) )
+        return;
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain_in_cpupool(d, c)
+        domain_update_node_aff(d, &masks);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    update_node_aff_free(&masks);
+}
+
 /*
  * assign a specific cpu to a cpupool
  * cpupool_lock must be held
@@ -417,7 +436,6 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct domain *d;
     const cpumask_t *cpus;
 
     cpus = sched_get_opt_cpumask(c->gran, cpu);
@@ -442,12 +460,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return 0;
 }
@@ -456,18 +469,14 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
-    struct domain *d;
     int ret;
 
     if ( c != cpupool_cpu_moving )
         return -EADDRNOTAVAIL;
 
-    /*
-     * We need this for scanning the domain list, both in
-     * cpu_disable_scheduler(), and at the bottom of this function.
-     */
     rcu_read_lock(&domlist_read_lock);
     ret = cpu_disable_scheduler(cpu);
+    rcu_read_unlock(&domlist_read_lock);
 
     rcu_read_lock(&sched_res_rculock);
     cpus = get_sched_res(cpu)->cpus;
@@ -494,11 +503,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return ret;
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a870320146..38251b1f7b 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -593,6 +593,13 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
         cpumask_copy(mask, unit->cpu_hard_affinity);
 }
 
+struct affinity_masks {
+    cpumask_var_t hard;
+    cpumask_var_t soft;
+};
+
+bool update_node_aff_alloc(struct affinity_masks *affinity);
+void update_node_aff_free(struct affinity_masks *affinity);
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index e2b3b6daa3..666264b8c3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -663,8 +663,15 @@ static inline void get_knownalive_domain(struct domain *d)
     ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
 }
 
+struct affinity_masks;
+
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
-void domain_update_node_affinity(struct domain *d);
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity);
+
+static inline void domain_update_node_affinity(struct domain *d)
+{
+    domain_update_node_aff(d, NULL);
+}
 
 /*
  * To be implemented by each architecture, sanity checking the configuration
-- 
2.35.3