From nobody Tue May 21 08:51:02 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Henry.Wang@arm.com, Juergen Gross, George Dunlap, Dario Faggioli, Andrew Cooper, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v4 1/3] xen/sched: introduce cpupool_update_node_affinity()
Date: Mon, 5 Sep 2022 09:00:03 +0200
Message-Id: <20220905070005.16788-2-jgross@suse.com>
In-Reply-To: <20220905070005.16788-1-jgross@suse.com>
References: <20220905070005.16788-1-jgross@suse.com>

For updating the node affinities of all domains in a cpupool, add a new
function cpupool_update_node_affinity().

In order to avoid multiple allocations of cpumasks, carve out memory
allocation and freeing from domain_update_node_affinity() into new
helpers, which can be used by cpupool_update_node_affinity().

Modify domain_update_node_affinity() to take an additional parameter for
passing the allocated memory in, and to allocate and free the memory via
the new helpers in case NULL was passed.

This will help later to pre-allocate the cpumasks in order to avoid
allocations in stop-machine context.
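The "optional pre-allocated masks" convention described above (allocate locally when the caller passes NULL, and free only what was locally allocated) can be sketched in isolation. This is a minimal stand-alone illustration, not Xen code: the helper names mirror the patch, but `cpumask_var_t` is replaced by a plain heap-allocated word so the sketch compiles outside the hypervisor.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct affinity_masks {
    unsigned long *hard;   /* stand-in for cpumask_var_t */
    unsigned long *soft;
};

static bool alloc_affinity_masks(struct affinity_masks *affinity)
{
    affinity->hard = calloc(1, sizeof(*affinity->hard));
    if ( !affinity->hard )
        return false;
    affinity->soft = calloc(1, sizeof(*affinity->soft));
    if ( !affinity->soft )
    {
        free(affinity->hard);   /* unwind the first allocation */
        return false;
    }
    return true;
}

static void free_affinity_masks(struct affinity_masks *affinity)
{
    free(affinity->soft);
    free(affinity->hard);
}

/* Caller may pass pre-allocated masks; NULL means "allocate locally". */
static bool update_node_aff(struct affinity_masks *affinity)
{
    struct affinity_masks masks;

    if ( !affinity )
    {
        affinity = &masks;
        if ( !alloc_affinity_masks(affinity) )
            return false;
    }

    /* Pre-allocated masks may hold stale bits, so clear before use. */
    *affinity->hard = 0;
    *affinity->soft = 0;

    /* ... accumulate per-unit hard/soft affinities here ... */

    if ( affinity == &masks )   /* free only what we allocated ourselves */
        free_affinity_masks(affinity);
    return true;
}
```

The key design point, visible at the end of `update_node_aff()`, is that ownership stays with whoever allocated: a caller looping over many domains can allocate once, pass the same masks in repeatedly, and free once.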
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
V2:
- move helpers to core.c (Jan Beulich)
- allocate/free memory in domain_update_node_aff() if NULL was passed in (Jan Beulich)
V3:
- remove pointless initializer (Jan Beulich)
V4:
- rename alloc/free helpers (Andrew Cooper)
---
 xen/common/sched/core.c    | 54 ++++++++++++++++++++++++++------------
 xen/common/sched/cpupool.c | 39 +++++++++++++++------------
 xen/common/sched/private.h |  7 +++++
 xen/include/xen/sched.h    |  9 ++++++-
 4 files changed, 74 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index ff1ddc7624..5f1a265889 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1824,9 +1824,28 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+bool alloc_affinity_masks(struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    if ( !alloc_cpumask_var(&affinity->hard) )
+        return false;
+    if ( !alloc_cpumask_var(&affinity->soft) )
+    {
+        free_cpumask_var(affinity->hard);
+        return false;
+    }
+
+    return true;
+}
+
+void free_affinity_masks(struct affinity_masks *affinity)
+{
+    free_cpumask_var(affinity->soft);
+    free_cpumask_var(affinity->hard);
+}
+
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity)
+{
+    struct affinity_masks masks;
     cpumask_t *dom_affinity;
     const cpumask_t *online;
     struct sched_unit *unit;
@@ -1836,14 +1855,16 @@ void domain_update_node_affinity(struct domain *d)
     if ( !d->vcpu || !d->vcpu[0] )
         return;
 
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
+    if ( !affinity )
     {
-        free_cpumask_var(dom_cpumask);
-        return;
+        affinity = &masks;
+        if ( !alloc_affinity_masks(affinity) )
+            return;
     }
 
+    cpumask_clear(affinity->hard);
+    cpumask_clear(affinity->soft);
+
     online = cpupool_domain_master_cpumask(d);
 
     spin_lock(&d->node_affinity_lock);
@@ -1864,22 +1885,21 @@ void domain_update_node_affinity(struct domain *d)
      */
     for_each_sched_unit ( d, unit )
     {
-        cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-        cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                   unit->cpu_soft_affinity);
+        cpumask_or(affinity->hard, affinity->hard, unit->cpu_hard_affinity);
+        cpumask_or(affinity->soft, affinity->soft, unit->cpu_soft_affinity);
     }
 
     /* Filter out non-online cpus */
-    cpumask_and(dom_cpumask, dom_cpumask, online);
-    ASSERT(!cpumask_empty(dom_cpumask));
+    cpumask_and(affinity->hard, affinity->hard, online);
+    ASSERT(!cpumask_empty(affinity->hard));
 
     /* And compute the intersection between hard, online and soft */
-    cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+    cpumask_and(affinity->soft, affinity->soft, affinity->hard);
 
     /*
      * If not empty, the intersection of hard, soft and online is the
      * narrowest set we want. If empty, we fall back to hard&online.
      */
-    dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                       dom_cpumask : dom_cpumask_soft;
+    dom_affinity = cpumask_empty(affinity->soft) ? affinity->hard
+                                                 : affinity->soft;
 
     nodes_clear(d->node_affinity);
     for_each_cpu ( cpu, dom_affinity )
@@ -1888,8 +1908,8 @@ void domain_update_node_affinity(struct domain *d)
 
     spin_unlock(&d->node_affinity_lock);
 
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
+    if ( affinity == &masks )
+        free_affinity_masks(affinity);
 }
 
 typedef long ret_t;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 2afe54f54d..aac3a269b7 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -410,6 +410,25 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static void cpupool_update_node_affinity(const struct cpupool *c)
+{
+    struct affinity_masks masks;
+    struct domain *d;
+
+    if ( !alloc_affinity_masks(&masks) )
+        return;
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain_in_cpupool(d, c)
+        domain_update_node_aff(d, &masks);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    free_affinity_masks(&masks);
+}
+
 /*
  * assign a specific cpu to a cpupool
  * cpupool_lock must be held
@@ -417,7 +436,6 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct domain *d;
     const cpumask_t *cpus;
 
     cpus = sched_get_opt_cpumask(c->gran, cpu);
@@ -442,12 +460,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return 0;
 }
@@ -456,18 +469,14 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
-    struct domain *d;
     int ret;
 
     if ( c != cpupool_cpu_moving )
         return -EADDRNOTAVAIL;
 
-    /*
-     * We need this for scanning the domain list, both in
-     * cpu_disable_scheduler(), and at the bottom of this function.
-     */
     rcu_read_lock(&domlist_read_lock);
     ret = cpu_disable_scheduler(cpu);
+    rcu_read_unlock(&domlist_read_lock);
 
     rcu_read_lock(&sched_res_rculock);
     cpus = get_sched_res(cpu)->cpus;
@@ -494,11 +503,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return ret;
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a870320146..2b04b01a0c 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -593,6 +593,13 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
         cpumask_copy(mask, unit->cpu_hard_affinity);
 }
 
+struct affinity_masks {
+    cpumask_var_t hard;
+    cpumask_var_t soft;
+};
+
+bool alloc_affinity_masks(struct affinity_masks *affinity);
+void free_affinity_masks(struct affinity_masks *affinity);
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 1cf629e7ec..81f1fcba2a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -666,8 +666,15 @@ static inline void get_knownalive_domain(struct domain *d)
     ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
 }
 
+struct affinity_masks;
+
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
-void domain_update_node_affinity(struct domain *d);
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity);
+
+static inline void domain_update_node_affinity(struct domain *d)
+{
+    domain_update_node_aff(d, NULL);
+}
 
 /*
  * To be implemented by each architecture, sanity checking the configuration
-- 
2.35.3

From nobody Tue May 21 08:51:02 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Henry.Wang@arm.com, Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH v4 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
Date: Mon, 5 Sep 2022 09:00:04 +0200
Message-Id: <20220905070005.16788-3-jgross@suse.com>
In-Reply-To: <20220905070005.16788-1-jgross@suse.com>
References: <20220905070005.16788-1-jgross@suse.com>

In order to prepare not allocating or freeing memory from
schedule_cpu_rm(), move this functionality to dedicated functions.

For now call those functions from schedule_cpu_rm().

No change of behavior expected.
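The carved-out allocation helper in this patch has one subtle part: a flexible-array struct whose slots are allocated in a loop, with partial allocations unwound on failure. A minimal stand-alone sketch of that pattern, using plain `malloc`/`free` in place of the Xen allocators and an `int` slot in place of `struct sched_resource` (all names here are illustrative, not Xen's):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for struct cpu_rm_data with a flexible array member. */
struct rm_data {
    unsigned int nr;
    int *slots[];          /* stand-in for struct sched_resource *sr[] */
};

static struct rm_data *alloc_rm_data(unsigned int nr)
{
    struct rm_data *data;
    unsigned int idx;

    /* One allocation covers the header plus all nr pointer slots. */
    data = malloc(sizeof(*data) + nr * sizeof(data->slots[0]));
    if ( !data )
        return NULL;
    data->nr = nr;

    for ( idx = 0; idx < nr; idx++ )
    {
        data->slots[idx] = malloc(sizeof(int));
        if ( !data->slots[idx] )
        {
            /* Unwind everything allocated so far, then the header. */
            while ( idx > 0 )
                free(data->slots[--idx]);
            free(data);
            return NULL;
        }
        *data->slots[idx] = (int)idx;
    }

    return data;
}

static void free_rm_data(struct rm_data *data)
{
    unsigned int idx;

    for ( idx = 0; idx < data->nr; idx++ )
        free(data->slots[idx]);
    free(data);
}
```

Splitting allocation (`alloc_rm_data`) from use, as the patch does for `schedule_cpu_rm()`, means the fallible work can all happen up front in a context where allocation is legal, and the later consumer only receives a fully-populated structure or nothing.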
Signed-off-by: Juergen Gross
---
V2:
- add const (Jan Beulich)
- use "unsigned int" for loop index (Jan Beulich)
- use xmalloc_flex_struct() (Jan Beulich)
- use XFREE() (Jan Beulich)
- hold rcu lock longer (Jan Beulich)
- add ASSERT() (Jan Beulich)
V3:
- added comment for schedule_cpu_rm_alloc() (Jan Beulich)
V4:
- rename alloc/free helpers and make them public (Andrew Cooper)
- rephrase comment (Andrew Cooper)
---
 xen/common/sched/core.c    | 143 ++++++++++++++++++++++---------------
 xen/common/sched/private.h |  11 +++
 2 files changed, 98 insertions(+), 56 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 5f1a265889..588826cdbd 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3237,6 +3237,75 @@ out:
     return ret;
 }
 
+/*
+ * Allocate all memory needed for free_cpu_rm_data(), as allocations cannot
+ * be made in stop_machine() context.
+ *
+ * Between alloc_cpu_rm_data() and the real cpu removal action the relevant
+ * contents of struct sched_resource can't change, as the cpu in question is
+ * locked against any other movement to or from cpupools, and the data copied
+ * by alloc_cpu_rm_data() is modified only in case the cpu in question is
+ * being moved from or to a cpupool.
+ */
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+{
+    struct cpu_rm_data *data;
+    const struct sched_resource *sr;
+    unsigned int idx;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
+    if ( !data )
+        goto out;
+
+    data->old_ops = sr->scheduler;
+    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
+    data->ppriv_old = sr->sched_priv;
+
+    for ( idx = 0; idx < sr->granularity - 1; idx++ )
+    {
+        data->sr[idx] = sched_alloc_res();
+        if ( data->sr[idx] )
+        {
+            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
+            if ( !data->sr[idx]->sched_unit_idle )
+            {
+                sched_res_free(&data->sr[idx]->rcu);
+                data->sr[idx] = NULL;
+            }
+        }
+        if ( !data->sr[idx] )
+        {
+            while ( idx > 0 )
+                sched_res_free(&data->sr[--idx]->rcu);
+            XFREE(data);
+            goto out;
+        }
+
+        data->sr[idx]->curr = data->sr[idx]->sched_unit_idle;
+        data->sr[idx]->scheduler = &sched_idle_ops;
+        data->sr[idx]->granularity = 1;
+
+        /* We want the lock not to change when replacing the resource. */
+        data->sr[idx]->schedule_lock = sr->schedule_lock;
+    }
+
+ out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return data;
+}
+
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
+{
+    sched_free_udata(mem->old_ops, mem->vpriv_old);
+    sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+
+    xfree(mem);
+}
+
 /*
  * Remove a pCPU from its cpupool. Its scheduler becomes &sched_idle_ops
  * (the idle scheduler).
@@ -3245,53 +3314,23 @@ out:
  */
 int schedule_cpu_rm(unsigned int cpu)
 {
-    void *ppriv_old, *vpriv_old;
-    struct sched_resource *sr, **sr_new = NULL;
+    struct sched_resource *sr;
+    struct cpu_rm_data *data;
     struct sched_unit *unit;
-    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;
-    int idx, ret = -ENOMEM;
+    int idx = 0;
     unsigned int cpu_iter;
 
+    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        return -ENOMEM;
+
     rcu_read_lock(&sched_res_rculock);
 
     sr = get_sched_res(cpu);
-    old_ops = sr->scheduler;
-
-    if ( sr->granularity > 1 )
-    {
-        sr_new = xmalloc_array(struct sched_resource *, sr->granularity - 1);
-        if ( !sr_new )
-            goto out;
-        for ( idx = 0; idx < sr->granularity - 1; idx++ )
-        {
-            sr_new[idx] = sched_alloc_res();
-            if ( sr_new[idx] )
-            {
-                sr_new[idx]->sched_unit_idle = sched_alloc_unit_mem();
-                if ( !sr_new[idx]->sched_unit_idle )
-                {
-                    sched_res_free(&sr_new[idx]->rcu);
-                    sr_new[idx] = NULL;
-                }
-            }
-            if ( !sr_new[idx] )
-            {
-                for ( idx--; idx >= 0; idx-- )
-                    sched_res_free(&sr_new[idx]->rcu);
-                goto out;
-            }
-            sr_new[idx]->curr = sr_new[idx]->sched_unit_idle;
-            sr_new[idx]->scheduler = &sched_idle_ops;
-            sr_new[idx]->granularity = 1;
-
-            /* We want the lock not to change when replacing the resource. */
-            sr_new[idx]->schedule_lock = sr->schedule_lock;
-        }
-    }
-
-    ret = 0;
+    ASSERT(sr->granularity);
     ASSERT(sr->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
@@ -3299,10 +3338,6 @@ int schedule_cpu_rm(unsigned int cpu)
     /* See comment in schedule_cpu_add() regarding lock switching. */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
-    ppriv_old = sr->sched_priv;
-
-    idx = 0;
     for_each_cpu ( cpu_iter, sr->cpus )
     {
         per_cpu(sched_res_idx, cpu_iter) = 0;
@@ -3316,27 +3351,27 @@ int schedule_cpu_rm(unsigned int cpu)
         else
         {
             /* Initialize unit. */
-            unit = sr_new[idx]->sched_unit_idle;
-            unit->res = sr_new[idx];
+            unit = data->sr[idx]->sched_unit_idle;
+            unit->res = data->sr[idx];
             unit->is_running = true;
             sched_unit_add_vcpu(unit, idle_vcpu[cpu_iter]);
             sched_domain_insert_unit(unit, idle_vcpu[cpu_iter]->domain);
 
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
-            cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, data->sr[idx]->cpus);
             cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
-            init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
+            init_timer(&data->sr[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
 
             /* Last resource initializations and insert resource pointer. */
-            sr_new[idx]->master_cpu = cpu_iter;
-            set_sched_res(cpu_iter, sr_new[idx]);
+            data->sr[idx]->master_cpu = cpu_iter;
+            set_sched_res(cpu_iter, data->sr[idx]);
 
             /* Last action: set the new lock pointer. */
             smp_mb();
-            sr_new[idx]->schedule_lock = &sched_free_cpu_lock;
+            data->sr[idx]->schedule_lock = &sched_free_cpu_lock;
 
             idx++;
         }
@@ -3352,16 +3387,12 @@ int schedule_cpu_rm(unsigned int cpu)
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock_irqrestore(old_lock, flags);
 
-    sched_deinit_pdata(old_ops, ppriv_old, cpu);
-
-    sched_free_udata(old_ops, vpriv_old);
-    sched_free_pdata(old_ops, ppriv_old, cpu);
+    sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
-out:
     rcu_read_unlock(&sched_res_rculock);
-    xfree(sr_new);
+    free_cpu_rm_data(data, cpu);
 
-    return ret;
+    return 0;
 }
 
 struct scheduler *scheduler_get_default(void)
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 2b04b01a0c..e286849a13 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -600,6 +600,15 @@ struct affinity_masks {
 
 bool alloc_affinity_masks(struct affinity_masks *affinity);
 void free_affinity_masks(struct affinity_masks *affinity);
+
+/* Memory allocation related data for schedule_cpu_rm(). */
+struct cpu_rm_data {
+    const struct scheduler *old_ops;
+    void *ppriv_old;
+    void *vpriv_old;
+    struct sched_resource *sr[];
+};
+
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
@@ -608,6 +617,8 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
 int schedule_cpu_rm(unsigned int cpu);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
-- 
2.35.3

From nobody Tue May 21 08:51:02 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Henry.Wang@arm.com, Juergen Gross, George Dunlap, Dario Faggioli, Gao Ruifeng, Jan Beulich
Subject: [PATCH v4 3/3] xen/sched: fix cpu hotplug
Date: Mon, 5 Sep 2022 09:00:05 +0200
Message-Id: <20220905070005.16788-4-jgross@suse.com>
In-Reply-To: <20220905070005.16788-1-jgross@suse.com>
References: <20220905070005.16788-1-jgross@suse.com>

Cpu unplugging is calling schedule_cpu_rm() via stop_machine_run() with
interrupts disabled, thus any memory allocation or freeing must be
avoided.

Since commit 5047cd1d5dea ("xen/common: Use enhanced ASSERT_ALLOC_CONTEXT
in xmalloc()") this restriction is being enforced via an assertion, which
will now fail.

Fix this by allocating needed memory before entering stop_machine_run()
and freeing any memory only after having finished stop_machine_run().
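The fix described above follows a general pattern: do all fallible allocation before entering the no-allocation (stop-machine-like) region, do only pre-allocated work inside it, and free only after leaving it. A minimal stand-alone sketch, with a boolean flag standing in for the interrupts-disabled context and `abort()` standing in for the `ASSERT_ALLOC_CONTEXT` check (all names here are illustrative, not Xen's):

```c
#include <assert.h>
#include <stdlib.h>
#include <stdbool.h>

static bool in_atomic_context;    /* models running inside stop_machine_run() */

/* Models an allocator that forbids allocation in atomic context. */
static void *careful_alloc(size_t n)
{
    if ( in_atomic_context )
        abort();                  /* analogous to the ASSERT firing */
    return malloc(n);
}

/* Work done with "interrupts disabled": may USE memory, never allocate it. */
static int remove_cpu_atomic(int *scratch)
{
    *scratch = 42;                /* operate on pre-allocated scratch space */
    return 0;
}

static int remove_cpu(void)
{
    int ret;
    int *scratch = careful_alloc(sizeof(*scratch));  /* allocate BEFORE */

    if ( !scratch )
        return -1;

    in_atomic_context = true;     /* enter the restricted region */
    ret = remove_cpu_atomic(scratch);
    in_atomic_context = false;    /* leave it */

    free(scratch);                /* free only AFTER leaving the region */
    return ret;
}
```

Had `remove_cpu_atomic()` called `careful_alloc()` itself, the program would abort, which is exactly the failure mode the commit message describes for `schedule_cpu_rm()` under the enhanced `ASSERT_ALLOC_CONTEXT`.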
Fixes: 1ec410112cdd ("xen/sched: support differing granularity in schedule_= cpu_[add/rm]()") Reported-by: Gao Ruifeng Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich --- V2: - move affinity mask allocation into schedule_cpu_rm_alloc() (Jan Beulich) --- xen/common/sched/core.c | 25 +++++++++++--- xen/common/sched/cpupool.c | 69 +++++++++++++++++++++++++++++--------- xen/common/sched/private.h | 5 +-- 3 files changed, 77 insertions(+), 22 deletions(-) diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c index 588826cdbd..acdf073c3f 100644 --- a/xen/common/sched/core.c +++ b/xen/common/sched/core.c @@ -3247,7 +3247,7 @@ out: * by alloc_cpu_rm_data() is modified only in case the cpu in question is * being moved from or to a cpupool. */ -struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu) +struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc) { struct cpu_rm_data *data; const struct sched_resource *sr; @@ -3260,6 +3260,17 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int c= pu) if ( !data ) goto out; =20 + if ( aff_alloc ) + { + if ( !alloc_affinity_masks(&data->affinity) ) + { + XFREE(data); + goto out; + } + } + else + memset(&data->affinity, 0, sizeof(data->affinity)); + data->old_ops =3D sr->scheduler; data->vpriv_old =3D idle_vcpu[cpu]->sched_unit->priv; data->ppriv_old =3D sr->sched_priv; @@ -3280,6 +3291,7 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cp= u) { while ( idx > 0 ) sched_res_free(&data->sr[--idx]->rcu); + free_affinity_masks(&data->affinity); XFREE(data); goto out; } @@ -3302,6 +3314,7 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsign= ed int cpu) { sched_free_udata(mem->old_ops, mem->vpriv_old); sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu); + free_affinity_masks(&mem->affinity); =20 xfree(mem); } @@ -3312,17 +3325,18 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsi= gned int cpu) * The cpu is already marked as "free" and not valid any longer for its * cpupool. 
  */
-int schedule_cpu_rm(unsigned int cpu)
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *data)
 {
     struct sched_resource *sr;
-    struct cpu_rm_data *data;
     struct sched_unit *unit;
     spinlock_t *old_lock;
     unsigned long flags;
     int idx = 0;
     unsigned int cpu_iter;
+    bool freemem = !data;
 
-    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        data = alloc_cpu_rm_data(cpu, false);
     if ( !data )
         return -ENOMEM;
 
@@ -3390,7 +3404,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-    free_cpu_rm_data(data, cpu);
+    if ( freemem )
+        free_cpu_rm_data(data, cpu);
 
     return 0;
 }
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index aac3a269b7..b2c6f520c3 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -411,22 +411,28 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 }
 
 /* Update affinities of all domains in a cpupool. */
-static void cpupool_update_node_affinity(const struct cpupool *c)
+static void cpupool_update_node_affinity(const struct cpupool *c,
+                                         struct affinity_masks *masks)
 {
-    struct affinity_masks masks;
+    struct affinity_masks local_masks;
     struct domain *d;
 
-    if ( !alloc_affinity_masks(&masks) )
-        return;
+    if ( !masks )
+    {
+        if ( !alloc_affinity_masks(&local_masks) )
+            return;
+        masks = &local_masks;
+    }
 
     rcu_read_lock(&domlist_read_lock);
 
     for_each_domain_in_cpupool(d, c)
-        domain_update_node_aff(d, &masks);
+        domain_update_node_aff(d, masks);
 
     rcu_read_unlock(&domlist_read_lock);
 
-    free_affinity_masks(&masks);
+    if ( masks == &local_masks )
+        free_affinity_masks(masks);
 }
 
 /*
@@ -460,15 +466,17 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, NULL);
 
     return 0;
 }
 
-static int cpupool_unassign_cpu_finish(struct cpupool *c)
+static int cpupool_unassign_cpu_finish(struct cpupool *c,
+                                       struct cpu_rm_data *mem)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
+    struct affinity_masks *masks = mem ? &mem->affinity : NULL;
     int ret;
 
     if ( c != cpupool_cpu_moving )
@@ -491,7 +499,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_rm(cpu);
+        ret = schedule_cpu_rm(cpu, mem);
         if ( ret )
             cpumask_andnot(&cpupool_free_cpus, &cpupool_free_cpus, cpus);
         else
@@ -503,7 +511,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, masks);
 
     return ret;
 }
@@ -567,7 +575,7 @@ static long cf_check cpupool_unassign_cpu_helper(void *info)
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
-    ret = cpupool_unassign_cpu_finish(c);
+    ret = cpupool_unassign_cpu_finish(c, NULL);
 
     spin_unlock(&cpupool_lock);
     debugtrace_printk("cpupool_unassign_cpu ret=%ld\n", ret);
@@ -714,7 +722,7 @@ static int cpupool_cpu_add(unsigned int cpu)
  * This function is called in stop_machine context, so we can be sure no
  * non-idle vcpu is active on the system.
 */
-static void cpupool_cpu_remove(unsigned int cpu)
+static void cpupool_cpu_remove(unsigned int cpu, struct cpu_rm_data *mem)
 {
     int ret;
 
@@ -722,7 +730,7 @@ static void cpupool_cpu_remove(unsigned int cpu)
 
     if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
     {
-        ret = cpupool_unassign_cpu_finish(cpupool0);
+        ret = cpupool_unassign_cpu_finish(cpupool0, mem);
         BUG_ON(ret);
     }
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
@@ -788,7 +796,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
     {
         ret = cpupool_unassign_cpu_start(c, master_cpu);
         BUG_ON(ret);
-        ret = cpupool_unassign_cpu_finish(c);
+        ret = cpupool_unassign_cpu_finish(c, NULL);
         BUG_ON(ret);
     }
 }
@@ -1006,12 +1014,24 @@ void cf_check dump_runq(unsigned char key)
 static int cf_check cpu_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
+    static struct cpu_rm_data *mem;
+
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;
 
     switch ( action )
     {
     case CPU_DOWN_FAILED:
+        if ( system_state <= SYS_STATE_active )
+        {
+            if ( mem )
+            {
+                free_cpu_rm_data(mem, cpu);
+                mem = NULL;
+            }
+            rc = cpupool_cpu_add(cpu);
+        }
+        break;
     case CPU_ONLINE:
         if ( system_state <= SYS_STATE_active )
             rc = cpupool_cpu_add(cpu);
@@ -1019,12 +1039,31 @@ static int cf_check cpu_callback(
     case CPU_DOWN_PREPARE:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
+        {
             rc = cpupool_cpu_remove_prologue(cpu);
+            if ( !rc )
+            {
+                ASSERT(!mem);
+                mem = alloc_cpu_rm_data(cpu, true);
+                rc = mem ? 0 : -ENOMEM;
+            }
+        }
         break;
     case CPU_DYING:
         /* Suspend/Resume don't change assignments of cpus to cpupools.
 */
         if ( system_state <= SYS_STATE_active )
-            cpupool_cpu_remove(cpu);
+        {
+            ASSERT(mem);
+            cpupool_cpu_remove(cpu, mem);
+        }
+        break;
+    case CPU_DEAD:
+        if ( system_state <= SYS_STATE_active )
+        {
+            ASSERT(mem);
+            free_cpu_rm_data(mem, cpu);
+            mem = NULL;
+        }
         break;
     case CPU_RESUME_FAILED:
         cpupool_cpu_remove_forced(cpu);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index e286849a13..0126a4bb9e 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -603,6 +603,7 @@ void free_affinity_masks(struct affinity_masks *affinity);
 
 /* Memory allocation related data for schedule_cpu_rm(). */
 struct cpu_rm_data {
+    struct affinity_masks affinity;
     const struct scheduler *old_ops;
     void *ppriv_old;
     void *vpriv_old;
@@ -617,9 +618,9 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc);
 void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
-int schedule_cpu_rm(unsigned int cpu);
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *mem);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);
-- 
2.35.3