From nobody Fri May 3 12:20:54 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()
Date: Tue, 2 Aug 2022 15:27:45 +0200
Message-Id: <20220802132747.22507-2-jgross@suse.com>
In-Reply-To: <20220802132747.22507-1-jgross@suse.com>
References: <20220802132747.22507-1-jgross@suse.com>

Add a new function cpupool_update_node_affinity() for updating the node
affinities of all domains in a cpupool. In order to avoid multiple
allocations of cpumasks, split domain_update_node_affinity() into a
wrapper doing the needed allocations and a work function, which can be
called by cpupool_update_node_affinity(), too.

This will later help to pre-allocate the cpumasks in order to avoid
allocations in stop-machine context.
Signed-off-by: Juergen Gross
---
 xen/common/sched/core.c    | 61 ++++++++++++++++++++-----------------
 xen/common/sched/cpupool.c | 62 +++++++++++++++++++++++++++-----------
 xen/common/sched/private.h |  8 +++++
 3 files changed, 87 insertions(+), 44 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f689b55783..c8d1034d3d 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+void domain_update_node_affinity_noalloc(struct domain *d,
+                                         const cpumask_t *online,
+                                         struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
     cpumask_t *dom_affinity;
-    const cpumask_t *online;
     struct sched_unit *unit;
     unsigned int cpu;
 
-    /* Do we have vcpus already? If not, no need to update node-affinity. */
-    if ( !d->vcpu || !d->vcpu[0] )
-        return;
-
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
-    {
-        free_cpumask_var(dom_cpumask);
-        return;
-    }
-
-    online = cpupool_domain_master_cpumask(d);
-
     spin_lock(&d->node_affinity_lock);
 
     /*
@@ -1830,22 +1816,21 @@ void domain_update_node_affinity(struct domain *d)
      */
     for_each_sched_unit ( d, unit )
     {
-        cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-        cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                   unit->cpu_soft_affinity);
+        cpumask_or(affinity->hard, affinity->hard, unit->cpu_hard_affinity);
+        cpumask_or(affinity->soft, affinity->soft, unit->cpu_soft_affinity);
     }
     /* Filter out non-online cpus */
-    cpumask_and(dom_cpumask, dom_cpumask, online);
-    ASSERT(!cpumask_empty(dom_cpumask));
+    cpumask_and(affinity->hard, affinity->hard, online);
+    ASSERT(!cpumask_empty(affinity->hard));
     /* And compute the intersection between hard, online and soft */
-    cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+    cpumask_and(affinity->soft, affinity->soft, affinity->hard);
 
     /*
      * If not empty, the intersection of hard, soft and online is the
     * narrowest set we want. If empty, we fall back to hard&online.
     */
-    dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                   dom_cpumask : dom_cpumask_soft;
+    dom_affinity = cpumask_empty(affinity->soft) ? affinity->hard
+                                                 : affinity->soft;
 
     nodes_clear(d->node_affinity);
     for_each_cpu ( cpu, dom_affinity )
@@ -1853,9 +1838,31 @@ void domain_update_node_affinity(struct domain *d)
     }
 
     spin_unlock(&d->node_affinity_lock);
+}
+
+void domain_update_node_affinity(struct domain *d)
+{
+    struct affinity_masks masks;
+    const cpumask_t *online;
+
+    /* Do we have vcpus already? If not, no need to update node-affinity. */
+    if ( !d->vcpu || !d->vcpu[0] )
+        return;
+
+    if ( !zalloc_cpumask_var(&masks.hard) )
+        return;
+    if ( !zalloc_cpumask_var(&masks.soft) )
+    {
+        free_cpumask_var(masks.hard);
+        return;
+    }
+
+    online = cpupool_domain_master_cpumask(d);
+
+    domain_update_node_affinity_noalloc(d, online, &masks);
 
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
+    free_cpumask_var(masks.soft);
+    free_cpumask_var(masks.hard);
 }
 
 typedef long ret_t;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 2afe54f54d..1463dcd767 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
+{
+    if ( !alloc_cpumask_var(&masks->hard) )
+        return -ENOMEM;
+    if ( alloc_cpumask_var(&masks->soft) )
+        return 0;
+
+    free_cpumask_var(masks->hard);
+    return -ENOMEM;
+}
+
+static void cpupool_free_affin_masks(struct affinity_masks *masks)
+{
+    free_cpumask_var(masks->soft);
+    free_cpumask_var(masks->hard);
+}
+
+static void cpupool_update_node_affinity(const struct cpupool *c)
+{
+    const cpumask_t *online = c->res_valid;
+    struct affinity_masks masks;
+    struct domain *d;
+
+    if ( cpupool_alloc_affin_masks(&masks) )
+        return;
+
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain_in_cpupool(d, c)
+    {
+        if ( d->vcpu && d->vcpu[0] )
+        {
+            cpumask_clear(masks.hard);
+            cpumask_clear(masks.soft);
+            domain_update_node_affinity_noalloc(d, online, &masks);
+        }
+    }
+    rcu_read_unlock(&domlist_read_lock);
+
+    cpupool_free_affin_masks(&masks);
+}
+
 /*
 * assign a specific cpu to a cpupool
 * cpupool_lock must be held
@@ -417,7 +459,6 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct domain *d;
     const cpumask_t *cpus;
 
     cpus = sched_get_opt_cpumask(c->gran, cpu);
@@ -442,12 +483,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return 0;
 }
@@ -456,18 +492,14 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
-    struct domain *d;
     int ret;
 
     if ( c != cpupool_cpu_moving )
         return -EADDRNOTAVAIL;
 
-    /*
-     * We need this for scanning the domain list, both in
-     * cpu_disable_scheduler(), and at the bottom of this function.
-     */
     rcu_read_lock(&domlist_read_lock);
     ret = cpu_disable_scheduler(cpu);
+    rcu_read_unlock(&domlist_read_lock);
 
     rcu_read_lock(&sched_res_rculock);
     cpus = get_sched_res(cpu)->cpus;
@@ -494,11 +526,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return ret;
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a870320146..de0cf63ce8 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -593,6 +593,14 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
         cpumask_copy(mask, unit->cpu_hard_affinity);
 }
 
+struct affinity_masks {
+    cpumask_var_t hard;
+    cpumask_var_t soft;
+};
+
+void domain_update_node_affinity_noalloc(struct domain *d,
+                                         const cpumask_t *online,
+                                         struct affinity_masks *affinity);
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
-- 
2.35.3

From nobody Fri May 3 12:20:54 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
Date: Tue, 2 Aug 2022 15:27:46 +0200
Message-Id: <20220802132747.22507-3-jgross@suse.com>
In-Reply-To: <20220802132747.22507-1-jgross@suse.com>
References: <20220802132747.22507-1-jgross@suse.com>

In order to prepare for not allocating or freeing memory from
schedule_cpu_rm(), move this functionality to dedicated functions.

For now call those functions from schedule_cpu_rm().

No change of behavior expected.

Signed-off-by: Juergen Gross
---
 xen/common/sched/core.c    | 133 +++++++++++++++++++++----------------
 xen/common/sched/private.h |   8 +++
 2 files changed, 85 insertions(+), 56 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index c8d1034d3d..d6ff4f4921 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3190,6 +3190,66 @@ out:
     return ret;
 }
 
+static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
+{
+    struct cpu_rm_data *data;
+    struct sched_resource *sr;
+    int idx;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+    data = xzalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
+    if ( !data )
+        goto out;
+
+    data->old_ops = sr->scheduler;
+    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
+    data->ppriv_old = sr->sched_priv;
+
+    for ( idx = 0; idx < sr->granularity - 1; idx++ )
+    {
+        data->sr[idx] = sched_alloc_res();
+        if ( data->sr[idx] )
+        {
+            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
+            if ( !data->sr[idx]->sched_unit_idle )
+            {
+                sched_res_free(&data->sr[idx]->rcu);
+                data->sr[idx] = NULL;
+            }
+        }
+        if ( !data->sr[idx] )
+        {
+            for ( idx--; idx >= 0; idx-- )
+                sched_res_free(&data->sr[idx]->rcu);
+            xfree(data);
+            data = NULL;
+            goto out;
+        }
+
+        data->sr[idx]->curr = data->sr[idx]->sched_unit_idle;
+        data->sr[idx]->scheduler = &sched_idle_ops;
+        data->sr[idx]->granularity = 1;
+
+        /* We want the lock not to change when replacing the resource. */
+        data->sr[idx]->schedule_lock = sr->schedule_lock;
+    }
+
+ out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return data;
+}
+
+static void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
+{
+    sched_free_udata(mem->old_ops, mem->vpriv_old);
+    sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+
+    xfree(mem);
+}
+
 /*
 * Remove a pCPU from its cpupool. Its scheduler becomes &sched_idle_ops
 * (the idle scheduler).
@@ -3198,53 +3258,22 @@ out:
 */
 int schedule_cpu_rm(unsigned int cpu)
 {
-    void *ppriv_old, *vpriv_old;
-    struct sched_resource *sr, **sr_new = NULL;
+    struct sched_resource *sr;
+    struct cpu_rm_data *data;
     struct sched_unit *unit;
-    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;
-    int idx, ret = -ENOMEM;
+    int idx = 0;
     unsigned int cpu_iter;
 
+    data = schedule_cpu_rm_alloc(cpu);
+    if ( !data )
+        return -ENOMEM;
+
     rcu_read_lock(&sched_res_rculock);
 
     sr = get_sched_res(cpu);
-    old_ops = sr->scheduler;
 
-    if ( sr->granularity > 1 )
-    {
-        sr_new = xmalloc_array(struct sched_resource *, sr->granularity - 1);
-        if ( !sr_new )
-            goto out;
-        for ( idx = 0; idx < sr->granularity - 1; idx++ )
-        {
-            sr_new[idx] = sched_alloc_res();
-            if ( sr_new[idx] )
-            {
-                sr_new[idx]->sched_unit_idle = sched_alloc_unit_mem();
-                if ( !sr_new[idx]->sched_unit_idle )
-                {
-                    sched_res_free(&sr_new[idx]->rcu);
-                    sr_new[idx] = NULL;
-                }
-            }
-            if ( !sr_new[idx] )
-            {
-                for ( idx--; idx >= 0; idx-- )
-                    sched_res_free(&sr_new[idx]->rcu);
-                goto out;
-            }
-            sr_new[idx]->curr = sr_new[idx]->sched_unit_idle;
-            sr_new[idx]->scheduler = &sched_idle_ops;
-            sr_new[idx]->granularity = 1;
-
-            /* We want the lock not to change when replacing the resource. */
-            sr_new[idx]->schedule_lock = sr->schedule_lock;
-        }
-    }
-
-    ret = 0;
     ASSERT(sr->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
@@ -3252,10 +3281,6 @@ int schedule_cpu_rm(unsigned int cpu)
     /* See comment in schedule_cpu_add() regarding lock switching. */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
-    ppriv_old = sr->sched_priv;
-
-    idx = 0;
     for_each_cpu ( cpu_iter, sr->cpus )
     {
         per_cpu(sched_res_idx, cpu_iter) = 0;
@@ -3269,27 +3294,27 @@ int schedule_cpu_rm(unsigned int cpu)
         else
         {
             /* Initialize unit. */
-            unit = sr_new[idx]->sched_unit_idle;
-            unit->res = sr_new[idx];
+            unit = data->sr[idx]->sched_unit_idle;
+            unit->res = data->sr[idx];
             unit->is_running = true;
             sched_unit_add_vcpu(unit, idle_vcpu[cpu_iter]);
             sched_domain_insert_unit(unit, idle_vcpu[cpu_iter]->domain);
 
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
-            cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, data->sr[idx]->cpus);
             cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
-            init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
+            init_timer(&data->sr[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
 
             /* Last resource initializations and insert resource pointer. */
-            sr_new[idx]->master_cpu = cpu_iter;
-            set_sched_res(cpu_iter, sr_new[idx]);
+            data->sr[idx]->master_cpu = cpu_iter;
+            set_sched_res(cpu_iter, data->sr[idx]);
 
             /* Last action: set the new lock pointer. */
             smp_mb();
-            sr_new[idx]->schedule_lock = &sched_free_cpu_lock;
+            data->sr[idx]->schedule_lock = &sched_free_cpu_lock;
 
             idx++;
         }
@@ -3305,16 +3330,12 @@ int schedule_cpu_rm(unsigned int cpu)
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock_irqrestore(old_lock, flags);
 
-    sched_deinit_pdata(old_ops, ppriv_old, cpu);
-
-    sched_free_udata(old_ops, vpriv_old);
-    sched_free_pdata(old_ops, ppriv_old, cpu);
+    sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
-out:
     rcu_read_unlock(&sched_res_rculock);
-    xfree(sr_new);
+    schedule_cpu_rm_free(data, cpu);
 
-    return ret;
+    return 0;
 }
 
 struct scheduler *scheduler_get_default(void)
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index de0cf63ce8..c626ad4907 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -598,6 +598,14 @@ struct affinity_masks {
     cpumask_var_t soft;
 };
 
+/* Memory allocation related data for schedule_cpu_rm(). */
+struct cpu_rm_data {
+    struct scheduler *old_ops;
+    void *ppriv_old;
+    void *vpriv_old;
+    struct sched_resource *sr[];
+};
+
 void domain_update_node_affinity_noalloc(struct domain *d,
                                          const cpumask_t *online,
                                          struct affinity_masks *affinity);
-- 
2.35.3

From nobody Fri May 3 12:20:54 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli, Gao Ruifeng
Subject: [PATCH 3/3] xen/sched: fix cpu hotplug
Date: Tue, 2 Aug 2022 15:36:19 +0200
Message-Id: <20220802133619.22965-1-jgross@suse.com>
In-Reply-To: <20220802132747.22507-1-jgross@suse.com>
References: <20220802132747.22507-1-jgross@suse.com>

Cpu unplugging calls schedule_cpu_rm() via stop_machine_run() with
interrupts disabled, thus any memory allocation or freeing must be
avoided.

Since commit 5047cd1d5dea ("xen/common: Use enhanced ASSERT_ALLOC_CONTEXT
in xmalloc()") this restriction is being enforced via an assertion,
which will now fail.

Before that commit, cpu unplugging in normal configurations was working
just by chance, as only the cpu performing schedule_cpu_rm() was doing
active work. With core scheduling enabled, however, failures could result
from memory allocations not being properly propagated to other cpus'
TLBs.

Fix this mess by allocating needed memory before entering
stop_machine_run() and freeing any memory only after having finished
stop_machine_run().

Fixes: 1ec410112cdd ("xen/sched: support differing granularity in schedule_cpu_[add/rm]()")
Reported-by: Gao Ruifeng
Signed-off-by: Juergen Gross
---
 xen/common/sched/core.c    | 14 ++++---
 xen/common/sched/cpupool.c | 77 +++++++++++++++++++++++++++---------
 xen/common/sched/private.h |  5 ++-
 3 files changed, 72 insertions(+), 24 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d6ff4f4921..1473cef372 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3190,7 +3190,7 @@ out:
     return ret;
 }
 
-static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
+struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
 {
     struct cpu_rm_data *data;
     struct sched_resource *sr;
@@ -3242,7 +3242,7 @@ static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
     return data;
 }
 
-static void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
+void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
 {
     sched_free_udata(mem->old_ops, mem->vpriv_old);
     sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
@@ -3256,17 +3256,18 @@ static void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu)
 * The cpu is already marked as "free" and not valid any longer for its
 * cpupool.
 */
-int schedule_cpu_rm(unsigned int cpu)
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *data)
 {
     struct sched_resource *sr;
-    struct cpu_rm_data *data;
     struct sched_unit *unit;
     spinlock_t *old_lock;
     unsigned long flags;
     int idx = 0;
     unsigned int cpu_iter;
+    bool freemem = !data;
 
-    data = schedule_cpu_rm_alloc(cpu);
+    if ( !data )
+        data = schedule_cpu_rm_alloc(cpu);
     if ( !data )
         return -ENOMEM;
 
@@ -3333,7 +3334,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-    schedule_cpu_rm_free(data, cpu);
+    if ( freemem )
+        schedule_cpu_rm_free(data, cpu);
 
     return 0;
 }
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 1463dcd767..d9dadedea3 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -419,6 +419,8 @@ static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
         return 0;
 
     free_cpumask_var(masks->hard);
+    memset(masks, 0, sizeof(*masks));
+
     return -ENOMEM;
 }
 
@@ -428,28 +430,34 @@ static void cpupool_free_affin_masks(struct affinity_masks *masks)
     free_cpumask_var(masks->hard);
 }
 
-static void cpupool_update_node_affinity(const struct cpupool *c)
+static void cpupool_update_node_affinity(const struct cpupool *c,
+                                         struct affinity_masks *masks)
 {
     const cpumask_t *online = c->res_valid;
-    struct affinity_masks masks;
+    struct affinity_masks local_masks;
     struct domain *d;
 
-    if ( cpupool_alloc_affin_masks(&masks) )
-        return;
+    if ( !masks )
+    {
+        if ( cpupool_alloc_affin_masks(&local_masks) )
+            return;
+        masks = &local_masks;
+    }
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain_in_cpupool(d, c)
     {
         if ( d->vcpu && d->vcpu[0] )
         {
-            cpumask_clear(masks.hard);
-            cpumask_clear(masks.soft);
-            domain_update_node_affinity_noalloc(d, online, &masks);
+            cpumask_clear(masks->hard);
+            cpumask_clear(masks->soft);
+            domain_update_node_affinity_noalloc(d, online, masks);
         }
     }
     rcu_read_unlock(&domlist_read_lock);
 
-    cpupool_free_affin_masks(&masks);
+    if ( masks == &local_masks )
+        cpupool_free_affin_masks(&local_masks);
 }
 
 /*
@@ -483,15 +491,17 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, NULL);
 
     return 0;
 }
 
-static int cpupool_unassign_cpu_finish(struct cpupool *c)
+static int cpupool_unassign_cpu_finish(struct cpupool *c,
+                                       struct cpu_rm_data *mem)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
+    struct affinity_masks *masks = mem ? &mem->affinity : NULL;
     int ret;
 
     if ( c != cpupool_cpu_moving )
@@ -514,7 +524,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_rm(cpu);
+        ret = schedule_cpu_rm(cpu, mem);
         if ( ret )
             cpumask_andnot(&cpupool_free_cpus, &cpupool_free_cpus, cpus);
         else
@@ -526,7 +536,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, masks);
 
     return ret;
 }
@@ -590,7 +600,7 @@ static long cf_check cpupool_unassign_cpu_helper(void *info)
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
-    ret = cpupool_unassign_cpu_finish(c);
+    ret = cpupool_unassign_cpu_finish(c, NULL);
 
     spin_unlock(&cpupool_lock);
     debugtrace_printk("cpupool_unassign_cpu ret=%ld\n", ret);
@@ -737,7 +747,7 @@ static int cpupool_cpu_add(unsigned int cpu)
 * This function is called in stop_machine context, so we can be sure no
 * non-idle vcpu is active on the system.
 */
-static void cpupool_cpu_remove(unsigned int cpu)
+static void cpupool_cpu_remove(unsigned int cpu, struct cpu_rm_data *mem)
 {
     int ret;
 
@@ -745,7 +755,7 @@ static void cpupool_cpu_remove(unsigned int cpu)
 
     if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
     {
-        ret = cpupool_unassign_cpu_finish(cpupool0);
+        ret = cpupool_unassign_cpu_finish(cpupool0, mem);
         BUG_ON(ret);
     }
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
@@ -811,7 +821,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
         {
             ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(c);
+            ret = cpupool_unassign_cpu_finish(c, NULL);
             BUG_ON(ret);
         }
     }
@@ -1031,10 +1041,23 @@ static int cf_check cpu_callback(
 {
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;
+    static struct cpu_rm_data *mem;
 
     switch ( action )
     {
     case CPU_DOWN_FAILED:
+        if ( system_state <= SYS_STATE_active )
+        {
+            if ( mem )
+            {
+                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
+                    cpupool_free_affin_masks(&mem->affinity);
+                schedule_cpu_rm_free(mem, cpu);
+                mem = NULL;
+            }
+            rc = cpupool_cpu_add(cpu);
+        }
+        break;
     case CPU_ONLINE:
         if ( system_state <= SYS_STATE_active )
             rc = cpupool_cpu_add(cpu);
@@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
     case CPU_DOWN_PREPARE:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
+        {
             rc = cpupool_cpu_remove_prologue(cpu);
+            if ( !rc )
+            {
+                ASSERT(!mem);
+                mem = schedule_cpu_rm_alloc(cpu);
+                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
+            }
+        }
         break;
     case CPU_DYING:
         /* Suspend/Resume don't change assignments of cpus to cpupools.
*/ if ( system_state <=3D SYS_STATE_active ) - cpupool_cpu_remove(cpu); + { + ASSERT(mem); + cpupool_cpu_remove(cpu, mem); + } + break; + case CPU_DEAD: + if ( system_state <=3D SYS_STATE_active ) + { + ASSERT(mem); + cpupool_free_affin_masks(&mem->affinity); + schedule_cpu_rm_free(mem, cpu); + mem =3D NULL; + } break; case CPU_RESUME_FAILED: cpupool_cpu_remove_forced(cpu); diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h index c626ad4907..f5bf41226c 100644 --- a/xen/common/sched/private.h +++ b/xen/common/sched/private.h @@ -600,6 +600,7 @@ struct affinity_masks { =20 /* Memory allocation related data for schedule_cpu_rm(). */ struct cpu_rm_data { + struct affinity_masks affinity; struct scheduler *old_ops; void *ppriv_old; void *vpriv_old; @@ -617,7 +618,9 @@ struct scheduler *scheduler_alloc(unsigned int sched_id= ); void scheduler_free(struct scheduler *sched); int cpu_disable_scheduler(unsigned int cpu); int schedule_cpu_add(unsigned int cpu, struct cpupool *c); -int schedule_cpu_rm(unsigned int cpu); +struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu); +void schedule_cpu_rm_free(struct cpu_rm_data *mem, unsigned int cpu); +int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *mem); int sched_move_domain(struct domain *d, struct cpupool *c); struct cpupool *cpupool_get_by_id(unsigned int poolid); void cpupool_put(struct cpupool *pool); --=20 2.35.3