From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Wed, 8 May 2019 13:31:32 +0200
Message-Id: <20190508113132.19198-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH] xen/sched: fix csched2_deinit_pdata()

Commit 753ba43d6d16e688 ("xen/sched: fix credit2 smt idle handling")
introduced a regression when switching cpus between cpupools.

When assigning a cpu to a cpupool, with credit2 being the default
scheduler, csched2_deinit_pdata() is called for the credit2 private data
after the new scheduler's private data has been hooked to the per-cpu
scheduler data. Unfortunately csched2_deinit_pdata() cycles through all
per-cpu scheduler areas it knows of in order to remove the cpu from the
respective sibling masks, including the area of the cpu just moved.
This will (depending on the new scheduler) either clobber the data of
the new scheduler or, in the case of sched_rt, lead to a crash.

Avoid that by removing the cpu from the list of active cpus in the
credit2 data first.
The opposite problem occurs when removing a cpu from a cpupool:
init_pdata() of credit2 will access the per-cpu data of the old
scheduler.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/sched_credit2.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 6958b265fc..9c1c3b4e08 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -3813,22 +3813,21 @@ init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
         activate_runqueue(prv, spc->runq_id);
     }
 
-    __cpumask_set_cpu(cpu, &rqd->idle);
-    __cpumask_set_cpu(cpu, &rqd->active);
-    __cpumask_set_cpu(cpu, &prv->initialized);
-    __cpumask_set_cpu(cpu, &rqd->smt_idle);
+    __cpumask_set_cpu(cpu, &spc->sibling_mask);
 
-    /* On the boot cpu we are called before cpu_sibling_mask has been set up. */
-    if ( cpu == 0 && system_state < SYS_STATE_active )
-        __cpumask_set_cpu(cpu, &csched2_pcpu(cpu)->sibling_mask);
-    else
+    if ( cpumask_weight(&rqd->active) > 0 )
         for_each_cpu ( rcpu, per_cpu(cpu_sibling_mask, cpu) )
             if ( cpumask_test_cpu(rcpu, &rqd->active) )
             {
                 __cpumask_set_cpu(cpu, &csched2_pcpu(rcpu)->sibling_mask);
-                __cpumask_set_cpu(rcpu, &csched2_pcpu(cpu)->sibling_mask);
+                __cpumask_set_cpu(rcpu, &spc->sibling_mask);
             }
 
+    __cpumask_set_cpu(cpu, &rqd->idle);
+    __cpumask_set_cpu(cpu, &rqd->active);
+    __cpumask_set_cpu(cpu, &prv->initialized);
+    __cpumask_set_cpu(cpu, &rqd->smt_idle);
+
     if ( cpumask_weight(&rqd->active) == 1 )
         rqd->pick_bias = cpu;
 
@@ -3937,13 +3936,13 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 
     printk(XENLOG_INFO "Removing cpu %d from runqueue %d\n", cpu, spc->runq_id);
 
-    for_each_cpu ( rcpu, &rqd->active )
-        __cpumask_clear_cpu(cpu, &csched2_pcpu(rcpu)->sibling_mask);
-
     __cpumask_clear_cpu(cpu, &rqd->idle);
     __cpumask_clear_cpu(cpu, &rqd->smt_idle);
     __cpumask_clear_cpu(cpu, &rqd->active);
 
+    for_each_cpu ( rcpu, &rqd->active )
+        __cpumask_clear_cpu(cpu, &csched2_pcpu(rcpu)->sibling_mask);
+
     if ( cpumask_empty(&rqd->active) )
     {
         printk(XENLOG_INFO " No cpus left on runqueue, disabling\n");
-- 
2.16.4
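To illustrate the ordering hazard the deinit hunk fixes, here is a
standalone sketch (not Xen code): cpumasks are modelled as 64-bit
integers, and ownership of each per-cpu scheduler slot as a plain flag,
both hypothetical simplifications. Walking rqd->active while the
just-moved cpu is still in it touches a per-cpu slot that already
belongs to the new scheduler; clearing the cpu from rqd->active first
avoids that.

```c
#include <stdint.h>

#define NCPUS 4

static uint64_t rqd_active;          /* models rqd->active */
static uint64_t sibling_mask[NCPUS]; /* models csched2_pcpu(cpu)->sibling_mask */
static int owned_by_credit2[NCPUS];  /* does credit2 still own this per-cpu slot? */

/* Buggy order: walk rqd->active while the moved cpu is still a member. */
static int deinit_buggy(int cpu)
{
    for ( int rcpu = 0; rcpu < NCPUS; rcpu++ )
        if ( rqd_active & (1ULL << rcpu) )
        {
            if ( !owned_by_credit2[rcpu] )
                return -1; /* would clobber the new scheduler's data */
            sibling_mask[rcpu] &= ~(1ULL << cpu);
        }
    rqd_active &= ~(1ULL << cpu);
    return 0;
}

/* Fixed order: drop the cpu from rqd->active first, then walk the rest. */
static int deinit_fixed(int cpu)
{
    rqd_active &= ~(1ULL << cpu);
    for ( int rcpu = 0; rcpu < NCPUS; rcpu++ )
        if ( rqd_active & (1ULL << rcpu) )
        {
            if ( !owned_by_credit2[rcpu] )
                return -1;
            sibling_mask[rcpu] &= ~(1ULL << cpu);
        }
    return 0;
}
```

With cpu 3 already handed to another scheduler, deinit_buggy(3) reaches
the foreign slot, while deinit_fixed(3) only updates the cpus that still
belong to credit2.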