From: Jeff Kubascik
To: xen-devel@lists.xenproject.org, George Dunlap, Dario Faggioli
Cc: Stewart Hildebrand, Nathan Studer
Date: Fri, 27 Mar 2020 15:30:23 -0400
Message-ID: <20200327193023.506-1-jeff.kubascik@dornerworks.com>
Subject: [Xen-devel] [PATCH] sched/core: Fix bug when moving a domain between cpupools

For each UNIT, sched_set_affinity is called before unit->priv is updated to
the new cpupool's private UNIT data structure. The issue is that
sched_set_affinity will call the adjust_affinity method of the new cpupool's
scheduler. If defined, that method (e.g. in credit) may use unit->priv,
which at this point still references the old cpupool's private UNIT data
structure.
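To make the ordering hazard concrete, here is a minimal, self-contained
sketch (illustrative only: old_sched_priv, new_sched_priv and
adjust_affinity_cb are made-up stand-ins for the per-scheduler UNIT data and
the adjust_affinity hook, not the real Xen types or hook wiring):

/* Illustrative only: these structs and the callback are stand-ins for the
 * per-scheduler UNIT data and the adjust_affinity hook; the real Xen code
 * is more involved. */
#include <stdlib.h>

struct old_sched_priv { int weight; };   /* old scheduler's per-UNIT data  */
struct new_sched_priv { long credit; };  /* new scheduler's per-UNIT data  */
struct sched_unit { void *priv; };       /* scheduler-private data pointer */

/* The new scheduler's hook interprets unit->priv as its own private type. */
static void adjust_affinity_cb(struct sched_unit *unit)
{
    struct new_sched_priv *p = unit->priv;
    p->credit = 0;  /* writes through the wrong type if priv was not switched */
}

static void sched_set_affinity(struct sched_unit *unit)
{
    adjust_affinity_cb(unit);  /* may touch unit->priv */
}

int main(void)
{
    struct sched_unit unit = { .priv = malloc(sizeof(struct old_sched_priv)) };
    void *old_priv = unit.priv;  /* kept so it can be freed later, like unitdata */
    void *new_priv = malloc(sizeof(struct new_sched_priv));

    /* Buggy order (before the patch):
     *     sched_set_affinity(&unit);      hook sees the old, smaller object
     *     unit.priv = new_priv;
     * Fixed order (this patch): switch priv first, then adjust affinity. */
    unit.priv = new_priv;
    sched_set_affinity(&unit);

    free(old_priv);
    free(new_priv);
    return 0;
}

With the old ordering, the write lands in (or past) the old scheduler's
allocation, which is how the credit code ended up scribbling over the
adjacent TLSF block header described below.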
This change fixes the bug by moving the switch of unit->priv earlier in the
function.

Signed-off-by: Jeff Kubascik
Acked-by: Dario Faggioli
Reviewed-by: Juergen Gross
---
Hello,

I've been working on updating the arinc653 scheduler to support multicore
for a few months now. In the process of testing, I came across this obscure
bug in the core scheduler code that took me a few weeks to track down.

This bug resulted in the credit scheduler writing past the end of the
arinc653 private UNIT data structure into the TLSF allocator bhdr structure
of the adjacent region. This required some deep diving into the TLSF
allocator code to trace the bug back to this point.

Sincerely,
Jeff Kubascik
---
 xen/common/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 7e8e7d2c39..ea572a345a 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -686,6 +686,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         unsigned int unit_p = new_p;
 
         unitdata = unit->priv;
+        unit->priv = unit_priv[unit_idx];
 
         for_each_sched_unit_vcpu ( unit, v )
         {
@@ -707,7 +708,6 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
          */
         spin_unlock_irq(lock);
 
-        unit->priv = unit_priv[unit_idx];
         if ( !d->is_dying )
             sched_move_irqs(unit);
 
-- 
2.17.1