From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497068; cv=none; d=zohomail.com; s=zohoarc; b=iyl5eOEF5yTkPJ8UCb8nuewDakHY6CJzSr27T88ZFtFnNAQIDO7a4anTeAvaUvZNBMDHTZ0sCaaN4m4q/9mj4PG4wKGkfmAICQLY418K5BoDjL5Ay2YOhmO9nfePp38sQkR2mr34943l/jj6CSgvjX6JoBdSWP3MC8aoFgctfGI= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497068; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=I0USoKpG6U7Ss4k/YMGn0xACPOkiWkIDb5HOWoraS68=; b=VFkiyhtCsjdOY+fpJNfZB6kxwNfxKZmbBrEJQ6JI89unO1rZSeYZWYx7Pxb3iFkN1+IO5Uw+XSf+UAExApgigehejbsgI08aBb9qJs7o+iaP8qxymrJ4WoN/l0zzfX/d8DiqMzeoD4VE8NtD5KUOCT1vm5Oqf+vjhgs2Dk5uIx8= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497068099425.4446672986221; Wed, 8 Jan 2020 07:24:28 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBO-0004QL-Gp; Wed, 08 Jan 2020 15:23:42 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBN-0004PM-HT for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:23:41 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id d43e4834-322a-11ea-9832-bc764e2007e4; Wed, 08 Jan 2020 15:23:32 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 66A8BB1E1; Wed, 8 Jan 2020 15:23:31 +0000 (UTC) X-Inumbo-ID: d43e4834-322a-11ea-9832-bc764e2007e4 X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:20 +0100 Message-Id: <20200108152328.27194-2-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , Stefano Stabellini , Julien Grall , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Dario Faggioli , Josh Whitehead , Meng Xu , Jan Beulich , Stewart Hildebrand MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Move sched*c and cpupool.c to a new directory common/sched. 
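For reference, the resulting layout (derived from the rename list below) is:

    xen/common/sched/
        Kconfig     - scheduler Kconfig options, now sourced from common/Kconfig
        Makefile
        arinc653.c  - was xen/common/sched_arinc653.c
        compat.c    - was xen/common/compat/schedule.c
        core.c      - was xen/common/schedule.c
        cpupool.c   - was xen/common/cpupool.c
        credit.c    - was xen/common/sched_credit.c
        credit2.c   - was xen/common/sched_credit2.c
        null.c      - was xen/common/sched_null.c
        rt.c        - was xen/common/sched_rt.c

The files are only moved and renamed; the #include lines connecting compat.c and core.c, the build system, and the MAINTAINERS entries are adjusted to match.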
Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli --- V2: - renamed sources (Dario Faggioli, Andrew Cooper) --- MAINTAINERS | 8 +-- xen/common/Kconfig | 66 +------------------= ---- xen/common/Makefile | 8 +-- xen/common/sched/Kconfig | 65 +++++++++++++++++++= +++ xen/common/sched/Makefile | 7 +++ xen/common/{sched_arinc653.c =3D> sched/arinc653.c} | 0 xen/common/{compat/schedule.c =3D> sched/compat.c} | 2 +- xen/common/{schedule.c =3D> sched/core.c} | 2 +- xen/common/{ =3D> sched}/cpupool.c | 0 xen/common/{sched_credit.c =3D> sched/credit.c} | 0 xen/common/{sched_credit2.c =3D> sched/credit2.c} | 0 xen/common/{sched_null.c =3D> sched/null.c} | 0 xen/common/{sched_rt.c =3D> sched/rt.c} | 0 13 files changed, 80 insertions(+), 78 deletions(-) create mode 100644 xen/common/sched/Kconfig create mode 100644 xen/common/sched/Makefile rename xen/common/{sched_arinc653.c =3D> sched/arinc653.c} (100%) rename xen/common/{compat/schedule.c =3D> sched/compat.c} (97%) rename xen/common/{schedule.c =3D> sched/core.c} (99%) rename xen/common/{ =3D> sched}/cpupool.c (100%) rename xen/common/{sched_credit.c =3D> sched/credit.c} (100%) rename xen/common/{sched_credit2.c =3D> sched/credit2.c} (100%) rename xen/common/{sched_null.c =3D> sched/null.c} (100%) rename xen/common/{sched_rt.c =3D> sched/rt.c} (100%) diff --git a/MAINTAINERS b/MAINTAINERS index eaea4620e2..9d2ac631ba 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -174,7 +174,7 @@ M: Josh Whitehead M: Stewart Hildebrand S: Supported L: DornerWorks Xen-Devel -F: xen/common/sched_arinc653.c +F: xen/common/sched/arinc653.c F: tools/libxc/xc_arinc653.c =20 ARM (W/ VIRTUALISATION EXTENSIONS) ARCHITECTURE @@ -212,7 +212,7 @@ CPU POOLS M: Juergen Gross M: Dario Faggioli S: Supported -F: xen/common/cpupool.c +F: xen/common/sched/cpupool.c =20 DEVICE TREE M: Stefano Stabellini @@ -378,13 +378,13 @@ RTDS SCHEDULER M: Dario Faggioli M: Meng Xu S: Supported -F: xen/common/sched_rt.c +F: xen/common/sched/rt.c =20 SCHEDULING M: George Dunlap M: Dario Faggioli S: Supported -F: xen/common/sched* +F: xen/common/sched/ =20 SEABIOS UPSTREAM M: Wei Liu diff --git a/xen/common/Kconfig b/xen/common/Kconfig index b3d161d057..9d6d09eb37 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -275,71 +275,7 @@ config ARGO =20 If unsure, say N. =20 -menu "Schedulers" - visible if EXPERT =3D "y" - -config SCHED_CREDIT - bool "Credit scheduler support" - default y - ---help--- - The traditional credit scheduler is a general purpose scheduler. - -config SCHED_CREDIT2 - bool "Credit2 scheduler support" - default y - ---help--- - The credit2 scheduler is a general purpose scheduler that is - optimized for lower latency and higher VM density. - -config SCHED_RTDS - bool "RTDS scheduler support (EXPERIMENTAL)" - default y - ---help--- - The RTDS scheduler is a soft and firm real-time scheduler for - multicore, targeted for embedded, automotive, graphics and gaming - in the cloud, and general low-latency workloads. - -config SCHED_ARINC653 - bool "ARINC653 scheduler support (EXPERIMENTAL)" - default DEBUG - ---help--- - The ARINC653 scheduler is a hard real-time scheduler for single - cores, targeted for avionics, drones, and medical devices. - -config SCHED_NULL - bool "Null scheduler support (EXPERIMENTAL)" - default y - ---help--- - The null scheduler is a static, zero overhead scheduler, - for when there always are less vCPUs than pCPUs, typically - in embedded or HPC scenarios. - -choice - prompt "Default Scheduler?" 
- default SCHED_CREDIT2_DEFAULT - - config SCHED_CREDIT_DEFAULT - bool "Credit Scheduler" if SCHED_CREDIT - config SCHED_CREDIT2_DEFAULT - bool "Credit2 Scheduler" if SCHED_CREDIT2 - config SCHED_RTDS_DEFAULT - bool "RT Scheduler" if SCHED_RTDS - config SCHED_ARINC653_DEFAULT - bool "ARINC653 Scheduler" if SCHED_ARINC653 - config SCHED_NULL_DEFAULT - bool "Null Scheduler" if SCHED_NULL -endchoice - -config SCHED_DEFAULT - string - default "credit" if SCHED_CREDIT_DEFAULT - default "credit2" if SCHED_CREDIT2_DEFAULT - default "rtds" if SCHED_RTDS_DEFAULT - default "arinc653" if SCHED_ARINC653_DEFAULT - default "null" if SCHED_NULL_DEFAULT - default "credit2" - -endmenu +source "common/sched/Kconfig" =20 config CRYPTO bool diff --git a/xen/common/Makefile b/xen/common/Makefile index 62b34e69e9..2abb8250b0 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -3,7 +3,6 @@ obj-y +=3D bitmap.o obj-y +=3D bsearch.o obj-$(CONFIG_CORE_PARKING) +=3D core_parking.o obj-y +=3D cpu.o -obj-y +=3D cpupool.o obj-$(CONFIG_DEBUG_TRACE) +=3D debugtrace.o obj-$(CONFIG_HAS_DEVICE_TREE) +=3D device_tree.o obj-y +=3D domctl.o @@ -38,12 +37,6 @@ obj-y +=3D radix-tree.o obj-y +=3D rbtree.o obj-y +=3D rcupdate.o obj-y +=3D rwlock.o -obj-$(CONFIG_SCHED_ARINC653) +=3D sched_arinc653.o -obj-$(CONFIG_SCHED_CREDIT) +=3D sched_credit.o -obj-$(CONFIG_SCHED_CREDIT2) +=3D sched_credit2.o -obj-$(CONFIG_SCHED_RTDS) +=3D sched_rt.o -obj-$(CONFIG_SCHED_NULL) +=3D sched_null.o -obj-y +=3D schedule.o obj-y +=3D shutdown.o obj-y +=3D softirq.o obj-y +=3D sort.o @@ -74,6 +67,7 @@ obj-$(CONFIG_COMPAT) +=3D $(addprefix compat/,domain.o ke= rnel.o memory.o multicall extra-y :=3D symbols-dummy.o =20 subdir-$(CONFIG_COVERAGE) +=3D coverage +subdir-y +=3D sched subdir-$(CONFIG_UBSAN) +=3D ubsan =20 subdir-$(CONFIG_NEEDS_LIBELF) +=3D libelf diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig new file mode 100644 index 0000000000..883ac87cab --- /dev/null +++ b/xen/common/sched/Kconfig @@ -0,0 +1,65 @@ +menu "Schedulers" + visible if EXPERT =3D "y" + +config SCHED_CREDIT + bool "Credit scheduler support" + default y + ---help--- + The traditional credit scheduler is a general purpose scheduler. + +config SCHED_CREDIT2 + bool "Credit2 scheduler support" + default y + ---help--- + The credit2 scheduler is a general purpose scheduler that is + optimized for lower latency and higher VM density. + +config SCHED_RTDS + bool "RTDS scheduler support (EXPERIMENTAL)" + default y + ---help--- + The RTDS scheduler is a soft and firm real-time scheduler for + multicore, targeted for embedded, automotive, graphics and gaming + in the cloud, and general low-latency workloads. + +config SCHED_ARINC653 + bool "ARINC653 scheduler support (EXPERIMENTAL)" + default DEBUG + ---help--- + The ARINC653 scheduler is a hard real-time scheduler for single + cores, targeted for avionics, drones, and medical devices. + +config SCHED_NULL + bool "Null scheduler support (EXPERIMENTAL)" + default y + ---help--- + The null scheduler is a static, zero overhead scheduler, + for when there always are less vCPUs than pCPUs, typically + in embedded or HPC scenarios. + +choice + prompt "Default Scheduler?" 
+ default SCHED_CREDIT2_DEFAULT + + config SCHED_CREDIT_DEFAULT + bool "Credit Scheduler" if SCHED_CREDIT + config SCHED_CREDIT2_DEFAULT + bool "Credit2 Scheduler" if SCHED_CREDIT2 + config SCHED_RTDS_DEFAULT + bool "RT Scheduler" if SCHED_RTDS + config SCHED_ARINC653_DEFAULT + bool "ARINC653 Scheduler" if SCHED_ARINC653 + config SCHED_NULL_DEFAULT + bool "Null Scheduler" if SCHED_NULL +endchoice + +config SCHED_DEFAULT + string + default "credit" if SCHED_CREDIT_DEFAULT + default "credit2" if SCHED_CREDIT2_DEFAULT + default "rtds" if SCHED_RTDS_DEFAULT + default "arinc653" if SCHED_ARINC653_DEFAULT + default "null" if SCHED_NULL_DEFAULT + default "credit2" + +endmenu diff --git a/xen/common/sched/Makefile b/xen/common/sched/Makefile new file mode 100644 index 0000000000..3537f2a68d --- /dev/null +++ b/xen/common/sched/Makefile @@ -0,0 +1,7 @@ +obj-y +=3D cpupool.o +obj-$(CONFIG_SCHED_ARINC653) +=3D arinc653.o +obj-$(CONFIG_SCHED_CREDIT) +=3D credit.o +obj-$(CONFIG_SCHED_CREDIT2) +=3D credit2.o +obj-$(CONFIG_SCHED_RTDS) +=3D rt.o +obj-$(CONFIG_SCHED_NULL) +=3D null.o +obj-y +=3D core.o diff --git a/xen/common/sched_arinc653.c b/xen/common/sched/arinc653.c similarity index 100% rename from xen/common/sched_arinc653.c rename to xen/common/sched/arinc653.c diff --git a/xen/common/compat/schedule.c b/xen/common/sched/compat.c similarity index 97% rename from xen/common/compat/schedule.c rename to xen/common/sched/compat.c index 8b6e6f107d..040b4caca2 100644 --- a/xen/common/compat/schedule.c +++ b/xen/common/sched/compat.c @@ -37,7 +37,7 @@ static int compat_poll(struct compat_sched_poll *compat) #define do_poll compat_poll #define sched_poll compat_sched_poll =20 -#include "../schedule.c" +#include "core.c" =20 int compat_set_timer_op(u32 lo, s32 hi) { diff --git a/xen/common/schedule.c b/xen/common/sched/core.c similarity index 99% rename from xen/common/schedule.c rename to xen/common/sched/core.c index 54a07ff9e8..4d8eb4c617 100644 --- a/xen/common/schedule.c +++ b/xen/common/sched/core.c @@ -3128,7 +3128,7 @@ void __init sched_setup_dom0_vcpus(struct domain *d) #endif =20 #ifdef CONFIG_COMPAT -#include "compat/schedule.c" +#include "compat.c" #endif =20 #endif /* !COMPAT */ diff --git a/xen/common/cpupool.c b/xen/common/sched/cpupool.c similarity index 100% rename from xen/common/cpupool.c rename to xen/common/sched/cpupool.c diff --git a/xen/common/sched_credit.c b/xen/common/sched/credit.c similarity index 100% rename from xen/common/sched_credit.c rename to xen/common/sched/credit.c diff --git a/xen/common/sched_credit2.c b/xen/common/sched/credit2.c similarity index 100% rename from xen/common/sched_credit2.c rename to xen/common/sched/credit2.c diff --git a/xen/common/sched_null.c b/xen/common/sched/null.c similarity index 100% rename from xen/common/sched_null.c rename to xen/common/sched/null.c diff --git a/xen/common/sched_rt.c b/xen/common/sched/rt.c similarity index 100% rename from xen/common/sched_rt.c rename to xen/common/sched/rt.c --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is 
neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497064; cv=none; d=zohomail.com; s=zohoarc; b=PnOXTSxXQhNX1hY/S1sUues1alkvr33oG0gvbTld+rEiY7qMyYxJTRSHAOvr9Wb6PLZDA79WgTgNL593t1y5C4ix5ABklPgt4EHTcnqiyx0lGuJRlxB2WBVU79D0HeO9k17EqDbYpPXRhgOlOAdgIJlnGd07x1TX4oOIgKosWCo= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497064; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=RxViTmGs+SVgxjXDwxuv1Ebtz3DqiqVeIN4FwoCQerY=; b=H+Q+UGc5/vV6tfv4vUOgEUSVJLHSJHuPNGwi9qXf440w+qNCuSXty45BcahL+EfTrcx2cHBhisqPnEEZCe96VWJEw0hQSy8grla4xXDRzHjz92ftR7uar4SP3IdtkVZBChG/1AKi68tDW0BZWDgjRQvVKjHscqjPVFypeHjL6fo= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497064440526.5482910762407; Wed, 8 Jan 2020 07:24:24 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBM-0004Of-0F; Wed, 08 Jan 2020 15:23:40 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBK-0004OS-44 for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:23:38 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id d32cf8a1-322a-11ea-b82c-12813bfff9fa; Wed, 08 Jan 2020 15:23:32 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 9305CB1E4; Wed, 8 Jan 2020 15:23:31 +0000 (UTC) X-Inumbo-ID: d32cf8a1-322a-11ea-b82c-12813bfff9fa X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:21 +0100 Message-Id: <20200108152328.27194-3-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , Stefano Stabellini , Julien Grall , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Dario Faggioli , Josh Whitehead , Meng Xu , Jan Beulich , Stewart Hildebrand , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" include/xen/sched-if.h should be private to scheduler code, so move it to common/sched/private.h and move the remaining use cases to cpupool.c and core.c. 
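Callers that remain outside common/sched/ no longer dereference struct cpupool directly; they use two small helpers added to cpupool.c and declared in xen/include/xen/sched.h. Shown here decoded from the hunks below, for easier reading:

    int cpupool_get_id(const struct domain *d)
    {
        /* Used by getdomaininfo() instead of open-coding d->cpupool->cpupool_id. */
        return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
    }

    cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
    {
        /* Used by dom0_build.c instead of accessing cpupool0->cpu_valid. */
        return pool->cpu_valid;
    }

The XEN_DOMCTL_{set,get}vcpuaffinity handling likewise moves out of domctl.c into core.c as vcpu_affinity_domctl(), with xenctl_bitmap_to_bitmap() made non-static and declared in xen/include/xen/domain.h so the scheduler code can use it.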
Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli --- V2: - rename to private.h (Andrew Cooper) --- xen/arch/x86/dom0_build.c | 5 +- xen/common/domain.c | 70 -------- xen/common/domctl.c | 135 +-------------- xen/common/sched/arinc653.c | 3 +- xen/common/sched/core.c | 191 +++++++++++++++++= +++- xen/common/sched/cpupool.c | 13 +- xen/common/sched/credit.c | 2 +- xen/common/sched/credit2.c | 3 +- xen/common/sched/null.c | 3 +- .../xen/sched-if.h =3D> common/sched/private.h} | 3 - xen/common/sched/rt.c | 3 +- xen/include/xen/domain.h | 3 + xen/include/xen/sched.h | 7 + 13 files changed, 228 insertions(+), 213 deletions(-) rename xen/{include/xen/sched-if.h =3D> common/sched/private.h} (99%) diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c index 28b964e018..56c2dee0fc 100644 --- a/xen/arch/x86/dom0_build.c +++ b/xen/arch/x86/dom0_build.c @@ -9,7 +9,6 @@ #include #include #include -#include #include =20 #include @@ -227,9 +226,9 @@ unsigned int __init dom0_max_vcpus(void) dom0_nodes =3D node_online_map; for_each_node_mask ( node, dom0_nodes ) cpumask_or(&dom0_cpus, &dom0_cpus, &node_to_cpumask(node)); - cpumask_and(&dom0_cpus, &dom0_cpus, cpupool0->cpu_valid); + cpumask_and(&dom0_cpus, &dom0_cpus, cpupool_valid_cpus(cpupool0)); if ( cpumask_empty(&dom0_cpus) ) - cpumask_copy(&dom0_cpus, cpupool0->cpu_valid); + cpumask_copy(&dom0_cpus, cpupool_valid_cpus(cpupool0)); =20 max_vcpus =3D cpumask_weight(&dom0_cpus); if ( opt_dom0_max_vcpus_min > max_vcpus ) diff --git a/xen/common/domain.c b/xen/common/domain.c index 0b1103fdb2..71a7c2776f 100644 --- a/xen/common/domain.c +++ b/xen/common/domain.c @@ -10,7 +10,6 @@ #include #include #include -#include #include #include #include @@ -565,75 +564,6 @@ void __init setup_system_domains(void) #endif } =20 -void domain_update_node_affinity(struct domain *d) -{ - cpumask_var_t dom_cpumask, dom_cpumask_soft; - cpumask_t *dom_affinity; - const cpumask_t *online; - struct sched_unit *unit; - unsigned int cpu; - - /* Do we have vcpus already? If not, no need to update node-affinity. = */ - if ( !d->vcpu || !d->vcpu[0] ) - return; - - if ( !zalloc_cpumask_var(&dom_cpumask) ) - return; - if ( !zalloc_cpumask_var(&dom_cpumask_soft) ) - { - free_cpumask_var(dom_cpumask); - return; - } - - online =3D cpupool_domain_master_cpumask(d); - - spin_lock(&d->node_affinity_lock); - - /* - * If d->auto_node_affinity is true, let's compute the domain's - * node-affinity and update d->node_affinity accordingly. if false, - * just leave d->auto_node_affinity alone. - */ - if ( d->auto_node_affinity ) - { - /* - * We want the narrowest possible set of pcpus (to get the narowest - * possible set of nodes). What we need is the cpumask of where the - * domain can run (the union of the hard affinity of all its vcpus= ), - * and the full mask of where it would prefer to run (the union of - * the soft affinity of all its various vcpus). Let's build them. - */ - for_each_sched_unit ( d, unit ) - { - cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity); - cpumask_or(dom_cpumask_soft, dom_cpumask_soft, - unit->cpu_soft_affinity); - } - /* Filter out non-online cpus */ - cpumask_and(dom_cpumask, dom_cpumask, online); - ASSERT(!cpumask_empty(dom_cpumask)); - /* And compute the intersection between hard, online and soft */ - cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask); - - /* - * If not empty, the intersection of hard, soft and online is the - * narrowest set we want. If empty, we fall back to hard&online. 
- */ - dom_affinity =3D cpumask_empty(dom_cpumask_soft) ? - dom_cpumask : dom_cpumask_soft; - - nodes_clear(d->node_affinity); - for_each_cpu ( cpu, dom_affinity ) - node_set(cpu_to_node(cpu), d->node_affinity); - } - - spin_unlock(&d->node_affinity_lock); - - free_cpumask_var(dom_cpumask_soft); - free_cpumask_var(dom_cpumask); -} - - int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity) { /* Being disjoint with the system is just wrong. */ diff --git a/xen/common/domctl.c b/xen/common/domctl.c index 650310e874..8b819f56e5 100644 --- a/xen/common/domctl.c +++ b/xen/common/domctl.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -65,9 +64,9 @@ static int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *= xenctl_bitmap, return err; } =20 -static int xenctl_bitmap_to_bitmap(unsigned long *bitmap, - const struct xenctl_bitmap *xenctl_bitm= ap, - unsigned int nbits) +int xenctl_bitmap_to_bitmap(unsigned long *bitmap, + const struct xenctl_bitmap *xenctl_bitmap, + unsigned int nbits) { unsigned int guest_bytes, copy_bytes; int err =3D 0; @@ -200,7 +199,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_= getdomaininfo *info) info->shared_info_frame =3D mfn_to_gmfn(d, virt_to_mfn(d->shared_info)= ); BUG_ON(SHARED_M2P(info->shared_info_frame)); =20 - info->cpupool =3D d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE; + info->cpupool =3D cpupool_get_id(d); =20 memcpy(info->handle, d->handle, sizeof(xen_domain_handle_t)); =20 @@ -234,16 +233,6 @@ void domctl_lock_release(void) spin_unlock(¤t->domain->hypercall_deadlock_mutex); } =20 -static inline -int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpu= aff) -{ - return vcpuaff->flags =3D=3D 0 || - ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) && - guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) || - ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) && - guest_handle_is_null(vcpuaff->cpumap_soft.bitmap)); -} - void vnuma_destroy(struct vnuma_info *vnuma) { if ( vnuma ) @@ -608,122 +597,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u= _domctl) =20 case XEN_DOMCTL_setvcpuaffinity: case XEN_DOMCTL_getvcpuaffinity: - { - struct vcpu *v; - const struct sched_unit *unit; - struct xen_domctl_vcpuaffinity *vcpuaff =3D &op->u.vcpuaffinity; - - ret =3D -EINVAL; - if ( vcpuaff->vcpu >=3D d->max_vcpus ) - break; - - ret =3D -ESRCH; - if ( (v =3D d->vcpu[vcpuaff->vcpu]) =3D=3D NULL ) - break; - - unit =3D v->sched_unit; - ret =3D -EINVAL; - if ( vcpuaffinity_params_invalid(vcpuaff) ) - break; - - if ( op->cmd =3D=3D XEN_DOMCTL_setvcpuaffinity ) - { - cpumask_var_t new_affinity, old_affinity; - cpumask_t *online =3D cpupool_domain_master_cpumask(v->domain); - - /* - * We want to be able to restore hard affinity if we are trying - * setting both and changing soft affinity (which happens late= r, - * when hard affinity has been succesfully chaged already) fai= ls. - */ - if ( !alloc_cpumask_var(&old_affinity) ) - { - ret =3D -ENOMEM; - break; - } - cpumask_copy(old_affinity, unit->cpu_hard_affinity); - - if ( !alloc_cpumask_var(&new_affinity) ) - { - free_cpumask_var(old_affinity); - ret =3D -ENOMEM; - break; - } - - /* Undo a stuck SCHED_pin_override? */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE ) - vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE= ); - - ret =3D 0; - - /* - * We both set a new affinity and report back to the caller wh= at - * the scheduler will be effectively using. 
- */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - { - ret =3D xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), - &vcpuaff->cpumap_hard, - nr_cpu_ids); - if ( !ret ) - ret =3D vcpu_set_hard_affinity(v, new_affinity); - if ( ret ) - goto setvcpuaffinity_out; - - /* - * For hard affinity, what we return is the intersection of - * cpupool's online mask and the new hard affinity. - */ - cpumask_and(new_affinity, online, unit->cpu_hard_affinity); - ret =3D cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, - new_affinity); - } - if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) - { - ret =3D xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), - &vcpuaff->cpumap_soft, - nr_cpu_ids); - if ( !ret) - ret =3D vcpu_set_soft_affinity(v, new_affinity); - if ( ret ) - { - /* - * Since we're returning error, the caller expects not= hing - * happened, so we rollback the changes to hard affini= ty - * (if any). - */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - vcpu_set_hard_affinity(v, old_affinity); - goto setvcpuaffinity_out; - } - - /* - * For soft affinity, we return the intersection between t= he - * new soft affinity, the cpupool's online map and the (ne= w) - * hard affinity. - */ - cpumask_and(new_affinity, new_affinity, online); - cpumask_and(new_affinity, new_affinity, - unit->cpu_hard_affinity); - ret =3D cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, - new_affinity); - } - - setvcpuaffinity_out: - free_cpumask_var(new_affinity); - free_cpumask_var(old_affinity); - } - else - { - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - ret =3D cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, - unit->cpu_hard_affinity); - if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) - ret =3D cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, - unit->cpu_soft_affinity); - } + ret =3D vcpu_affinity_domctl(d, op->cmd, &op->u.vcpuaffinity); break; - } =20 case XEN_DOMCTL_scheduler_op: ret =3D sched_adjust(d, &op->u.scheduler_op); diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c index 565575c326..8895d92b5e 100644 --- a/xen/common/sched/arinc653.c +++ b/xen/common/sched/arinc653.c @@ -26,7 +26,6 @@ =20 #include #include -#include #include #include #include @@ -35,6 +34,8 @@ #include #include =20 +#include "private.h" + /************************************************************************** * Private Macros * *************************************************************************= */ diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c index 4d8eb4c617..2fae959e90 100644 --- a/xen/common/sched/core.c +++ b/xen/common/sched/core.c @@ -23,7 +23,6 @@ #include #include #include -#include #include #include #include @@ -38,6 +37,8 @@ #include #include =20 +#include "private.h" + #ifdef CONFIG_XEN_GUEST #include #else @@ -1607,6 +1608,194 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigne= d int cpu, uint8_t reason) return ret; } =20 +static inline +int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpu= aff) +{ + return vcpuaff->flags =3D=3D 0 || + ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) && + guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) || + ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) && + guest_handle_is_null(vcpuaff->cpumap_soft.bitmap)); +} + +int vcpu_affinity_domctl(struct domain *d, uint32_t cmd, + struct xen_domctl_vcpuaffinity *vcpuaff) +{ + struct vcpu *v; + const struct sched_unit *unit; + int ret =3D 0; + + if ( vcpuaff->vcpu >=3D d->max_vcpus ) + return -EINVAL; + + if ( (v =3D d->vcpu[vcpuaff->vcpu]) =3D=3D NULL ) + return -ESRCH; + + if ( 
vcpuaffinity_params_invalid(vcpuaff) ) + return -EINVAL; + + unit =3D v->sched_unit; + + if ( cmd =3D=3D XEN_DOMCTL_setvcpuaffinity ) + { + cpumask_var_t new_affinity, old_affinity; + cpumask_t *online =3D cpupool_domain_master_cpumask(v->domain); + + /* + * We want to be able to restore hard affinity if we are trying + * setting both and changing soft affinity (which happens later, + * when hard affinity has been succesfully chaged already) fails. + */ + if ( !alloc_cpumask_var(&old_affinity) ) + return -ENOMEM; + + cpumask_copy(old_affinity, unit->cpu_hard_affinity); + + if ( !alloc_cpumask_var(&new_affinity) ) + { + free_cpumask_var(old_affinity); + return -ENOMEM; + } + + /* Undo a stuck SCHED_pin_override? */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE ) + vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE); + + ret =3D 0; + + /* + * We both set a new affinity and report back to the caller what + * the scheduler will be effectively using. + */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + { + ret =3D xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), + &vcpuaff->cpumap_hard, nr_cpu_id= s); + if ( !ret ) + ret =3D vcpu_set_hard_affinity(v, new_affinity); + if ( ret ) + goto setvcpuaffinity_out; + + /* + * For hard affinity, what we return is the intersection of + * cpupool's online mask and the new hard affinity. + */ + cpumask_and(new_affinity, online, unit->cpu_hard_affinity); + ret =3D cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, new_af= finity); + } + if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) + { + ret =3D xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), + &vcpuaff->cpumap_soft, nr_cpu_id= s); + if ( !ret) + ret =3D vcpu_set_soft_affinity(v, new_affinity); + if ( ret ) + { + /* + * Since we're returning error, the caller expects nothing + * happened, so we rollback the changes to hard affinity + * (if any). + */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + vcpu_set_hard_affinity(v, old_affinity); + goto setvcpuaffinity_out; + } + + /* + * For soft affinity, we return the intersection between the + * new soft affinity, the cpupool's online map and the (new) + * hard affinity. + */ + cpumask_and(new_affinity, new_affinity, online); + cpumask_and(new_affinity, new_affinity, unit->cpu_hard_affinit= y); + ret =3D cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, new_af= finity); + } + + setvcpuaffinity_out: + free_cpumask_var(new_affinity); + free_cpumask_var(old_affinity); + } + else + { + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + ret =3D cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, + unit->cpu_hard_affinity); + if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) + ret =3D cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, + unit->cpu_soft_affinity); + } + + return ret; +} + +void domain_update_node_affinity(struct domain *d) +{ + cpumask_var_t dom_cpumask, dom_cpumask_soft; + cpumask_t *dom_affinity; + const cpumask_t *online; + struct sched_unit *unit; + unsigned int cpu; + + /* Do we have vcpus already? If not, no need to update node-affinity. = */ + if ( !d->vcpu || !d->vcpu[0] ) + return; + + if ( !zalloc_cpumask_var(&dom_cpumask) ) + return; + if ( !zalloc_cpumask_var(&dom_cpumask_soft) ) + { + free_cpumask_var(dom_cpumask); + return; + } + + online =3D cpupool_domain_master_cpumask(d); + + spin_lock(&d->node_affinity_lock); + + /* + * If d->auto_node_affinity is true, let's compute the domain's + * node-affinity and update d->node_affinity accordingly. if false, + * just leave d->auto_node_affinity alone. 
+ */ + if ( d->auto_node_affinity ) + { + /* + * We want the narrowest possible set of pcpus (to get the narowest + * possible set of nodes). What we need is the cpumask of where the + * domain can run (the union of the hard affinity of all its vcpus= ), + * and the full mask of where it would prefer to run (the union of + * the soft affinity of all its various vcpus). Let's build them. + */ + for_each_sched_unit ( d, unit ) + { + cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity); + cpumask_or(dom_cpumask_soft, dom_cpumask_soft, + unit->cpu_soft_affinity); + } + /* Filter out non-online cpus */ + cpumask_and(dom_cpumask, dom_cpumask, online); + ASSERT(!cpumask_empty(dom_cpumask)); + /* And compute the intersection between hard, online and soft */ + cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask); + + /* + * If not empty, the intersection of hard, soft and online is the + * narrowest set we want. If empty, we fall back to hard&online. + */ + dom_affinity =3D cpumask_empty(dom_cpumask_soft) ? + dom_cpumask : dom_cpumask_soft; + + nodes_clear(d->node_affinity); + for_each_cpu ( cpu, dom_affinity ) + node_set(cpu_to_node(cpu), d->node_affinity); + } + + spin_unlock(&d->node_affinity_lock); + + free_cpumask_var(dom_cpumask_soft); + free_cpumask_var(dom_cpumask); +} + typedef long ret_t; =20 #endif /* !COMPAT */ diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index d66b541a94..7b31ab0d61 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -16,11 +16,12 @@ #include #include #include -#include #include #include #include =20 +#include "private.h" + #define for_each_cpupool(ptr) \ for ((ptr) =3D &cpupool_list; *(ptr) !=3D NULL; (ptr) =3D &((*(ptr))->= next)) =20 @@ -875,6 +876,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op) return ret; } =20 +int cpupool_get_id(const struct domain *d) +{ + return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE; +} + +cpumask_t *cpupool_valid_cpus(struct cpupool *pool) +{ + return pool->cpu_valid; +} + void dump_runq(unsigned char key) { unsigned long flags; diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c index aa41a3301b..4329d9df56 100644 --- a/xen/common/sched/credit.c +++ b/xen/common/sched/credit.c @@ -15,7 +15,6 @@ #include #include #include -#include #include #include #include @@ -24,6 +23,7 @@ #include #include =20 +#include "private.h" =20 /* * Locking: diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c index f7c477053c..65e8ab052e 100644 --- a/xen/common/sched/credit2.c +++ b/xen/common/sched/credit2.c @@ -18,7 +18,6 @@ #include #include #include -#include #include #include #include @@ -26,6 +25,8 @@ #include #include =20 +#include "private.h" + /* Meant only for helping developers during debugging. */ /* #define d2printk printk */ #define d2printk(x...) diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c index 3f3418c9b1..b99f1e3c65 100644 --- a/xen/common/sched/null.c +++ b/xen/common/sched/null.c @@ -29,10 +29,11 @@ */ =20 #include -#include #include #include =20 +#include "private.h" + /* * null tracing events. Check include/public/trace.h for more details. */ diff --git a/xen/include/xen/sched-if.h b/xen/common/sched/private.h similarity index 99% rename from xen/include/xen/sched-if.h rename to xen/common/sched/private.h index b0ac54e63d..a702fd23b1 100644 --- a/xen/include/xen/sched-if.h +++ b/xen/common/sched/private.h @@ -12,9 +12,6 @@ #include #include =20 -/* A global pointer to the initial cpupool (POOL0). 
*/ -extern struct cpupool *cpupool0; - /* cpus currently in no cpupool */ extern cpumask_t cpupool_free_cpus; =20 diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c index b2b29481f3..8203b63a9d 100644 --- a/xen/common/sched/rt.c +++ b/xen/common/sched/rt.c @@ -20,7 +20,6 @@ #include #include #include -#include #include #include #include @@ -31,6 +30,8 @@ #include #include =20 +#include "private.h" + /* * TODO: * diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h index 1cb205d977..7e51d361de 100644 --- a/xen/include/xen/domain.h +++ b/xen/include/xen/domain.h @@ -27,6 +27,9 @@ struct xen_domctl_getdomaininfo; void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info= ); void arch_get_domain_info(const struct domain *d, struct xen_domctl_getdomaininfo *info); +int xenctl_bitmap_to_bitmap(unsigned long *bitmap, + const struct xenctl_bitmap *xenctl_bitmap, + unsigned int nbits); =20 /* * Arch-specifics. diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index cc942a3621..d3adc69ab9 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -50,6 +50,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t); /* A global pointer to the hardware domain (usually DOM0). */ extern struct domain *hardware_domain; =20 +/* A global pointer to the initial cpupool (POOL0). */ +extern struct cpupool *cpupool0; + #ifdef CONFIG_LATE_HWDOM extern domid_t hardware_domid; #else @@ -931,6 +934,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned in= t cpu, uint8_t reason); int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity); int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity); void restore_vcpu_affinity(struct domain *d); +int vcpu_affinity_domctl(struct domain *d, uint32_t cmd, + struct xen_domctl_vcpuaffinity *vcpuaff); =20 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate= ); uint64_t get_cpu_idle_time(unsigned int cpu); @@ -1068,6 +1073,8 @@ int cpupool_add_domain(struct domain *d, int poolid); void cpupool_rm_domain(struct domain *d); int cpupool_move_domain(struct domain *d, struct cpupool *c); int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op); +int cpupool_get_id(const struct domain *d); +cpumask_t *cpupool_valid_cpus(struct cpupool *pool); void schedule_dump(struct cpupool *c); extern void dump_runq(unsigned char key); =20 --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497067; cv=none; d=zohomail.com; s=zohoarc; b=jAxcEKeMd2gYzgzc+ngCgF0FF8H7OOJC3QrdMUCWb5yoXGkty/Ax24NLPXdC1id6aH+oz7CXqy/zsbHsNplMBlrAxecmz01w1YT9ntsBhS/7XyuK6BORUOewQG6i1bmTk6yf2uzHNKCKYPTtOW0R+dCI1Nsf7W+j15+6FjJCBnk= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497067; 
h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=xfzieMMTTX/F5H90xqBKy/S8znosLU00kpykYSFAtvA=; b=ZA7jRbmd/OHdwi5n1jxPtT5MP3T8TXfCAbScQliA2obG/DAigLLM5UOm07KDUldtX+AWAYKVFuo8Nm6/4yzMtjSBm8yDP5JnchU9dz3KxdiQUctTHS9qm+o7BidThWcCk0tOYRMLQ3QBu9j4hldALDAidP273L1XEvWUTLalYyE= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497067886992.176780149468; Wed, 8 Jan 2020 07:24:27 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBY-0004WT-O3; Wed, 08 Jan 2020 15:23:52 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBX-0004Vo-IE for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:23:51 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id d45bd6d8-322a-11ea-a38f-bc764e2007e4; Wed, 08 Jan 2020 15:23:32 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id CB2D3B1E6; Wed, 8 Jan 2020 15:23:31 +0000 (UTC) X-Inumbo-ID: d45bd6d8-322a-11ea-a38f-bc764e2007e4 X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:22 +0100 Message-Id: <20200108152328.27194-4-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 3/9] xen/sched: cleanup sched.h X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , Stefano Stabellini , Julien Grall , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Dario Faggioli , Jan Beulich MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" There are some items in include/xen/sched.h which can be moved to private.h as they are scheduler private. 
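Concretely, the following prototypes (together with the CPUPOOLID_NONE definition) move from xen/include/xen/sched.h into common/sched/private.h, so they are only visible to the files under common/sched/ (excerpt of the additions, decoded from the hunk below):

    struct scheduler *scheduler_get_default(void);
    struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
    void scheduler_free(struct scheduler *sched);
    int cpu_disable_scheduler(unsigned int cpu);
    int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
    int schedule_cpu_rm(unsigned int cpu);
    int sched_move_domain(struct domain *d, struct cpupool *c);
    struct cpupool *cpupool_get_by_id(int poolid);
    void cpupool_put(struct cpupool *pool);
    int cpupool_add_domain(struct domain *d, int poolid);
    void cpupool_rm_domain(struct domain *d);

In addition, vcpu_set_soft_affinity() loses its external declaration and becomes static in core.c, since its remaining callers (notably vcpu_affinity_domctl(), moved there by the previous patch) live in that file.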
Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli --- xen/common/sched/core.c | 2 +- xen/common/sched/private.h | 13 +++++++++++++ xen/include/xen/sched.h | 17 ----------------- 3 files changed, 14 insertions(+), 18 deletions(-) diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c index 2fae959e90..4153d110be 100644 --- a/xen/common/sched/core.c +++ b/xen/common/sched/core.c @@ -1346,7 +1346,7 @@ int vcpu_set_hard_affinity(struct vcpu *v, const cpum= ask_t *affinity) return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_hard_affinity= ); } =20 -int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity) +static int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinit= y) { return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_soft_affinity= ); } diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h index a702fd23b1..edce354dc7 100644 --- a/xen/common/sched/private.h +++ b/xen/common/sched/private.h @@ -533,6 +533,7 @@ static inline void sched_unit_unpause(const struct sche= d_unit *unit) struct cpupool { int cpupool_id; +#define CPUPOOLID_NONE -1 unsigned int n_dom; cpumask_var_t cpu_valid; /* all cpus assigned to pool */ cpumask_var_t res_valid; /* all scheduling resources of pool */ @@ -618,5 +619,17 @@ affinity_balance_cpumask(const struct sched_unit *unit= , int step, =20 void sched_rm_cpu(unsigned int cpu); const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int c= pu); +void schedule_dump(struct cpupool *c); +struct scheduler *scheduler_get_default(void); +struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr); +void scheduler_free(struct scheduler *sched); +int cpu_disable_scheduler(unsigned int cpu); +int schedule_cpu_add(unsigned int cpu, struct cpupool *c); +int schedule_cpu_rm(unsigned int cpu); +int sched_move_domain(struct domain *d, struct cpupool *c); +struct cpupool *cpupool_get_by_id(int poolid); +void cpupool_put(struct cpupool *pool); +int cpupool_add_domain(struct domain *d, int poolid); +void cpupool_rm_domain(struct domain *d); =20 #endif /* __XEN_SCHED_IF_H__ */ diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index d3adc69ab9..b4c2e4f7c2 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -687,7 +687,6 @@ int sched_init_vcpu(struct vcpu *v); void sched_destroy_vcpu(struct vcpu *v); int sched_init_domain(struct domain *d, int poolid); void sched_destroy_domain(struct domain *d); -int sched_move_domain(struct domain *d, struct cpupool *c); long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *); long sched_adjust_global(struct xen_sysctl_scheduler_op *); int sched_id(void); @@ -920,19 +919,10 @@ static inline bool sched_has_urgent_vcpu(void) return atomic_read(&this_cpu(sched_urgent_count)); } =20 -struct scheduler; - -struct scheduler *scheduler_get_default(void); -struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr); -void scheduler_free(struct scheduler *sched); -int schedule_cpu_add(unsigned int cpu, struct cpupool *c); -int schedule_cpu_rm(unsigned int cpu); void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value); -int cpu_disable_scheduler(unsigned int cpu); void sched_setup_dom0_vcpus(struct domain *d); int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reas= on); int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity); -int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity); void restore_vcpu_affinity(struct domain *d); int vcpu_affinity_domctl(struct domain *d, 
uint32_t cmd, struct xen_domctl_vcpuaffinity *vcpuaff); @@ -1065,17 +1055,10 @@ extern enum cpufreq_controller { FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen } cpufreq_controller; =20 -#define CPUPOOLID_NONE -1 - -struct cpupool *cpupool_get_by_id(int poolid); -void cpupool_put(struct cpupool *pool); -int cpupool_add_domain(struct domain *d, int poolid); -void cpupool_rm_domain(struct domain *d); int cpupool_move_domain(struct domain *d, struct cpupool *c); int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op); int cpupool_get_id(const struct domain *d); cpumask_t *cpupool_valid_cpus(struct cpupool *pool); -void schedule_dump(struct cpupool *c); extern void dump_runq(unsigned char key); =20 void arch_do_physinfo(struct xen_sysctl_physinfo *pi); --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497065; cv=none; d=zohomail.com; s=zohoarc; b=nNxwW/1s7ZmGgHWFlj2574bY4WyKCfFVcA9odlFB0mR3A/f9ZP+azmuJCX+yikp0c72MxK0SCmO8PGPQtbVlJFIcfWBREv9vvBagYhnm5BdVPIHW4V1DqmU6mk3TxDeinxCgttmhL01ShBPcBDZwImybB1NutQwrk9GPDS7AnBA= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497065; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=9KeW1onabaJ/FQgv4bYsXo0baPqCKkmvShfOaYGMb6c=; b=bex9n+CrYqFpsbnQyvDDuye9NGRQJcdgkFfCeT8zXRGAyTVCIezNkiQJJrxsUp5ofxMwm+b+kbdWIlbGJHQphK+YY7TlNw/+Y5IFm0iE0wBi/EQG8NAlrd1/ny7+HJ1+RQgHmvETwzTDHiJoi5ETKltiU7YWAu8ZpyFCREzxa0E= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497065381480.17429143471566; Wed, 8 Jan 2020 07:24:25 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBU-0004ST-6M; Wed, 08 Jan 2020 15:23:48 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBS-0004Ru-H0 for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:23:46 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id d46a53e8-322a-11ea-b1f0-bc764e2007e4; Wed, 08 Jan 2020 15:23:32 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id E6E36B1E7; Wed, 8 Jan 2020 15:23:31 +0000 (UTC) X-Inumbo-ID: d46a53e8-322a-11ea-b1f0-bc764e2007e4 X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:23 +0100 Message-Id: <20200108152328.27194-5-jgross@suse.com> 
X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 4/9] xen/sched: remove special cases for free cpus in schedulers X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , George Dunlap , Dario Faggioli MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" With the idle scheduler now taking care of all cpus not in any cpupool the special cases in the other schedulers for no cpupool associated can be removed. Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli --- xen/common/sched/credit.c | 7 ++----- xen/common/sched/credit2.c | 30 ------------------------------ 2 files changed, 2 insertions(+), 35 deletions(-) diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c index 4329d9df56..6b04f8f71c 100644 --- a/xen/common/sched/credit.c +++ b/xen/common/sched/credit.c @@ -1690,11 +1690,8 @@ csched_load_balance(struct csched_private *prv, int = cpu, =20 BUG_ON(get_sched_res(cpu) !=3D snext->unit->res); =20 - /* - * If this CPU is going offline, or is not (yet) part of any cpupool - * (as it happens, e.g., during cpu bringup), we shouldn't steal work. - */ - if ( unlikely(!cpumask_test_cpu(cpu, online) || c =3D=3D NULL) ) + /* If this CPU is going offline, we shouldn't steal work. */ + if ( unlikely(!cpumask_test_cpu(cpu, online)) ) goto out; =20 if ( snext->pri =3D=3D CSCHED_PRI_IDLE ) diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c index 65e8ab052e..849d254e04 100644 --- a/xen/common/sched/credit2.c +++ b/xen/common/sched/credit2.c @@ -2744,40 +2744,10 @@ static void csched2_unit_migrate( const struct scheduler *ops, struct sched_unit *unit, unsigned int new= _cpu) { - struct domain *d =3D unit->domain; struct csched2_unit * const svc =3D csched2_unit(unit); struct csched2_runqueue_data *trqd; s_time_t now =3D NOW(); =20 - /* - * Being passed a target pCPU which is outside of our cpupool is only - * valid if we are shutting down (or doing ACPI suspend), and we are - * moving everyone to BSP, no matter whether or not BSP is inside our - * cpupool. - * - * And since there indeed is the chance that it is not part of it, all - * we must do is remove _and_ unassign the unit from any runqueue, as - * well as updating v->processor with the target, so that the suspend - * process can continue. - * - * It will then be during resume that a new, meaningful, value for - * v->processor will be chosen, and during actual domain unpause that - * the unit will be assigned to and added to the proper runqueue. 
- */ - if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_master_cpumask= (d))) ) - { - ASSERT(system_state =3D=3D SYS_STATE_suspend); - if ( unit_on_runq(svc) ) - { - runq_remove(svc); - update_load(ops, svc->rqd, NULL, -1, now); - } - _runq_deassign(svc); - sched_set_res(unit, get_sched_res(new_cpu)); - return; - } - - /* If here, new_cpu must be a valid Credit2 pCPU, and in our affinity.= */ ASSERT(cpumask_test_cpu(new_cpu, &csched2_priv(ops)->initialized)); ASSERT(cpumask_test_cpu(new_cpu, unit->cpu_hard_affinity)); =20 --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497067; cv=none; d=zohomail.com; s=zohoarc; b=R/KUWqZ1m2EjDtOai3bH+zLHuQ6+9PPj8IkPZokju0DVfKYWpwFT5BLXty06gP/mQtvvUrUfu2OIBHygQqViEnXw0VLSHKULwQGPpJ7zKgXoBoZb1i53OlhOv5Qrf9LzixqhqM0bCJBpB5CRKg4dI4V8yiQKf4/Uv0ZlVCEKF2k= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497067; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=HAe37PSJF+oc1DQ6Y1y7Q/l8zV2CAY1Or0MMP5ZukoI=; b=RL9Hr7svTyqjyYLco81UTIn7dsCQ7SjMdEN2uLJjGDY0oqi+Bam8brT7uLQ38yXq5cS/qrsJM+JWaEF1JA6otMGlUTGYT4DS+aD/SqjXdbyxOjW0VSAsl5CQ04z4JrSEp5bIRn+X+4BqCvw77/szem+Bxv2lsTGLJyaUgkJIg5k= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497067025826.8084662997674; Wed, 8 Jan 2020 07:24:27 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBe-0004ZV-1F; Wed, 08 Jan 2020 15:23:58 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBc-0004Yj-HO for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:23:56 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id d4fa4174-322a-11ea-9832-bc764e2007e4; Wed, 08 Jan 2020 15:23:33 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 1892BB1E8; Wed, 8 Jan 2020 15:23:32 +0000 (UTC) X-Inumbo-ID: d4fa4174-322a-11ea-9832-bc764e2007e4 X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:24 +0100 Message-Id: <20200108152328.27194-6-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of 
allocating it on the stack X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , George Dunlap , Meng Xu , Dario Faggioli MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" In rt scheduler there are three instances of cpumasks allocated on the stack. Replace them by using cpumask_scratch. Signed-off-by: Juergen Gross Reviewed-by: Meng Xu --- xen/common/sched/rt.c | 56 ++++++++++++++++++++++++++++++++++-------------= ---- 1 file changed, 37 insertions(+), 19 deletions(-) diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c index 8203b63a9d..d26f77f554 100644 --- a/xen/common/sched/rt.c +++ b/xen/common/sched/rt.c @@ -637,23 +637,38 @@ replq_reinsert(const struct scheduler *ops, struct rt= _unit *svc) * and available resources */ static struct sched_resource * -rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit) +rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu) { - cpumask_t cpus; + cpumask_t *cpus =3D cpumask_scratch_cpu(locked_cpu); cpumask_t *online; int cpu; =20 online =3D cpupool_domain_master_cpumask(unit->domain); - cpumask_and(&cpus, online, unit->cpu_hard_affinity); + cpumask_and(cpus, online, unit->cpu_hard_affinity); =20 - cpu =3D cpumask_test_cpu(sched_unit_master(unit), &cpus) + cpu =3D cpumask_test_cpu(sched_unit_master(unit), cpus) ? sched_unit_master(unit) - : cpumask_cycle(sched_unit_master(unit), &cpus); - ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) ); + : cpumask_cycle(sched_unit_master(unit), cpus); + ASSERT( !cpumask_empty(cpus) && cpumask_test_cpu(cpu, cpus) ); =20 return get_sched_res(cpu); } =20 +/* + * Pick a valid resource for the unit vc + * Valid resource of an unit is intesection of unit's affinity + * and available resources + */ +static struct sched_resource * +rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit) +{ + struct sched_resource *res; + + res =3D rt_res_pick_locked(unit, unit->res->master_cpu); + + return res; +} + /* * Init/Free related code */ @@ -886,11 +901,14 @@ rt_unit_insert(const struct scheduler *ops, struct sc= hed_unit *unit) struct rt_unit *svc =3D rt_unit(unit); s_time_t now; spinlock_t *lock; + unsigned int cpu =3D smp_processor_id(); =20 BUG_ON( is_idle_unit(unit) ); =20 /* This is safe because unit isn't yet being scheduled */ - sched_set_res(unit, rt_res_pick(ops, unit)); + lock =3D pcpu_schedule_lock_irq(cpu); + sched_set_res(unit, rt_res_pick_locked(unit, cpu)); + pcpu_schedule_unlock_irq(lock, cpu); =20 lock =3D unit_schedule_lock_irq(unit); =20 @@ -1003,13 +1021,13 @@ burn_budget(const struct scheduler *ops, struct rt_= unit *svc, s_time_t now) * lock is grabbed before calling this function */ static struct rt_unit * -runq_pick(const struct scheduler *ops, const cpumask_t *mask) +runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int= cpu) { struct list_head *runq =3D rt_runq(ops); struct list_head *iter; struct rt_unit *svc =3D NULL; struct rt_unit *iter_svc =3D NULL; - cpumask_t cpu_common; + cpumask_t *cpu_common =3D cpumask_scratch_cpu(cpu); cpumask_t *online; =20 list_for_each ( iter, runq ) @@ -1018,9 +1036,9 @@ runq_pick(const struct scheduler *ops, const cpumask_= t *mask) =20 /* mask cpu_hard_affinity & cpupool & mask */ online =3D 
cpupool_domain_master_cpumask(iter_svc->unit->domain); - cpumask_and(&cpu_common, online, iter_svc->unit->cpu_hard_affinity= ); - cpumask_and(&cpu_common, mask, &cpu_common); - if ( cpumask_empty(&cpu_common) ) + cpumask_and(cpu_common, online, iter_svc->unit->cpu_hard_affinity); + cpumask_and(cpu_common, mask, cpu_common); + if ( cpumask_empty(cpu_common) ) continue; =20 ASSERT( iter_svc->cur_budget > 0 ); @@ -1092,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, struct sched= _unit *currunit, } else { - snext =3D runq_pick(ops, cpumask_of(sched_cpu)); + snext =3D runq_pick(ops, cpumask_of(sched_cpu), cur_cpu); =20 if ( snext =3D=3D NULL ) snext =3D rt_unit(sched_idle_unit(sched_cpu)); @@ -1186,22 +1204,22 @@ runq_tickle(const struct scheduler *ops, struct rt_= unit *new) struct rt_unit *iter_svc; struct sched_unit *iter_unit; int cpu =3D 0, cpu_to_tickle =3D 0; - cpumask_t not_tickled; + cpumask_t *not_tickled =3D cpumask_scratch_cpu(smp_processor_id()); cpumask_t *online; =20 if ( new =3D=3D NULL || is_idle_unit(new->unit) ) return; =20 online =3D cpupool_domain_master_cpumask(new->unit->domain); - cpumask_and(¬_tickled, online, new->unit->cpu_hard_affinity); - cpumask_andnot(¬_tickled, ¬_tickled, &prv->tickled); + cpumask_and(not_tickled, online, new->unit->cpu_hard_affinity); + cpumask_andnot(not_tickled, not_tickled, &prv->tickled); =20 /* * 1) If there are any idle CPUs, kick one. * For cache benefit,we first search new->cpu. * The same loop also find the one with lowest priority. */ - cpu =3D cpumask_test_or_cycle(sched_unit_master(new->unit), ¬_tickl= ed); + cpu =3D cpumask_test_or_cycle(sched_unit_master(new->unit), not_tickle= d); while ( cpu!=3D nr_cpu_ids ) { iter_unit =3D curr_on_cpu(cpu); @@ -1216,8 +1234,8 @@ runq_tickle(const struct scheduler *ops, struct rt_un= it *new) compare_unit_priority(iter_svc, latest_deadline_unit) < 0 ) latest_deadline_unit =3D iter_svc; =20 - cpumask_clear_cpu(cpu, ¬_tickled); - cpu =3D cpumask_cycle(cpu, ¬_tickled); + cpumask_clear_cpu(cpu, not_tickled); + cpu =3D cpumask_cycle(cpu, not_tickled); } =20 /* 2) candicate has higher priority, kick out lowest priority unit */ --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497074; cv=none; d=zohomail.com; s=zohoarc; b=nE0kSwNbyxgmvKK1CEDqYwSAKBzM4iAhNemsnZGJIIC4JoKHTTtAaVFWKtzR/ZBdkzA2hAAgcNFv0n2VVo2af8kdPsho5dvljF9aNjEwYNZiDwn4x6Hs0k6iCceEIxZOU4WuW5wOiL63NfBc6C+DFeebBLwI7GGM6mQo5Dz7rf0= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497074; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=boXYwYlZuzFYpbV1nDSdPjJI7eVtXoAwOrTX1oF3ihg=; 
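
The pattern behind patch 5/9 above, as an illustrative sketch rather than code from the series: a per-CPU scratch mask replaces an on-stack cpumask_t, which would otherwise cost NR_CPUS bits of stack per call. Only helpers already visible in the hunks above are assumed (cpumask_scratch_cpu(), cpumask_and(), cpupool_domain_master_cpumask(), sched_unit_master(), cpumask_test_cpu(), cpumask_cycle()); example_pick_cpu() is a made-up name standing in for rt_res_pick_locked().

    /*
     * Sketch only: pick a pCPU using the scratch cpumask of the pCPU whose
     * schedule lock is held.  cpumask_scratch_cpu(cpu) may only be used
     * while that lock is held, which is why rt_unit_insert() above takes
     * pcpu_schedule_lock_irq() before calling the real helper.
     */
    static unsigned int example_pick_cpu(const struct sched_unit *unit,
                                         unsigned int locked_cpu)
    {
        /* Per-CPU scratch space instead of a cpumask_t on the stack. */
        cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);

        /* Hard affinity restricted to the CPUs of the unit's cpupool. */
        cpumask_and(cpus, cpupool_domain_master_cpumask(unit->domain),
                    unit->cpu_hard_affinity);

        /* Keep the current pCPU if it is still eligible, else cycle on. */
        return cpumask_test_cpu(sched_unit_master(unit), cpus)
               ? sched_unit_master(unit)
               : cpumask_cycle(sched_unit_master(unit), cpus);
    }
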
b=Ph1iGOXlgLQEEvf6956BH9efh5aWivgyCjNpl1fd3KZbD9yWfx4aoPGOG6owKFADbVgAigU5Q1iM9MyhXwyPHuUgcPeJAuioN52xe8/whROjKSoF8zReniXosl1x6T2n/p+fI8L93MxX735Q/3vrf0J1C4KJDPyVcKby0u8pQ7Q= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497074392381.34408518665714; Wed, 8 Jan 2020 07:24:34 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBj-0004e9-BZ; Wed, 08 Jan 2020 15:24:03 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBh-0004ca-Hk for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:24:01 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id d4fc95e6-322a-11ea-b1f0-bc764e2007e4; Wed, 08 Jan 2020 15:23:33 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 3BC25AF9F; Wed, 8 Jan 2020 15:23:32 +0000 (UTC) X-Inumbo-ID: d4fc95e6-322a-11ea-b1f0-bc764e2007e4 X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:25 +0100 Message-Id: <20200108152328.27194-7-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , George Dunlap , Dario Faggioli MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Instead of having an own percpu-variable for private data per cpu the generic scheduler interface for that purpose should be used. Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli --- xen/common/sched/null.c | 89 +++++++++++++++++++++++++++++++++------------= ---- 1 file changed, 60 insertions(+), 29 deletions(-) diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c index b99f1e3c65..3161ac2e62 100644 --- a/xen/common/sched/null.c +++ b/xen/common/sched/null.c @@ -89,7 +89,6 @@ struct null_private { struct null_pcpu { struct sched_unit *unit; }; -DEFINE_PER_CPU(struct null_pcpu, npc); =20 /* * Schedule unit @@ -159,32 +158,48 @@ static void null_deinit(struct scheduler *ops) ops->sched_data =3D NULL; } =20 -static void init_pdata(struct null_private *prv, unsigned int cpu) +static void init_pdata(struct null_private *prv, struct null_pcpu *npc, + unsigned int cpu) { /* Mark the pCPU as free, and with no unit assigned */ cpumask_set_cpu(cpu, &prv->cpus_free); - per_cpu(npc, cpu).unit =3D NULL; + npc->unit =3D NULL; } =20 static void null_init_pdata(const struct scheduler *ops, void *pdata, int = cpu) { struct null_private *prv =3D null_priv(ops); =20 - /* alloc_pdata is not implemented, so we want this to be NULL. 
*/ - ASSERT(!pdata); + ASSERT(pdata); =20 - init_pdata(prv, cpu); + init_pdata(prv, pdata, cpu); } =20 static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int= cpu) { struct null_private *prv =3D null_priv(ops); + struct null_pcpu *npc =3D pcpu; =20 - /* alloc_pdata not implemented, so this must have stayed NULL */ - ASSERT(!pcpu); + ASSERT(npc); =20 cpumask_clear_cpu(cpu, &prv->cpus_free); - per_cpu(npc, cpu).unit =3D NULL; + npc->unit =3D NULL; +} + +static void *null_alloc_pdata(const struct scheduler *ops, int cpu) +{ + struct null_pcpu *npc; + + npc =3D xzalloc(struct null_pcpu); + if ( npc =3D=3D NULL ) + return ERR_PTR(-ENOMEM); + + return npc; +} + +static void null_free_pdata(const struct scheduler *ops, void *pcpu, int c= pu) +{ + xfree(pcpu); } =20 static void *null_alloc_udata(const struct scheduler *ops, @@ -268,6 +283,7 @@ pick_res(struct null_private *prv, const struct sched_u= nit *unit) unsigned int bs; unsigned int cpu =3D sched_unit_master(unit), new_cpu; cpumask_t *cpus =3D cpupool_domain_master_cpumask(unit->domain); + struct null_pcpu *npc =3D get_sched_res(cpu)->sched_priv; =20 ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock)); =20 @@ -286,8 +302,7 @@ pick_res(struct null_private *prv, const struct sched_u= nit *unit) * don't, so we get to keep in the scratch cpumask what we have ju= st * put in it.) */ - if ( likely((per_cpu(npc, cpu).unit =3D=3D NULL || - per_cpu(npc, cpu).unit =3D=3D unit) + if ( likely((npc->unit =3D=3D NULL || npc->unit =3D=3D unit) && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) ) { new_cpu =3D cpu; @@ -336,9 +351,11 @@ pick_res(struct null_private *prv, const struct sched_= unit *unit) static void unit_assign(struct null_private *prv, struct sched_unit *unit, unsigned int cpu) { + struct null_pcpu *npc =3D get_sched_res(cpu)->sched_priv; + ASSERT(is_unit_online(unit)); =20 - per_cpu(npc, cpu).unit =3D unit; + npc->unit =3D unit; sched_set_res(unit, get_sched_res(cpu)); cpumask_clear_cpu(cpu, &prv->cpus_free); =20 @@ -363,12 +380,13 @@ static bool unit_deassign(struct null_private *prv, s= truct sched_unit *unit) unsigned int bs; unsigned int cpu =3D sched_unit_master(unit); struct null_unit *wvc; + struct null_pcpu *npc =3D get_sched_res(cpu)->sched_priv; =20 ASSERT(list_empty(&null_unit(unit)->waitq_elem)); - ASSERT(per_cpu(npc, cpu).unit =3D=3D unit); + ASSERT(npc->unit =3D=3D unit); ASSERT(!cpumask_test_cpu(cpu, &prv->cpus_free)); =20 - per_cpu(npc, cpu).unit =3D NULL; + npc->unit =3D NULL; cpumask_set_cpu(cpu, &prv->cpus_free); =20 dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain, @@ -436,7 +454,7 @@ static spinlock_t *null_switch_sched(struct scheduler *= new_ops, */ ASSERT(!local_irq_is_enabled()); =20 - init_pdata(prv, cpu); + init_pdata(prv, pdata, cpu); =20 return &sr->_lock; } @@ -446,6 +464,7 @@ static void null_unit_insert(const struct scheduler *op= s, { struct null_private *prv =3D null_priv(ops); struct null_unit *nvc =3D null_unit(unit); + struct null_pcpu *npc; unsigned int cpu; spinlock_t *lock; =20 @@ -462,6 +481,7 @@ static void null_unit_insert(const struct scheduler *op= s, retry: sched_set_res(unit, pick_res(prv, unit)); cpu =3D sched_unit_master(unit); + npc =3D get_sched_res(cpu)->sched_priv; =20 spin_unlock(lock); =20 @@ -471,7 +491,7 @@ static void null_unit_insert(const struct scheduler *op= s, cpupool_domain_master_cpumask(unit->domain)); =20 /* If the pCPU is free, we assign unit to it */ - if ( likely(per_cpu(npc, cpu).unit =3D=3D NULL) ) + if ( likely(npc->unit 
=3D=3D NULL) ) { /* * Insert is followed by vcpu_wake(), so there's no need to poke @@ -519,7 +539,10 @@ static void null_unit_remove(const struct scheduler *o= ps, /* If offline, the unit shouldn't be assigned, nor in the waitqueue */ if ( unlikely(!is_unit_online(unit)) ) { - ASSERT(per_cpu(npc, sched_unit_master(unit)).unit !=3D unit); + struct null_pcpu *npc; + + npc =3D unit->res->sched_priv; + ASSERT(npc->unit !=3D unit); ASSERT(list_empty(&nvc->waitq_elem)); goto out; } @@ -548,6 +571,7 @@ static void null_unit_wake(const struct scheduler *ops, struct null_private *prv =3D null_priv(ops); struct null_unit *nvc =3D null_unit(unit); unsigned int cpu =3D sched_unit_master(unit); + struct null_pcpu *npc =3D get_sched_res(cpu)->sched_priv; =20 ASSERT(!is_idle_unit(unit)); =20 @@ -569,7 +593,7 @@ static void null_unit_wake(const struct scheduler *ops, else SCHED_STAT_CRANK(unit_wake_not_runnable); =20 - if ( likely(per_cpu(npc, cpu).unit =3D=3D unit) ) + if ( likely(npc->unit =3D=3D unit) ) { cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ); return; @@ -581,7 +605,7 @@ static void null_unit_wake(const struct scheduler *ops, * and its previous resource is free (and affinities match), we can ju= st * assign the unit to it (we own the proper lock already) and be done. */ - if ( per_cpu(npc, cpu).unit =3D=3D NULL && + if ( npc->unit =3D=3D NULL && unit_check_affinity(unit, cpu, BALANCE_HARD_AFFINITY) ) { if ( !has_soft_affinity(unit) || @@ -622,6 +646,7 @@ static void null_unit_sleep(const struct scheduler *ops, { struct null_private *prv =3D null_priv(ops); unsigned int cpu =3D sched_unit_master(unit); + struct null_pcpu *npc =3D get_sched_res(cpu)->sched_priv; bool tickled =3D false; =20 ASSERT(!is_idle_unit(unit)); @@ -640,7 +665,7 @@ static void null_unit_sleep(const struct scheduler *ops, list_del_init(&nvc->waitq_elem); spin_unlock(&prv->waitq_lock); } - else if ( per_cpu(npc, cpu).unit =3D=3D unit ) + else if ( npc->unit =3D=3D unit ) tickled =3D unit_deassign(prv, unit); } =20 @@ -663,6 +688,7 @@ static void null_unit_migrate(const struct scheduler *o= ps, { struct null_private *prv =3D null_priv(ops); struct null_unit *nvc =3D null_unit(unit); + struct null_pcpu *npc; =20 ASSERT(!is_idle_unit(unit)); =20 @@ -686,7 +712,8 @@ static void null_unit_migrate(const struct scheduler *o= ps, * If unit is assigned to a pCPU, then such pCPU becomes free, and we * should look in the waitqueue if anyone else can be assigned to it. */ - if ( likely(per_cpu(npc, sched_unit_master(unit)).unit =3D=3D unit) ) + npc =3D unit->res->sched_priv; + if ( likely(npc->unit =3D=3D unit) ) { unit_deassign(prv, unit); SCHED_STAT_CRANK(migrate_running); @@ -720,7 +747,8 @@ static void null_unit_migrate(const struct scheduler *o= ps, * * In latter, all we can do is to park unit in the waitqueue. 
*/ - if ( per_cpu(npc, new_cpu).unit =3D=3D NULL && + npc =3D get_sched_res(new_cpu)->sched_priv; + if ( npc->unit =3D=3D NULL && unit_check_affinity(unit, new_cpu, BALANCE_HARD_AFFINITY) ) { /* unit might have been in the waitqueue, so remove it */ @@ -788,6 +816,7 @@ static void null_schedule(const struct scheduler *ops, = struct sched_unit *prev, unsigned int bs; const unsigned int cur_cpu =3D smp_processor_id(); const unsigned int sched_cpu =3D sched_get_resource_cpu(cur_cpu); + struct null_pcpu *npc =3D get_sched_res(sched_cpu)->sched_priv; struct null_private *prv =3D null_priv(ops); struct null_unit *wvc; =20 @@ -802,14 +831,14 @@ static void null_schedule(const struct scheduler *ops= , struct sched_unit *prev, } d; d.cpu =3D cur_cpu; d.tasklet =3D tasklet_work_scheduled; - if ( per_cpu(npc, sched_cpu).unit =3D=3D NULL ) + if ( npc->unit =3D=3D NULL ) { d.unit =3D d.dom =3D -1; } else { - d.unit =3D per_cpu(npc, sched_cpu).unit->unit_id; - d.dom =3D per_cpu(npc, sched_cpu).unit->domain->domain_id; + d.unit =3D npc->unit->unit_id; + d.dom =3D npc->unit->domain->domain_id; } __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d); } @@ -820,7 +849,7 @@ static void null_schedule(const struct scheduler *ops, = struct sched_unit *prev, prev->next_task =3D sched_idle_unit(sched_cpu); } else - prev->next_task =3D per_cpu(npc, sched_cpu).unit; + prev->next_task =3D npc->unit; prev->next_time =3D -1; =20 /* @@ -921,6 +950,7 @@ static inline void dump_unit(struct null_private *prv, = struct null_unit *nvc) static void null_dump_pcpu(const struct scheduler *ops, int cpu) { struct null_private *prv =3D null_priv(ops); + struct null_pcpu *npc =3D get_sched_res(cpu)->sched_priv; struct null_unit *nvc; spinlock_t *lock; unsigned long flags; @@ -930,9 +960,8 @@ static void null_dump_pcpu(const struct scheduler *ops,= int cpu) printk("CPU[%02d] sibling=3D{%*pbl}, core=3D{%*pbl}", cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)), CPUMASK_PR(per_cpu(cpu_core_mask, cpu))); - if ( per_cpu(npc, cpu).unit !=3D NULL ) - printk(", unit=3D%pdv%d", per_cpu(npc, cpu).unit->domain, - per_cpu(npc, cpu).unit->unit_id); + if ( npc->unit !=3D NULL ) + printk(", unit=3D%pdv%d", npc->unit->domain, npc->unit->unit_id); printk("\n"); =20 /* current unit (nothing to say if that's the idle unit) */ @@ -1010,6 +1039,8 @@ static const struct scheduler sched_null_def =3D { =20 .init =3D null_init, .deinit =3D null_deinit, + .alloc_pdata =3D null_alloc_pdata, + .free_pdata =3D null_free_pdata, .init_pdata =3D null_init_pdata, .switch_sched =3D null_switch_sched, .deinit_pdata =3D null_deinit_pdata, --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497061; cv=none; d=zohomail.com; s=zohoarc; b=hw2hc5+ATtmP7i0stcLNvxNnAQjI2UEZEHqNw6OPzG/7eZ+Wcb3+bwS6/5owSI0ixdQKlGlATJcsRyXj8hOsk0ko8cIsTtHjMEXBGKQrMVEbtczKCufaXB+EZezozI1yMCttKv8peOvRBOhqk2GrVH1HwbTMvSrchif1xI9RWW4= ARC-Message-Signature: i=1; 
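
The hook-based handling patch 6/9 switches to, as an illustrative sketch (example_pcpu, example_alloc_pdata, example_free_pdata and example_pcpu_of are placeholder names; the real implementations are null_alloc_pdata()/null_free_pdata() in the hunk above, and only xzalloc(), ERR_PTR(), xfree() and get_sched_res() are assumed, all of which appear there): the scheduler allocates its per-pCPU data in the alloc_pdata hook, the scheduling core keeps the returned pointer in get_sched_res(cpu)->sched_priv, and consumers read it back from there instead of from a static DEFINE_PER_CPU variable, so the data follows the sched_resource it belongs to.

    /* Sketch of the per-pCPU data lifecycle behind the generic hooks. */
    struct example_pcpu {
        struct sched_unit *unit;    /* unit currently assigned to this pCPU */
    };

    static void *example_alloc_pdata(const struct scheduler *ops, int cpu)
    {
        struct example_pcpu *epc = xzalloc(struct example_pcpu);

        if ( epc == NULL )
            return ERR_PTR(-ENOMEM);

        /* The core stores this pointer in get_sched_res(cpu)->sched_priv. */
        return epc;
    }

    static void example_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
    {
        xfree(pcpu);
    }

    /* Consumers fetch the data from the resource, not a per-CPU variable. */
    static struct example_pcpu *example_pcpu_of(unsigned int cpu)
    {
        return get_sched_res(cpu)->sched_priv;
    }
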
a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497061; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=5g3O4KkpD+kVButslKxctArV5I55lQoA5He3G9Fl0js=; b=Z9fjIP6ZYeyEJfAmvAe7N57FTM7k2HkpwQVbXTo4Qro54e3scCykn9BbBJqsHQruGNkxZ0XDcCZAHSf6x8DzuPdU38wm+vJqeZgEaom/UgfwAIHSXV1vanryL9g2RG72javrHC36pEV6gtqw1geOhIaOTlxMs4lmXiyr4er+iMI= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497061317196.2935010516536; Wed, 8 Jan 2020 07:24:21 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBP-0004Qz-RP; Wed, 08 Jan 2020 15:23:43 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBP-0004QZ-4M for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:23:43 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id d5d27440-322a-11ea-b82c-12813bfff9fa; Wed, 08 Jan 2020 15:23:35 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 8E522B1E9; Wed, 8 Jan 2020 15:23:32 +0000 (UTC) X-Inumbo-ID: d5d27440-322a-11ea-b82c-12813bfff9fa X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:26 +0100 Message-Id: <20200108152328.27194-8-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , Stefano Stabellini , Julien Grall , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Dario Faggioli , Josh Whitehead , Meng Xu , Jan Beulich , Stewart Hildebrand MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Scheduling code has several places using int or bool_t instead of bool. Switch those. Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli Reviewed-by: Meng Xu --- V2: - rename bool "pos" to "first" (Dario Faggioli) --- xen/common/sched/arinc653.c | 8 ++++---- xen/common/sched/core.c | 14 +++++++------- xen/common/sched/cpupool.c | 10 +++++----- xen/common/sched/credit.c | 12 ++++++------ xen/common/sched/private.h | 2 +- xen/common/sched/rt.c | 18 +++++++++--------- xen/include/xen/sched.h | 6 +++--- 7 files changed, 35 insertions(+), 35 deletions(-) diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c index 8895d92b5e..bce8021e3f 100644 --- a/xen/common/sched/arinc653.c +++ b/xen/common/sched/arinc653.c @@ -75,7 +75,7 @@ typedef struct arinc653_unit_s * arinc653_unit_t pointer. 
*/ struct sched_unit * unit; /* awake holds whether the UNIT has been woken with vcpu_wake() */ - bool_t awake; + bool awake; /* list holds the linked list information for the list this UNIT * is stored in */ struct list_head list; @@ -427,7 +427,7 @@ a653sched_alloc_udata(const struct scheduler *ops, stru= ct sched_unit *unit, * will mark the UNIT awake. */ svc->unit =3D unit; - svc->awake =3D 0; + svc->awake =3D false; if ( !is_idle_unit(unit) ) list_add(&svc->list, &SCHED_PRIV(ops)->unit_list); update_schedule_units(ops); @@ -473,7 +473,7 @@ static void a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit) { if ( AUNIT(unit) !=3D NULL ) - AUNIT(unit)->awake =3D 0; + AUNIT(unit)->awake =3D false; =20 /* * If the UNIT being put to sleep is the same one that is currently @@ -493,7 +493,7 @@ static void a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit) { if ( AUNIT(unit) !=3D NULL ) - AUNIT(unit)->awake =3D 1; + AUNIT(unit)->awake =3D true; =20 cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ); } diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c index 4153d110be..896f82f4d2 100644 --- a/xen/common/sched/core.c +++ b/xen/common/sched/core.c @@ -53,7 +53,7 @@ string_param("sched", opt_sched); * scheduler will give preferrence to partially idle package compared to * the full idle package, when picking pCPU to schedule vCPU. */ -bool_t sched_smt_power_savings =3D 0; +bool sched_smt_power_savings; boolean_param("sched_smt_power_savings", sched_smt_power_savings); =20 /* Default scheduling rate limit: 1ms @@ -574,7 +574,7 @@ int sched_init_vcpu(struct vcpu *v) { get_sched_res(v->processor)->curr =3D unit; get_sched_res(v->processor)->sched_unit_idle =3D unit; - v->is_running =3D 1; + v->is_running =3D true; unit->is_running =3D true; unit->state_entry_time =3D NOW(); } @@ -983,7 +983,7 @@ static void sched_unit_migrate_finish(struct sched_unit= *unit) unsigned long flags; unsigned int old_cpu, new_cpu; spinlock_t *old_lock, *new_lock; - bool_t pick_called =3D 0; + bool pick_called =3D false; struct vcpu *v; =20 /* @@ -1029,7 +1029,7 @@ static void sched_unit_migrate_finish(struct sched_un= it *unit) if ( (new_lock =3D=3D get_sched_res(new_cpu)->schedule_lock) && cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_vali= d) ) break; - pick_called =3D 1; + pick_called =3D true; } else { @@ -1037,7 +1037,7 @@ static void sched_unit_migrate_finish(struct sched_un= it *unit) * We do not hold the scheduler lock appropriate for this vCPU. * Thus we cannot select a new CPU on this iteration. Try agai= n. */ - pick_called =3D 0; + pick_called =3D false; } =20 sched_spin_unlock_double(old_lock, new_lock, flags); @@ -2148,7 +2148,7 @@ static void sched_switch_units(struct sched_resource = *sr, vcpu_runstate_change(vnext, vnext->new_state, now); } =20 - vnext->is_running =3D 1; + vnext->is_running =3D true; =20 if ( is_idle_vcpu(vnext) ) vnext->sched_unit =3D next; @@ -2219,7 +2219,7 @@ static void vcpu_context_saved(struct vcpu *vprev, st= ruct vcpu *vnext) smp_wmb(); =20 if ( vprev !=3D vnext ) - vprev->is_running =3D 0; + vprev->is_running =3D false; } =20 static void unit_context_saved(struct sched_resource *sr) diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 7b31ab0d61..c1396cfff4 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -154,7 +154,7 @@ static struct cpupool *alloc_cpupool_struct(void) * the searched id is returned * returns NULL if not found. 
*/ -static struct cpupool *__cpupool_find_by_id(int id, int exact) +static struct cpupool *__cpupool_find_by_id(int id, bool exact) { struct cpupool **q; =20 @@ -169,10 +169,10 @@ static struct cpupool *__cpupool_find_by_id(int id, i= nt exact) =20 static struct cpupool *cpupool_find_by_id(int poolid) { - return __cpupool_find_by_id(poolid, 1); + return __cpupool_find_by_id(poolid, true); } =20 -static struct cpupool *__cpupool_get_by_id(int poolid, int exact) +static struct cpupool *__cpupool_get_by_id(int poolid, bool exact) { struct cpupool *c; spin_lock(&cpupool_lock); @@ -185,12 +185,12 @@ static struct cpupool *__cpupool_get_by_id(int poolid= , int exact) =20 struct cpupool *cpupool_get_by_id(int poolid) { - return __cpupool_get_by_id(poolid, 1); + return __cpupool_get_by_id(poolid, true); } =20 static struct cpupool *cpupool_get_next_by_id(int poolid) { - return __cpupool_get_by_id(poolid, 0); + return __cpupool_get_by_id(poolid, false); } =20 void cpupool_put(struct cpupool *pool) diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c index 6b04f8f71c..a75efbd43d 100644 --- a/xen/common/sched/credit.c +++ b/xen/common/sched/credit.c @@ -245,7 +245,7 @@ __runq_elem(struct list_head *elem) } =20 /* Is the first element of cpu's runq (if any) cpu's idle unit? */ -static inline bool_t is_runq_idle(unsigned int cpu) +static inline bool is_runq_idle(unsigned int cpu) { /* * We're peeking at cpu's runq, we must hold the proper lock. @@ -344,7 +344,7 @@ static void burn_credits(struct csched_unit *svc, s_tim= e_t now) svc->start_time +=3D (credits * MILLISECS(1)) / CSCHED_CREDITS_PER_MSE= C; } =20 -static bool_t __read_mostly opt_tickle_one_idle =3D 1; +static bool __read_mostly opt_tickle_one_idle =3D true; boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle); =20 DEFINE_PER_CPU(unsigned int, last_tickle_cpu); @@ -719,7 +719,7 @@ __csched_unit_is_migrateable(const struct csched_privat= e *prv, =20 static int _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *uni= t, - bool_t commit) + bool commit) { int cpu =3D sched_unit_master(unit); /* We must always use cpu's scratch space */ @@ -871,7 +871,7 @@ csched_res_pick(const struct scheduler *ops, const stru= ct sched_unit *unit) * get boosted, which we don't deserve as we are "only" migrating. */ set_bit(CSCHED_FLAG_UNIT_MIGRATING, &svc->flags); - return get_sched_res(_csched_cpu_pick(ops, unit, 1)); + return get_sched_res(_csched_cpu_pick(ops, unit, true)); } =20 static inline void @@ -975,7 +975,7 @@ csched_unit_acct(struct csched_private *prv, unsigned i= nt cpu) * migrating it to run elsewhere (see multi-core and multi-thread * support in csched_res_pick()). 
*/ - new_cpu =3D _csched_cpu_pick(ops, currunit, 0); + new_cpu =3D _csched_cpu_pick(ops, currunit, false); =20 unit_schedule_unlock_irqrestore(lock, flags, currunit); =20 @@ -1108,7 +1108,7 @@ static void csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit) { struct csched_unit * const svc =3D CSCHED_UNIT(unit); - bool_t migrating; + bool migrating; =20 BUG_ON( is_idle_unit(unit) ); =20 diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h index edce354dc7..9d0db75cbb 100644 --- a/xen/common/sched/private.h +++ b/xen/common/sched/private.h @@ -589,7 +589,7 @@ unsigned int cpupool_get_granularity(const struct cpupo= ol *c); * * The hard affinity is not a subset of soft affinity * * There is an overlap between the soft and hard affinity masks */ -static inline int has_soft_affinity(const struct sched_unit *unit) +static inline bool has_soft_affinity(const struct sched_unit *unit) { return unit->soft_aff_effective && !cpumask_subset(cpupool_domain_master_cpumask(unit->domain), diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c index d26f77f554..6dc4b6a6e5 100644 --- a/xen/common/sched/rt.c +++ b/xen/common/sched/rt.c @@ -490,13 +490,13 @@ rt_update_deadline(s_time_t now, struct rt_unit *svc) static inline bool deadline_queue_remove(struct list_head *queue, struct list_head *elem) { - int pos =3D 0; + bool first =3D false; =20 if ( queue->next !=3D elem ) - pos =3D 1; + first =3D true; =20 list_del_init(elem); - return !pos; + return !first; } =20 static inline bool @@ -505,17 +505,17 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struc= t list_head *), struct list_head *queue) { struct list_head *iter; - int pos =3D 0; + bool first =3D true; =20 list_for_each ( iter, queue ) { struct rt_unit * iter_svc =3D (*qelem)(iter); if ( compare_unit_priority(svc, iter_svc) > 0 ) break; - pos++; + first =3D false; } list_add_tail(elem, iter); - return !pos; + return first; } #define deadline_runq_insert(...) \ deadline_queue_insert(&q_elem, ##__VA_ARGS__) @@ -605,7 +605,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_u= nit *svc) { struct list_head *replq =3D rt_replq(ops); struct rt_unit *rearm_svc =3D svc; - bool_t rearm =3D 0; + bool rearm =3D false; =20 ASSERT( unit_on_replq(svc) ); =20 @@ -622,7 +622,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_u= nit *svc) { deadline_replq_insert(svc, &svc->replq_elem, replq); rearm_svc =3D replq_elem(replq->next); - rearm =3D 1; + rearm =3D true; } else rearm =3D deadline_replq_insert(svc, &svc->replq_elem, replq); @@ -1279,7 +1279,7 @@ rt_unit_wake(const struct scheduler *ops, struct sche= d_unit *unit) { struct rt_unit * const svc =3D rt_unit(unit); s_time_t now; - bool_t missed; + bool missed; =20 BUG_ON( is_idle_unit(unit) ); =20 diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index b4c2e4f7c2..d8e961095f 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -560,18 +560,18 @@ static inline bool is_system_domain(const struct doma= in *d) * Use this when you don't have an existing reference to @d. It returns * FALSE if @d is being destroyed. 
*/ -static always_inline int get_domain(struct domain *d) +static always_inline bool get_domain(struct domain *d) { int old, seen =3D atomic_read(&d->refcnt); do { old =3D seen; if ( unlikely(old & DOMAIN_DESTROYED) ) - return 0; + return false; seen =3D atomic_cmpxchg(&d->refcnt, old, old + 1); } while ( unlikely(seen !=3D old) ); - return 1; + return true; } =20 /* --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497083; cv=none; d=zohomail.com; s=zohoarc; b=X9MwCoH6l8HUMfzD68HuEgNWlnPhEj8QKurNBA8PWqmdnj+O4VTvz6sKhLv0nN95tOEb6Agww6Yxo7n8UkzzXRrRxZHDCtbU1otgYD4wRexdJR+vniNNzC+sEpBUkggn7a3skOEkujz8EG2jAPbZuwgkazofevLRr1pkReyfELw= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497083; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=IC6C1D4M75AC29gLGRoUaiQec8zum2kQhvLQm//G87Y=; b=FOy1JE1qMPTOQigv+lYWCUnt605W5gszL23MQAezTRStOnW/6YMBCwekc9kd2ygHuHIGfDMfCMgvOsADoo1thZqYQplhn4/YpN9NguAqbJjnMNYNaQwgkfW9cK1mMXxqt26bAmA1iokU8UwR392c+602IDBuPRm5YFMJ75J2//4= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497083681186.69414866009777; Wed, 8 Jan 2020 07:24:43 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBo-0004hy-MY; Wed, 08 Jan 2020 15:24:08 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBm-0004ge-Ii for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:24:06 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id d5d35270-322a-11ea-8599-bc764e2007e4; Wed, 08 Jan 2020 15:23:35 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id D0BE3AE79; Wed, 8 Jan 2020 15:23:32 +0000 (UTC) X-Inumbo-ID: d5d35270-322a-11ea-8599-bc764e2007e4 X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:27 +0100 Message-Id: <20200108152328.27194-9-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , 
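
One conversion in patch 7/9 above is worth spelling out, since it is more than a mechanical bool_t/bool swap: deadline_queue_insert() in the rt.c hunk used to count how many elements precede the new one, yet its return value only tells callers whether the element became the new queue head (replq_reinsert() above uses it to decide about re-arming the replenishment timer), so a bool is sufficient. Below is an illustrative sketch of that pattern, using only helpers visible in the hunk (q_elem(), compare_unit_priority(), list_for_each(), list_add_tail()); example_sorted_insert() is a placeholder name.

    /*
     * Insert svc into a deadline-ordered queue and report whether it ended
     * up at the head.  Mirrors deadline_queue_insert() after the change.
     */
    static inline bool example_sorted_insert(struct rt_unit *svc,
                                             struct list_head *elem,
                                             struct list_head *queue)
    {
        struct list_head *iter;
        bool first = true;          /* stays true only if nothing precedes us */

        list_for_each ( iter, queue )
        {
            if ( compare_unit_priority(svc, q_elem(iter)) > 0 )
                break;              /* found the first element svc outranks */
            first = false;          /* something stays ahead of the new entry */
        }
        list_add_tail(elem, iter);  /* insert right before iter */

        return first;
    }
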
List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , Stefano Stabellini , Julien Grall , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Dario Faggioli , Jan Beulich , Volodymyr Babchuk , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" sched_tick_suspend() and sched_tick_resume() only call rcu related functions, so eliminate them and do the rcu_idle_timer*() calling in rcu_idle_[enter|exit](). Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli Acked-by: Julien Grall Acked-by: Andrew Cooper --- xen/arch/arm/domain.c | 6 +++--- xen/arch/x86/acpi/cpu_idle.c | 15 ++++++++------- xen/arch/x86/cpu/mwait-idle.c | 8 ++++---- xen/common/rcupdate.c | 7 +++++-- xen/common/sched/core.c | 12 ------------ xen/include/xen/rcupdate.h | 3 --- xen/include/xen/sched.h | 2 -- 7 files changed, 20 insertions(+), 33 deletions(-) diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index c0a13aa0ab..aa3df3b3ba 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -46,8 +46,8 @@ static void do_idle(void) { unsigned int cpu =3D smp_processor_id(); =20 - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */ process_pending_softirqs(); =20 local_irq_disable(); @@ -58,7 +58,7 @@ static void do_idle(void) } local_irq_enable(); =20 - sched_tick_resume(); + rcu_idle_exit(cpu); } =20 void idle_loop(void) diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c index 5edd1844f4..2676f0d7da 100644 --- a/xen/arch/x86/acpi/cpu_idle.c +++ b/xen/arch/x86/acpi/cpu_idle.c @@ -599,7 +599,8 @@ void update_idle_stats(struct acpi_processor_power *pow= er, =20 static void acpi_processor_idle(void) { - struct acpi_processor_power *power =3D processor_powers[smp_processor_= id()]; + unsigned int cpu =3D smp_processor_id(); + struct acpi_processor_power *power =3D processor_powers[cpu]; struct acpi_processor_cx *cx =3D NULL; int next_state; uint64_t t1, t2 =3D 0; @@ -648,8 +649,8 @@ static void acpi_processor_idle(void) =20 cpufreq_dbs_timer_suspend(); =20 - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. 
*/ process_pending_softirqs(); =20 /* @@ -658,10 +659,10 @@ static void acpi_processor_idle(void) */ local_irq_disable(); =20 - if ( !cpu_is_haltable(smp_processor_id()) ) + if ( !cpu_is_haltable(cpu) ) { local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -786,7 +787,7 @@ static void acpi_processor_idle(void) /* Now in C0 */ power->last_state =3D &power->states[0]; local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -794,7 +795,7 @@ static void acpi_processor_idle(void) /* Now in C0 */ power->last_state =3D &power->states[0]; =20 - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); =20 if ( cpuidle_current_governor->reflect ) diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c index 52413e6da1..f49b04c45b 100644 --- a/xen/arch/x86/cpu/mwait-idle.c +++ b/xen/arch/x86/cpu/mwait-idle.c @@ -755,8 +755,8 @@ static void mwait_idle(void) =20 cpufreq_dbs_timer_suspend(); =20 - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */ process_pending_softirqs(); =20 /* Interrupts must be disabled for C2 and higher transitions. */ @@ -764,7 +764,7 @@ static void mwait_idle(void) =20 if (!cpu_is_haltable(cpu)) { local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -806,7 +806,7 @@ static void mwait_idle(void) if (!(lapic_timer_reliable_states & (1 << cstate))) lapic_timer_on(); =20 - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); =20 if ( cpuidle_current_governor->reflect ) diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c index a56103c6f7..cb712c8690 100644 --- a/xen/common/rcupdate.c +++ b/xen/common/rcupdate.c @@ -459,7 +459,7 @@ int rcu_needs_cpu(int cpu) * periodically poke rcu_pedning(), so that it will invoke the callback * not too late after the end of the grace period. */ -void rcu_idle_timer_start() +static void rcu_idle_timer_start(void) { struct rcu_data *rdp =3D &this_cpu(rcu_data); =20 @@ -475,7 +475,7 @@ void rcu_idle_timer_start() rdp->idle_timer_active =3D true; } =20 -void rcu_idle_timer_stop() +static void rcu_idle_timer_stop(void) { struct rcu_data *rdp =3D &this_cpu(rcu_data); =20 @@ -633,10 +633,13 @@ void rcu_idle_enter(unsigned int cpu) * Se the comment before cpumask_andnot() in rcu_start_batch(). 
*/ smp_mb(); + + rcu_idle_timer_start(); } =20 void rcu_idle_exit(unsigned int cpu) { + rcu_idle_timer_stop(); ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask)); cpumask_clear_cpu(cpu, &rcu_ctrlblk.idle_cpumask); } diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c index 896f82f4d2..d32b9b1baa 100644 --- a/xen/common/sched/core.c +++ b/xen/common/sched/core.c @@ -3268,18 +3268,6 @@ void schedule_dump(struct cpupool *c) rcu_read_unlock(&sched_res_rculock); } =20 -void sched_tick_suspend(void) -{ - rcu_idle_enter(smp_processor_id()); - rcu_idle_timer_start(); -} - -void sched_tick_resume(void) -{ - rcu_idle_timer_stop(); - rcu_idle_exit(smp_processor_id()); -} - void wait(void) { schedule(); diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h index 13850865ed..174d058113 100644 --- a/xen/include/xen/rcupdate.h +++ b/xen/include/xen/rcupdate.h @@ -148,7 +148,4 @@ int rcu_barrier(void); void rcu_idle_enter(unsigned int cpu); void rcu_idle_exit(unsigned int cpu); =20 -void rcu_idle_timer_start(void); -void rcu_idle_timer_stop(void); - #endif /* __XEN_RCUPDATE_H */ diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index d8e961095f..cf7aa39844 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -690,8 +690,6 @@ void sched_destroy_domain(struct domain *d); long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *); long sched_adjust_global(struct xen_sysctl_scheduler_op *); int sched_id(void); -void sched_tick_suspend(void); -void sched_tick_resume(void); void vcpu_wake(struct vcpu *v); long vcpu_yield(void); void vcpu_sleep_nosync(struct vcpu *v); --=20 2.16.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Mar 29 12:33:38 2024 Delivered-To: importer@patchew.org Received-SPF: none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1578497088; cv=none; d=zohomail.com; s=zohoarc; b=c4e+HODSp8DdQatziZE2gsFkGI/1oBQ8tDh3Ga5k6qJRacTPOWRguzgg5AXbUjKv2DzxGcK2W7euyttyRNheYdWp7JW/X3BmBs5v43biUPvMcAgExeRr/YJma+QTtQNdkVBky6pyILEgI/hoCyZAc0I2K2vu3gWnoX+bzSI/P74= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1578497088; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=P29AcP2mIcf62G7t3Z9GVDniXjO202cc3FrkP1FuESA=; b=i6O6s66oWDsVP9N5iMy6OTsqVIpzzTFFNbQnxGr4VIL8VaX7nrVI2vN4XzelNhck7Ybf/HB9eGjd+KZ6/w6RB9jYe+iJvI44FzI19Bvz0u/OS9MCemg612sEEMLAVBrEwDGciGgLuRphXn6MVfp4jx/9VziMdm1mRCYkiukFZpo= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=none (zohomail.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1578497088606620.3289111538567; Wed, 8 Jan 2020 07:24:48 -0800 (PST) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by 
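
The resulting idle-path sequence after patch 8/9, as an illustrative sketch (example_idle() is a placeholder; the real callers are do_idle(), acpi_processor_idle() and mwait_idle() in the hunks above, and only functions appearing there are assumed): the former sched_tick_suspend()/sched_tick_resume() wrappers disappear and the idle loops call the RCU hooks directly, with rcu_idle_enter() now also starting the RCU idle timer and rcu_idle_exit() stopping it.

    static void example_idle(void)
    {
        unsigned int cpu = smp_processor_id();

        rcu_idle_enter(cpu);
        /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
        process_pending_softirqs();

        local_irq_disable();
        if ( cpu_is_haltable(cpu) )
        {
            /* Architecture specific sleep (e.g. wfi/hlt) would go here. */
        }
        local_irq_enable();

        rcu_idle_exit(cpu);
    }
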
lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBt-0004mX-8B; Wed, 08 Jan 2020 15:24:13 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1ipDBr-0004l6-Iv for xen-devel@lists.xenproject.org; Wed, 08 Jan 2020 15:24:11 +0000 Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id d6bff4fe-322a-11ea-a38f-bc764e2007e4; Wed, 08 Jan 2020 15:23:36 +0000 (UTC) Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 2D579AE2E; Wed, 8 Jan 2020 15:23:33 +0000 (UTC) X-Inumbo-ID: d6bff4fe-322a-11ea-a38f-bc764e2007e4 X-Virus-Scanned: by amavisd-new at test-mx.suse.de From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Wed, 8 Jan 2020 16:23:28 +0100 Message-Id: <20200108152328.27194-10-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200108152328.27194-1-jgross@suse.com> References: <20200108152328.27194-1-jgross@suse.com> Subject: [Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , Stefano Stabellini , Julien Grall , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Dario Faggioli , Josh Whitehead , Meng Xu , Jan Beulich , Stewart Hildebrand MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Make use of the const qualifier more often in scheduling code. Signed-off-by: Juergen Gross Reviewed-by: Dario Faggioli Acked-by: Meng Xu --- xen/common/sched/arinc653.c | 4 ++-- xen/common/sched/core.c | 25 +++++++++++----------- xen/common/sched/cpupool.c | 2 +- xen/common/sched/credit.c | 44 ++++++++++++++++++++------------------ xen/common/sched/credit2.c | 52 +++++++++++++++++++++++------------------= ---- xen/common/sched/null.c | 17 ++++++++------- xen/common/sched/rt.c | 32 ++++++++++++++-------------- xen/include/xen/sched.h | 9 ++++---- 8 files changed, 96 insertions(+), 89 deletions(-) diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c index bce8021e3f..5421918221 100644 --- a/xen/common/sched/arinc653.c +++ b/xen/common/sched/arinc653.c @@ -608,7 +608,7 @@ static struct sched_resource * a653sched_pick_resource(const struct scheduler *ops, const struct sched_unit *unit) { - cpumask_t *online; + const cpumask_t *online; unsigned int cpu; =20 /* @@ -639,7 +639,7 @@ a653_switch_sched(struct scheduler *new_ops, unsigned i= nt cpu, void *pdata, void *vdata) { struct sched_resource *sr =3D get_sched_res(cpu); - arinc653_unit_t *svc =3D vdata; + const arinc653_unit_t *svc =3D vdata; =20 ASSERT(!pdata && svc && is_idle_unit(svc->unit)); =20 diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c index d32b9b1baa..944164d78a 100644 --- a/xen/common/sched/core.c +++ b/xen/common/sched/core.c @@ -175,7 +175,7 @@ static inline struct scheduler *dom_scheduler(const str= uct domain *d) =20 static inline struct scheduler *unit_scheduler(const struct sched_unit *un= it) { - struct domain *d =3D unit->domain; + const struct domain *d =3D unit->domain; =20 if ( likely(d->cpupool !=3D NULL) ) return d->cpupool->sched; @@ -202,7 +202,7 @@ static inline struct scheduler *vcpu_scheduler(const st= ruct vcpu *v) } 
#define VCPU2ONLINE(_v) cpupool_domain_master_cpumask((_v)->domain) =20 -static inline void trace_runstate_change(struct vcpu *v, int new_state) +static inline void trace_runstate_change(const struct vcpu *v, int new_sta= te) { struct { uint32_t vcpu:16, domain:16; } d; uint32_t event; @@ -220,7 +220,7 @@ static inline void trace_runstate_change(struct vcpu *v= , int new_state) __trace_var(event, 1/*tsc*/, sizeof(d), &d); } =20 -static inline void trace_continue_running(struct vcpu *v) +static inline void trace_continue_running(const struct vcpu *v) { struct { uint32_t vcpu:16, domain:16; } d; =20 @@ -302,7 +302,8 @@ void sched_guest_idle(void (*idle) (void), unsigned int= cpu) atomic_dec(&per_cpu(sched_urgent_count, cpu)); } =20 -void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate) +void vcpu_runstate_get(const struct vcpu *v, + struct vcpu_runstate_info *runstate) { spinlock_t *lock; s_time_t delta; @@ -324,7 +325,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runs= tate_info *runstate) uint64_t get_cpu_idle_time(unsigned int cpu) { struct vcpu_runstate_info state =3D { 0 }; - struct vcpu *v =3D idle_vcpu[cpu]; + const struct vcpu *v =3D idle_vcpu[cpu]; =20 if ( cpu_online(cpu) && v ) vcpu_runstate_get(v, &state); @@ -392,7 +393,7 @@ static void sched_free_unit_mem(struct sched_unit *unit) =20 static void sched_free_unit(struct sched_unit *unit, struct vcpu *v) { - struct vcpu *vunit; + const struct vcpu *vunit; unsigned int cnt =3D 0; =20 /* Don't count to be released vcpu, might be not in vcpu list yet. */ @@ -522,7 +523,7 @@ static unsigned int sched_select_initial_cpu(const stru= ct vcpu *v) =20 int sched_init_vcpu(struct vcpu *v) { - struct domain *d =3D v->domain; + const struct domain *d =3D v->domain; struct sched_unit *unit; unsigned int processor; =20 @@ -913,7 +914,7 @@ static void sched_unit_move_locked(struct sched_unit *u= nit, unsigned int new_cpu) { unsigned int old_cpu =3D unit->res->master_cpu; - struct vcpu *v; + const struct vcpu *v; =20 rcu_read_lock(&sched_res_rculock); =20 @@ -1090,7 +1091,7 @@ static bool sched_check_affinity_broken(const struct = sched_unit *unit) return false; } =20 -static void sched_reset_affinity_broken(struct sched_unit *unit) +static void sched_reset_affinity_broken(const struct sched_unit *unit) { struct vcpu *v; =20 @@ -1176,7 +1177,7 @@ void restore_vcpu_affinity(struct domain *d) int cpu_disable_scheduler(unsigned int cpu) { struct domain *d; - struct cpupool *c; + const struct cpupool *c; cpumask_t online_affinity; int ret =3D 0; =20 @@ -1251,8 +1252,8 @@ out: static int cpu_disable_scheduler_check(unsigned int cpu) { struct domain *d; - struct vcpu *v; - struct cpupool *c; + const struct vcpu *v; + const struct cpupool *c; =20 c =3D get_sched_res(cpu)->cpupool; if ( c =3D=3D NULL ) diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index c1396cfff4..28d5143e37 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -881,7 +881,7 @@ int cpupool_get_id(const struct domain *d) return d->cpupool ? 
d->cpupool->cpupool_id : CPUPOOLID_NONE; } =20 -cpumask_t *cpupool_valid_cpus(struct cpupool *pool) +const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool) { return pool->cpu_valid; } diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c index a75efbd43d..cdda6fa09b 100644 --- a/xen/common/sched/credit.c +++ b/xen/common/sched/credit.c @@ -233,7 +233,7 @@ static void csched_tick(void *_cpu); static void csched_acct(void *dummy); =20 static inline int -__unit_on_runq(struct csched_unit *svc) +__unit_on_runq(const struct csched_unit *svc) { return !list_empty(&svc->runq_elem); } @@ -349,11 +349,11 @@ boolean_param("tickle_one_idle_cpu", opt_tickle_one_i= dle); =20 DEFINE_PER_CPU(unsigned int, last_tickle_cpu); =20 -static inline void __runq_tickle(struct csched_unit *new) +static inline void __runq_tickle(const struct csched_unit *new) { unsigned int cpu =3D sched_unit_master(new->unit); - struct sched_resource *sr =3D get_sched_res(cpu); - struct sched_unit *unit =3D new->unit; + const struct sched_resource *sr =3D get_sched_res(cpu); + const struct sched_unit *unit =3D new->unit; struct csched_unit * const cur =3D CSCHED_UNIT(curr_on_cpu(cpu)); struct csched_private *prv =3D CSCHED_PRIV(sr->scheduler); cpumask_t mask, idle_mask, *online; @@ -509,7 +509,7 @@ static inline void __runq_tickle(struct csched_unit *ne= w) static void csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu) { - struct csched_private *prv =3D CSCHED_PRIV(ops); + const struct csched_private *prv =3D CSCHED_PRIV(ops); =20 /* * pcpu either points to a valid struct csched_pcpu, or is NULL, if we= 're @@ -652,7 +652,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned= int cpu, =20 #ifndef NDEBUG static inline void -__csched_unit_check(struct sched_unit *unit) +__csched_unit_check(const struct sched_unit *unit) { struct csched_unit * const svc =3D CSCHED_UNIT(unit); struct csched_dom * const sdom =3D svc->sdom; @@ -700,8 +700,8 @@ __csched_vcpu_is_cache_hot(const struct csched_private = *prv, =20 static inline int __csched_unit_is_migrateable(const struct csched_private *prv, - struct sched_unit *unit, - int dest_cpu, cpumask_t *mask) + const struct sched_unit *unit, + int dest_cpu, const cpumask_t *mask) { const struct csched_unit *svc =3D CSCHED_UNIT(unit); /* @@ -725,7 +725,7 @@ _csched_cpu_pick(const struct scheduler *ops, const str= uct sched_unit *unit, /* We must always use cpu's scratch space */ cpumask_t *cpus =3D cpumask_scratch_cpu(cpu); cpumask_t idlers; - cpumask_t *online =3D cpupool_domain_master_cpumask(unit->domain); + const cpumask_t *online =3D cpupool_domain_master_cpumask(unit->domain= ); struct csched_pcpu *spc =3D NULL; int balance_step; =20 @@ -932,7 +932,7 @@ csched_unit_acct(struct csched_private *prv, unsigned i= nt cpu) { struct sched_unit *currunit =3D current->sched_unit; struct csched_unit * const svc =3D CSCHED_UNIT(currunit); - struct sched_resource *sr =3D get_sched_res(cpu); + const struct sched_resource *sr =3D get_sched_res(cpu); const struct scheduler *ops =3D sr->scheduler; =20 ASSERT( sched_unit_master(currunit) =3D=3D cpu ); @@ -1084,7 +1084,7 @@ csched_unit_sleep(const struct scheduler *ops, struct= sched_unit *unit) { struct csched_unit * const svc =3D CSCHED_UNIT(unit); unsigned int cpu =3D sched_unit_master(unit); - struct sched_resource *sr =3D get_sched_res(cpu); + const struct sched_resource *sr =3D get_sched_res(cpu); =20 SCHED_STAT_CRANK(unit_sleep); =20 @@ -1577,7 +1577,7 @@ static void csched_tick(void *_cpu) { unsigned int cpu 
= (unsigned long)_cpu;
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     struct csched_pcpu *spc = CSCHED_PCPU(cpu);
     struct csched_private *prv = CSCHED_PRIV(sr->scheduler);
 
@@ -1604,7 +1604,7 @@ csched_tick(void *_cpu)
 static struct csched_unit *
 csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 {
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     const struct csched_private * const prv = CSCHED_PRIV(sr->scheduler);
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
     struct csched_unit *speer;
@@ -1681,10 +1681,10 @@ static struct csched_unit *
 csched_load_balance(struct csched_private *prv, int cpu,
     struct csched_unit *snext, bool *stolen)
 {
-    struct cpupool *c = get_sched_res(cpu)->cpupool;
+    const struct cpupool *c = get_sched_res(cpu)->cpupool;
     struct csched_unit *speer;
     cpumask_t workers;
-    cpumask_t *online = c->res_valid;
+    const cpumask_t *online = c->res_valid;
     int peer_cpu, first_cpu, peer_node, bstep;
     int node = cpu_to_node(cpu);
 
@@ -2008,7 +2008,7 @@ out:
 }
 
 static void
-csched_dump_unit(struct csched_unit *svc)
+csched_dump_unit(const struct csched_unit *svc)
 {
     struct csched_dom * const sdom = svc->sdom;
 
@@ -2041,10 +2041,11 @@ csched_dump_unit(struct csched_unit *svc)
 static void
 csched_dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct list_head *runq, *iter;
+    const struct list_head *runq;
+    struct list_head *iter;
     struct csched_private *prv = CSCHED_PRIV(ops);
-    struct csched_pcpu *spc;
-    struct csched_unit *svc;
+    const struct csched_pcpu *spc;
+    const struct csched_unit *svc;
     spinlock_t *lock;
     unsigned long flags;
     int loop;
@@ -2132,12 +2133,13 @@ csched_dump(const struct scheduler *ops)
     loop = 0;
     list_for_each( iter_sdom, &prv->active_sdom )
     {
-        struct csched_dom *sdom;
+        const struct csched_dom *sdom;
+
         sdom = list_entry(iter_sdom, struct csched_dom, active_sdom_elem);
 
         list_for_each( iter_svc, &sdom->active_unit )
         {
-            struct csched_unit *svc;
+            const struct csched_unit *svc;
             spinlock_t *lock;
 
             svc = list_entry(iter_svc, struct csched_unit, active_unit_elem);
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 849d254e04..256c1c01fc 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -692,7 +692,7 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
  */
 static int get_fallback_cpu(struct csched2_unit *svc)
 {
-    struct sched_unit *unit = svc->unit;
+    const struct sched_unit *unit = svc->unit;
     unsigned int bs;
 
     SCHED_STAT_CRANK(need_fallback_cpu);
@@ -774,7 +774,7 @@ static int get_fallback_cpu(struct csched2_unit *svc)
  *
  * FIXME: Do pre-calculated division?
  */
-static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
+static void t2c_update(const struct csched2_runqueue_data *rqd, s_time_t time,
                        struct csched2_unit *svc)
 {
     uint64_t val = time * rqd->max_weight + svc->residual;
@@ -783,7 +783,8 @@ static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
     svc->credit -= val;
 }
 
-static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct csched2_unit *svc)
+static s_time_t c2t(const struct csched2_runqueue_data *rqd, s_time_t credit,
+                    const struct csched2_unit *svc)
 {
     return credit * svc->weight / rqd->max_weight;
 }
@@ -792,7 +793,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
  * Runqueue related code.
  */
 
-static inline int unit_on_runq(struct csched2_unit *svc)
+static inline int unit_on_runq(const struct csched2_unit *svc)
 {
     return !list_empty(&svc->runq_elem);
 }
@@ -849,9 +850,9 @@ static inline bool same_core(unsigned int cpua, unsigned int cpub)
 }
 
 static unsigned int
-cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+cpu_to_runqueue(const struct csched2_private *prv, unsigned int cpu)
 {
-    struct csched2_runqueue_data *rqd;
+    const struct csched2_runqueue_data *rqd;
     unsigned int rqi;
 
     for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
@@ -917,7 +918,7 @@ static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
 
     list_for_each( iter, &rqd->svc )
     {
-        struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);
+        const struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);
 
         if ( svc->weight > max_weight )
             max_weight = svc->weight;
@@ -970,7 +971,7 @@ _runq_assign(struct csched2_unit *svc, struct csched2_runqueue_data *rqd)
 }
 
 static void
-runq_assign(const struct scheduler *ops, struct sched_unit *unit)
+runq_assign(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_unit *svc = unit->priv;
 
@@ -997,7 +998,7 @@ _runq_deassign(struct csched2_unit *svc)
 }
 
 static void
-runq_deassign(const struct scheduler *ops, struct sched_unit *unit)
+runq_deassign(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_unit *svc = unit->priv;
 
@@ -1203,7 +1204,7 @@ static void
 update_svc_load(const struct scheduler *ops,
                 struct csched2_unit *svc, int change, s_time_t now)
 {
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
     s_time_t delta, unit_load;
     unsigned int P, W;
 
@@ -1362,11 +1363,11 @@ static inline bool is_preemptable(const struct csched2_unit *svc,
  * Within the same class, the highest difference of credit.
  */
 static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
-                             struct csched2_unit *new, unsigned int cpu)
+                             const struct csched2_unit *new, unsigned int cpu)
 {
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct csched2_unit * cur = csched2_unit(curr_on_cpu(cpu));
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
     s_time_t score;
 
     /*
@@ -1441,7 +1442,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_unit *new, s_time_t now)
     struct sched_unit *unit = new->unit;
     unsigned int bs, cpu = sched_unit_master(unit);
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
-    cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
+    const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
     cpumask_t mask;
 
     ASSERT(new->rqd == rqd);
@@ -2005,7 +2006,7 @@ static void replenish_domain_budget(void* data)
 
 #ifndef NDEBUG
 static inline void
-csched2_unit_check(struct sched_unit *unit)
+csched2_unit_check(const struct sched_unit *unit)
 {
     struct csched2_unit * const svc = csched2_unit(unit);
     struct csched2_dom * const sdom = svc->sdom;
@@ -2541,8 +2542,8 @@ static void migrate(const struct scheduler *ops,
  * - svc is not already flagged to migrate,
  * - if svc is allowed to run on at least one of the pcpus of rqd.
  */
-static bool unit_is_migrateable(struct csched2_unit *svc,
-                                struct csched2_runqueue_data *rqd)
+static bool unit_is_migrateable(const struct csched2_unit *svc,
+                                const struct csched2_runqueue_data *rqd)
 {
     struct sched_unit *unit = svc->unit;
     int cpu = sched_unit_master(unit);
@@ -3076,7 +3077,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 static void
 csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct csched2_unit *svc = unit->priv;
+    const struct csched2_unit *svc = unit->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
@@ -3142,7 +3143,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
     int rt_credit; /* Proposed runtime measured in credits */
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct list_head *runq = &rqd->runq;
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
 
     /*
      * If we're idle, just stay so. Others (or external events)
@@ -3239,7 +3240,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                unsigned int *skipped)
 {
     struct list_head *iter, *temp;
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     struct csched2_unit *snext = NULL;
     struct csched2_private *prv = csched2_priv(sr->scheduler);
     bool yield = false, soft_aff_preempt = false;
@@ -3603,7 +3604,8 @@ static void csched2_schedule(
 }
 
 static void
-csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
+csched2_dump_unit(const struct csched2_private *prv,
+                  const struct csched2_unit *svc)
 {
     printk("[%i.%i] flags=%x cpu=%i",
            svc->unit->domain->domain_id,
@@ -3626,8 +3628,8 @@ csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
 static inline void
 dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct csched2_private *prv = csched2_priv(ops);
-    struct csched2_unit *svc;
+    const struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_unit *svc;
 
     printk("CPU[%02d] runq=%d, sibling={%*pbl}, core={%*pbl}\n",
            cpu, c2r(cpu),
@@ -3695,8 +3697,8 @@ csched2_dump(const struct scheduler *ops)
     loop = 0;
     list_for_each( iter_sdom, &prv->sdom )
     {
-        struct csched2_dom *sdom;
-        struct sched_unit *unit;
+        const struct csched2_dom *sdom;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
 
@@ -3737,7 +3739,7 @@ csched2_dump(const struct scheduler *ops)
     printk("RUNQ:\n");
     list_for_each( iter, runq )
     {
-        struct csched2_unit *svc = runq_elem(iter);
+        const struct csched2_unit *svc = runq_elem(iter);
 
         if ( svc )
         {
diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 3161ac2e62..8c3101649d 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -278,12 +278,12 @@ static void null_free_domdata(const struct scheduler *ops, void *data)
  * So this is not part of any hot path.
  */
 static struct sched_resource *
-pick_res(struct null_private *prv, const struct sched_unit *unit)
+pick_res(const struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
-    cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -375,7 +375,7 @@ static void unit_assign(struct null_private *prv, struct sched_unit *unit,
 }
 
 /* Returns true if a cpu was tickled */
-static bool unit_deassign(struct null_private *prv, struct sched_unit *unit)
+static bool unit_deassign(struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit);
@@ -441,7 +441,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 {
     struct sched_resource *sr = get_sched_res(cpu);
     struct null_private *prv = null_priv(new_ops);
-    struct null_unit *nvc = vdata;
+    const struct null_unit *nvc = vdata;
 
     ASSERT(nvc && is_idle_unit(nvc->unit));
 
@@ -940,7 +940,8 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
         prev->next_task->migrated = false;
 }
 
-static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
+static inline void dump_unit(const struct null_private *prv,
+                             const struct null_unit *nvc)
 {
     printk("[%i.%i] pcpu=%d", nvc->unit->domain->domain_id,
            nvc->unit->unit_id, list_empty(&nvc->waitq_elem) ?
@@ -950,8 +951,8 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
-    struct null_unit *nvc;
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;
 
diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index 6dc4b6a6e5..f90a605643 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -352,7 +352,7 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
+    const struct rt_unit *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -371,8 +371,8 @@ rt_dump(const struct scheduler *ops)
 {
     struct list_head *runq, *depletedq, *replq, *iter;
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
-    struct rt_dom *sdom;
+    const struct rt_unit *svc;
+    const struct rt_dom *sdom;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -408,7 +408,7 @@ rt_dump(const struct scheduler *ops)
     printk("Domain info:\n");
     list_for_each ( iter, &prv->sdom )
     {
-        struct sched_unit *unit;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter, struct rt_dom, sdom_elem);
         printk("\tdomain: %d\n", sdom->dom->domain_id);
@@ -509,7 +509,7 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
 
     list_for_each ( iter, queue )
     {
-        struct rt_unit * iter_svc = (*qelem)(iter);
+        const struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_unit_priority(svc, iter_svc) > 0 )
             break;
         first = false;
@@ -547,7 +547,7 @@ replq_remove(const struct scheduler *ops, struct rt_unit *svc)
      */
     if ( !list_empty(replq) )
     {
-        struct rt_unit *svc_next = replq_elem(replq->next);
+        const struct rt_unit *svc_next = replq_elem(replq->next);
         set_timer(&prv->repl_timer, svc_next->cur_deadline);
     }
     else
@@ -604,7 +604,7 @@ static void
 replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
-    struct rt_unit *rearm_svc = svc;
+    const struct rt_unit *rearm_svc = svc;
     bool rearm = false;
 
     ASSERT( unit_on_replq(svc) );
@@ -640,7 +640,7 @@ static struct sched_resource *
 rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
     cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
@@ -1028,7 +1028,7 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
     cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
 
     list_for_each ( iter, runq )
     {
@@ -1197,15 +1197,15 @@ rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
  * lock is grabbed before calling this function
  */
 static void
-runq_tickle(const struct scheduler *ops, struct rt_unit *new)
+runq_tickle(const struct scheduler *ops, const struct rt_unit *new)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
-    struct rt_unit *iter_svc;
-    struct sched_unit *iter_unit;
+    const struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
+    const struct rt_unit *iter_svc;
+    const struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
     cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
-    cpumask_t *online;
+    const cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
@@ -1379,7 +1379,7 @@ rt_dom_cntl(
 {
     struct rt_private *prv = rt_priv(ops);
     struct rt_unit *svc;
-    struct sched_unit *unit;
+    const struct sched_unit *unit;
     unsigned long flags;
     int rc = 0;
     struct xen_domctl_schedparam_vcpu local_sched;
@@ -1484,7 +1484,7 @@ rt_dom_cntl(
  */
 static void repl_timer_handler(void *data){
     s_time_t now;
-    struct scheduler *ops = data;
+    const struct scheduler *ops = data;
    struct rt_private *prv = rt_priv(ops);
     struct list_head *replq = rt_replq(ops);
     struct list_head *runq = rt_runq(ops);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cf7aa39844..7c5c437247 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -773,7 +773,7 @@ static inline void hypercall_cancel_continuation(struct vcpu *v)
 extern struct domain *domain_list;
 
 /* Caller must hold the domlist_read_lock or domlist_update_lock. */
-static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
+static inline struct domain *first_domain_in_cpupool(const struct cpupool *c)
 {
     struct domain *d;
     for (d = rcu_dereference(domain_list); d && d->cpupool != c;
@@ -781,7 +781,7 @@ static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
         return d;
 }
 static inline struct domain *next_domain_in_cpupool(
-    struct domain *d, struct cpupool *c)
+    struct domain *d, const struct cpupool *c)
 {
     for (d = rcu_dereference(d->next_in_list); d && d->cpupool != c;
          d = rcu_dereference(d->next_in_list));
@@ -925,7 +925,8 @@ void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
 void sched_guest_idle(void (*idle) (void), unsigned int cpu);
 void scheduler_enable(void);
@@ -1056,7 +1057,7 @@ extern enum cpufreq_controller {
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
-cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
+const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
 extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel