From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
    Julien Grall, Stefano Stabellini, Wei Liu, Dario Faggioli
Subject: [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable
Date: Mon, 18 Jan 2021 12:55:16 +0100
Message-Id: <20210118115516.11001-6-jgross@suse.com>
In-Reply-To: <20210118115516.11001-1-jgross@suse.com>
References: <20210118115516.11001-1-jgross@suse.com>

Make /cpupool/*/sched-gran in hypfs writable. This allows the scheduling
granularity to be selected per cpupool. Writing this node is permitted
only while no cpu is assigned to the cpupool. The accepted values are
"cpu", "core" and "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli
Reviewed-by: Jan Beulich
---
V2:
- test user parameters earlier (Jan Beulich)

V3:
- fix build without CONFIG_HYPFS on Arm (Andrew Cooper)
---
 docs/misc/hypfs-paths.pandoc |  5 ++-
 xen/common/sched/cpupool.c   | 70 ++++++++++++++++++++++++++++++------
 2 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index f1ce24d7fe..e86f7d0dbe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,10 +184,13 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
-#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") [w]
 
 The scheduling granularity of a cpupool.
 
+Writing a value is allowed only for cpupools with no cpu assigned and if the
+architecture supports different scheduling granularities.
+
 #### /params/
 
 A directory of runtime parameters.
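For illustration only (not part of the patch): with this change applied, a
dom0 program could drive the new node through libxenhypfs. The sketch below
assumes the xenhypfs_open()/xenhypfs_write()/xenhypfs_close() interface of
libxenhypfs; cpupool 1 is a made-up example id, and the pool must not have
had any cpu assigned yet. The equivalent interactive write should be
"xenhypfs write /cpupool/1/sched-gran core" with the xenhypfs tool.

/*
 * Minimal sketch, not part of this patch: set the scheduling granularity
 * of an empty cpupool through the new writable hypfs node, assuming the
 * libxenhypfs interface.  Cpupool id 1 is hypothetical.
 */
#include <stdio.h>
#include <xenhypfs.h>

int main(void)
{
    xenhypfs_handle *fshdl = xenhypfs_open(NULL, 0);

    if ( !fshdl )
    {
        perror("xenhypfs_open");
        return 1;
    }

    /* Expected to fail with EBUSY while cpus are assigned to the pool. */
    if ( xenhypfs_write(fshdl, "/cpupool/1/sched-gran", "core") )
        perror("writing sched-gran");

    xenhypfs_close(fshdl);
    return 0;
}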
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index e2011367bd..acd26f9449 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -77,7 +77,7 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
-static int __init sched_select_granularity(const char *str)
+static int sched_gran_get(const char *str, enum sched_gran *mode)
 {
     unsigned int i;
 
@@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
     {
         if ( strcmp(sg_name[i].name, str) == 0 )
         {
-            opt_sched_granularity = sg_name[i].mode;
+            *mode = sg_name[i].mode;
             return 0;
         }
     }
 
     return -EINVAL;
 }
+
+static int __init sched_select_granularity(const char *str)
+{
+    return sched_gran_get(str, &opt_sched_granularity);
+}
 custom_param("sched-gran", sched_select_granularity);
+#elif CONFIG_HYPFS
+static int sched_gran_get(const char *str, enum sched_gran *mode)
+{
+    return -EINVAL;
+}
 #endif
 
-static unsigned int __init cpupool_check_granularity(void)
+static unsigned int cpupool_check_granularity(enum sched_gran mode)
 {
     unsigned int cpu;
     unsigned int siblings, gran = 0;
 
-    if ( opt_sched_granularity == SCHED_GRAN_cpu )
+    if ( mode == SCHED_GRAN_cpu )
         return 1;
 
     for_each_online_cpu ( cpu )
     {
-        siblings = cpumask_weight(sched_get_opt_cpumask(opt_sched_granularity,
-                                                        cpu));
+        siblings = cpumask_weight(sched_get_opt_cpumask(mode, cpu));
         if ( gran == 0 )
             gran = siblings;
         else if ( gran != siblings )
             return 0;
     }
 
-    sched_disable_smt_switching = true;
-
     return gran;
 }
 
@@ -126,7 +133,7 @@ static void __init cpupool_gran_init(void)
 
     while ( gran == 0 )
    {
-        gran = cpupool_check_granularity();
+        gran = cpupool_check_granularity(opt_sched_granularity);
 
         if ( gran == 0 )
         {
@@ -152,6 +159,9 @@ static void __init cpupool_gran_init(void)
     if ( fallback )
         warning_add(fallback);
 
+    if ( opt_sched_granularity != SCHED_GRAN_cpu )
+        sched_disable_smt_switching = true;
+
     sched_granularity = gran;
     sched_gran_print(opt_sched_granularity, sched_granularity);
 }
@@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
     return strlen(gran) + 1;
 }
 
+static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
+                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                              unsigned int ulen)
+{
+    const struct hypfs_dyndir_id *data;
+    struct cpupool *cpupool;
+    enum sched_gran gran;
+    unsigned int sched_gran = 0;
+    char name[SCHED_GRAN_NAME_LEN];
+    int ret = 0;
+
+    if ( ulen > SCHED_GRAN_NAME_LEN )
+        return -ENOSPC;
+
+    if ( copy_from_guest(name, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
+        sched_gran = sched_gran_get(name, &gran) ?
+                     0 : cpupool_check_granularity(gran);
+    if ( sched_gran == 0 )
+        return -EINVAL;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    if ( !cpumask_empty(cpupool->cpu_valid) )
+        ret = -EBUSY;
+    else
+    {
+        cpupool->gran = gran;
+        cpupool->sched_gran = sched_gran;
+    }
+
+    return ret;
+}
+
 static const struct hypfs_funcs cpupool_gran_funcs = {
     .enter = hypfs_node_enter,
     .exit = hypfs_node_exit,
     .read = cpupool_gran_read,
-    .write = hypfs_write_deny,
+    .write = cpupool_gran_write,
     .getsize = hypfs_gran_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 
 static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
-                          0, &cpupool_gran_funcs);
+                          SCHED_GRAN_NAME_LEN, &cpupool_gran_funcs);
 static char granstr[SCHED_GRAN_NAME_LEN] = {
    [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
    [SCHED_GRAN_NAME_LEN - 1] = 0
-- 
2.26.2
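
A note on the input format expected by cpupool_gran_write() above: the
written buffer has to be the granularity name followed by exactly one
trailing nul byte, which is what the memchr(name, 0, ulen) == name + ulen - 1
test enforces. The error semantics follow directly from the code: a buffer
longer than SCHED_GRAN_NAME_LEN yields -ENOSPC, an unknown name or a
granularity the current cpu topology cannot provide yields -EINVAL, and
writing while the pool still has cpus assigned yields -EBUSY.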