From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Christian Brauner, Jonathan Corbet, Shuah Khan
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Juri Lelli, Dietmar Eggemann, Michal Koutný, Giuseppe Scrivano, Waiman Long
Subject: [PATCH-cgroup v7 4/6] cgroup/cpuset: Check partition conflict with housekeeping setup
Date: Thu, 17 Aug 2023 09:24:52 -0400
Message-Id: <20230817132454.755459-5-longman@redhat.com>
In-Reply-To: <20230817132454.755459-1-longman@redhat.com>
References: <20230817132454.755459-1-longman@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

A user can pre-configure certain CPUs in an isolated state at boot time with the "isolcpus" kernel command-line option. Those CPUs will not be in housekeeping_cpumask(HK_TYPE_DOMAIN) and so will not be part of any sched domain. This may conflict with partition setup at runtime: such boot-time isolated CPUs should only be used in an isolated partition. This patch adds the necessary check and disallows the partition setup if the check fails.
Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index bf5486130071..b58bf3d9dd48 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -75,6 +75,7 @@ enum prs_errcode {
 	PERR_NOCPUS,
 	PERR_HOTPLUG,
 	PERR_CPUSEMPTY,
+	PERR_HKEEPING,
 };
 
 static const char * const perr_strings[] = {
@@ -85,6 +86,7 @@ static const char * const perr_strings[] = {
 	[PERR_NOCPUS]    = "Parent unable to distribute cpu downstream",
 	[PERR_HOTPLUG]   = "No cpu available due to hotplug",
 	[PERR_CPUSEMPTY] = "cpuset.cpus is empty",
+	[PERR_HKEEPING]  = "partition config conflicts with housekeeping setup",
 };
 
 struct cpuset {
@@ -1636,6 +1638,26 @@ static void remote_partition_check(struct cpuset *cs, struct cpumask *newmask,
 	rebuild_sched_domains_locked();
 }
 
+/*
+ * prstate_housekeeping_conflict - check for partition & housekeeping conflicts
+ * @prstate: partition root state to be checked
+ * @new_cpus: cpu mask
+ * Return: true if there is conflict, false otherwise
+ *
+ * CPUs outside of housekeeping_cpumask(HK_TYPE_DOMAIN) can only be used in
+ * an isolated partition.
+ */
+static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
+{
+	const struct cpumask *hk_domain = housekeeping_cpumask(HK_TYPE_DOMAIN);
+	bool all_in_hk = cpumask_subset(new_cpus, hk_domain);
+
+	if (!all_in_hk && (prstate != PRS_ISOLATED))
+		return true;
+
+	return false;
+}
+
 /**
  * update_parent_effective_cpumask - update effective_cpus mask of parent cpuset
  * @cs: The cpuset that requests change in partition root state
@@ -1737,6 +1759,9 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
 	if (!cpumask_intersects(xcpus, parent->effective_xcpus))
 		return PERR_INVCPUS;
 
+	if (prstate_housekeeping_conflict(new_prs, xcpus))
+		return PERR_HKEEPING;
+
 	/*
 	 * A parent can be left with no CPU as long as there is no
 	 * task directly associated with the parent partition.
@@ -2324,6 +2349,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cpumask_empty(trialcs->effective_xcpus)) {
 		invalidate = true;
 		cs->prs_err = PERR_INVCPUS;
+	} else if (prstate_housekeeping_conflict(old_prs, trialcs->effective_xcpus)) {
+		invalidate = true;
+		cs->prs_err = PERR_HKEEPING;
 	} else if (tasks_nocpu_error(parent, cs, trialcs->effective_xcpus)) {
 		invalidate = true;
 		cs->prs_err = PERR_NOCPUS;
@@ -2456,6 +2484,9 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cpumask_empty(trialcs->exclusive_cpus)) {
 		invalidate = true;
 		cs->prs_err = PERR_INVCPUS;
+	} else if (prstate_housekeeping_conflict(old_prs, trialcs->effective_xcpus)) {
+		invalidate = true;
+		cs->prs_err = PERR_HKEEPING;
 	} else if (tasks_nocpu_error(parent, cs, trialcs->effective_xcpus)) {
 		invalidate = true;
 		cs->prs_err = PERR_NOCPUS;
-- 
2.31.1