From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Christian Brauner, Jonathan Corbet, Shuah Khan
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Juri Lelli, Dietmar Eggemann, Michal Koutný, Giuseppe Scrivano, Waiman Long
Subject: [PATCH-cgroup v6 4/6] cgroup/cpuset: Check partition conflict with housekeeping setup
Date: Tue, 15 Aug 2023 11:30:25 -0400
Message-Id: <20230815153027.633355-5-longman@redhat.com>
In-Reply-To: <20230815153027.633355-1-longman@redhat.com>
References: <20230815153027.633355-1-longman@redhat.com>

A user can pre-configure certain CPUs in an isolated state at boot time
with the "isolcpus" kernel boot command-line option. Those CPUs will not
be in housekeeping_cpumask(HK_TYPE_DOMAIN) and so will not be part of any
sched domain. This may conflict with partition setup at runtime. Such
boot-time isolated CPUs should only be used in an isolated partition.

This patch adds the necessary check and disallows partition setup if the
check fails.
Signed-off-by: Waiman Long
---
 kernel/cgroup/cpuset.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 5b4d72f2f09d..7b83050f5485 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -75,6 +75,7 @@ enum prs_errcode {
 	PERR_NOCPUS,
 	PERR_HOTPLUG,
 	PERR_CPUSEMPTY,
+	PERR_HKEEPING,
 };
 
 static const char * const perr_strings[] = {
@@ -85,6 +86,7 @@ static const char * const perr_strings[] = {
 	[PERR_NOCPUS]    = "Parent unable to distribute cpu downstream",
 	[PERR_HOTPLUG]   = "No cpu available due to hotplug",
 	[PERR_CPUSEMPTY] = "cpuset.cpus is empty",
+	[PERR_HKEEPING]  = "partition config conflicts with housekeeping setup",
 };
 
 struct cpuset {
@@ -1654,6 +1656,26 @@ static void remote_partition_check(struct cpuset *cs, struct cpumask *newmask,
 		rebuild_sched_domains_locked();
 }
 
+/*
+ * prstate_housekeeping_conflict - check for partition & housekeeping conflicts
+ * @prstate: partition root state to be checked
+ * @new_cpus: cpu mask
+ * Return: true if there is conflict, false otherwise
+ *
+ * CPUs outside of housekeeping_cpumask(HK_TYPE_DOMAIN) can only be used in
+ * an isolated partition.
+ */
+static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
+{
+	const struct cpumask *hk_domain = housekeeping_cpumask(HK_TYPE_DOMAIN);
+	bool all_in_hk = cpumask_subset(new_cpus, hk_domain);
+
+	if (!all_in_hk && (prstate != PRS_ISOLATED))
+		return true;
+
+	return false;
+}
+
 /**
  * update_parent_effective_cpumask - update effective_cpus mask of parent cpuset
  * @cs: The cpuset that requests change in partition root state
@@ -1755,6 +1777,9 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
 		if (!cpumask_intersects(xcpus, parent->effective_xcpus))
 			return PERR_INVCPUS;
 
+		if (prstate_housekeeping_conflict(new_prs, xcpus))
+			return PERR_HKEEPING;
+
 		/*
 		 * A parent can be left with no CPU as long as there is no
 		 * task directly associated with the parent partition.
@@ -2338,6 +2363,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 		if (cpumask_empty(trialcs->effective_xcpus)) {
 			invalidate = true;
 			cs->prs_err = PERR_INVCPUS;
+		} else if (prstate_housekeeping_conflict(old_prs, trialcs->effective_xcpus)) {
+			invalidate = true;
+			cs->prs_err = PERR_HKEEPING;
 		} else if (tasks_nocpu_error(parent, cs, trialcs->effective_xcpus)) {
 			invalidate = true;
 			cs->prs_err = PERR_NOCPUS;
@@ -2482,6 +2510,9 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 		if (!trialcs->exclusive_cpus) {
 			invalidate = true;
 			cs->prs_err = PERR_INVCPUS;
+		} else if (prstate_housekeeping_conflict(old_prs, trialcs->effective_xcpus)) {
+			invalidate = true;
+			cs->prs_err = PERR_HKEEPING;
 		} else if (tasks_nocpu_error(parent, cs, trialcs->exclusive_cpus)) {
 			invalidate = true;
 			cs->prs_err = PERR_NOCPUS;
-- 
2.31.1
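
For readers who want to experiment with the semantics of the new check outside
the kernel, here is a minimal, self-contained userspace sketch. It is not part
of the patch: struct cpumask and cpumask_subset() are modeled with a plain
64-bit mask, the *_demo identifiers are made up for illustration, and the
housekeeping mask assumes a hypothetical 8-CPU machine booted with isolcpus=4-7.

/*
 * Standalone model of the housekeeping-conflict rule: a CPU request that
 * reaches outside the housekeeping (HK_TYPE_DOMAIN) set is only acceptable
 * for an isolated partition. All names below are illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum prs_state_demo { PRS_ROOT_DEMO, PRS_ISOLATED_DEMO };

/* Stand-in for housekeeping_cpumask(HK_TYPE_DOMAIN): CPUs 0-3 do
 * housekeeping; CPUs 4-7 were carved out at boot (hypothetical). */
static const uint64_t hk_domain_demo = 0x0fULL;

/* Mirrors the logic of prstate_housekeeping_conflict(): conflict when the
 * request includes non-housekeeping CPUs and the partition is not isolated. */
static bool housekeeping_conflict_demo(enum prs_state_demo prstate,
				       uint64_t new_cpus)
{
	bool all_in_hk = (new_cpus & ~hk_domain_demo) == 0;

	return !all_in_hk && prstate != PRS_ISOLATED_DEMO;
}

int main(void)
{
	uint64_t cpus_4_5 = 0x30ULL;	/* CPUs 4-5: outside housekeeping */
	uint64_t cpus_1_2 = 0x06ULL;	/* CPUs 1-2: inside housekeeping  */

	printf("root partition on CPUs 4-5:     conflict=%d\n",
	       housekeeping_conflict_demo(PRS_ROOT_DEMO, cpus_4_5));
	printf("isolated partition on CPUs 4-5: conflict=%d\n",
	       housekeeping_conflict_demo(PRS_ISOLATED_DEMO, cpus_4_5));
	printf("root partition on CPUs 1-2:     conflict=%d\n",
	       housekeeping_conflict_demo(PRS_ROOT_DEMO, cpus_1_2));
	return 0;
}

Only the first case reports a conflict, which mirrors why the patch returns
PERR_HKEEPING when a non-isolated partition tries to claim boot-time isolated
CPUs.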