From nobody Thu May 2 19:09:11 2024
Subject: [PATCH v2 1/2] xen: sched: dom0_vcpus_pin should only affect dom0
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich, George Dunlap
Date: Wed, 03 Aug 2022 11:58:52 +0200
Message-ID: <165952073210.13196.2525249635894768659.stgit@tumbleweed.Wayrath>
In-Reply-To: <165952060175.13196.15449615309231718989.stgit@tumbleweed.Wayrath>
References: <165952060175.13196.15449615309231718989.stgit@tumbleweed.Wayrath>
User-Agent: StGit/1.5

If dom0_vcpus_pin is used, make sure the pinning is only done for dom0
vCPUs, instead of for the hardware domain (which might not be dom0 at
all!).

Suggested-by: Jan Beulich
Signed-off-by: Dario Faggioli
Reviewed-by: Jan Beulich
---
Cc: George Dunlap
---
Changes from v1:
- Check domain_id to be 0, for properly identifying dom0.

Difference from "RFC" [1]:
- New patch.

[1] https://lore.kernel.org/xen-devel/e061a647cd77a36834e2085a96a07caa785c5066.camel@suse.com/
---
 xen/common/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f689b55783..a066c629cb 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -575,7 +575,7 @@ int sched_init_vcpu(struct vcpu *v)
      * Initialize affinity settings. The idler, and potentially
      * domain-0 VCPUs, are pinned onto their respective physical CPUs.
      */
-    if ( is_idle_domain(d) || (is_hardware_domain(d) && opt_dom0_vcpus_pin) )
+    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
         sched_set_affinity(unit, cpumask_of(processor), &cpumask_all);
     else
         sched_set_affinity(unit, &cpumask_all, &cpumask_all);
From nobody Thu May 2 19:09:11 2024
Subject: [PATCH v2 2/2] xen/sched: setup dom0 vCPUs affinity only once
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering, Jan Beulich, George Dunlap
Date: Wed, 03 Aug 2022 11:58:57 +0200
Message-ID: <165952073792.13196.9868875379058225090.stgit@tumbleweed.Wayrath>
In-Reply-To: <165952060175.13196.15449615309231718989.stgit@tumbleweed.Wayrath>
References: <165952060175.13196.15449615309231718989.stgit@tumbleweed.Wayrath>
User-Agent: StGit/1.5

Right now, affinity for dom0 vCPUs is set up in two steps. This is a
problem because, at least in Credit2, unit_insert() sees and uses the
"intermediate" affinity, and places the vCPUs on CPUs where they cannot
run. This, in turn, results in boot hangs, if the "dom0_nodes" parameter
is used.

Fix this by setting up the affinity properly once and for all, in
sched_init_vcpu(), called by vcpu_create().
Note that, unless a soft-affinity is explicitly specified for dom0 (by
using the relaxed mode of "dom0_nodes"), we set it to the default, which
is all CPUs, instead of computing it based on the hard-affinity (if
any). This is because hard and soft affinity should be considered as
independent, user-controlled properties. In fact, if we did derive
dom0's soft-affinity from its boot-time hard-affinity, that computed
value would continue to be used even if the user later changed the
hard-affinity. And this could result in the vCPUs behaving differently
from what the user wanted and expects.

Fixes: dafd936dddbd ("Make credit2 the default scheduler")
Reported-by: Olaf Hering
Signed-off-by: Dario Faggioli
Reviewed-by: Jan Beulich
---
Cc: Jan Beulich
Cc: George Dunlap
---
Changes from v1:
- Fixed the hash of the referred commit in the changelog.
- Check domain_id to be 0, for properly identifying dom0.

Changes from "RFC" [1]:
- Moved the handling of the shim case.
- Added some more explanation (in particular, about why we stick to all
  CPUs for the soft affinity) in both the commit message and the comment.
- Removed a spurious (and unnecessary) credit2 change.

[1] https://lore.kernel.org/xen-devel/e061a647cd77a36834e2085a96a07caa785c5066.camel@suse.com/
---
 xen/common/sched/core.c | 63 +++++++++++++++++++++++++++++-------------------
 1 file changed, 39 insertions(+), 24 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index a066c629cb..ff1ddc7624 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -571,12 +571,46 @@ int sched_init_vcpu(struct vcpu *v)
         return 1;
     }
 
-    /*
-     * Initialize affinity settings. The idler, and potentially
-     * domain-0 VCPUs, are pinned onto their respective physical CPUs.
-     */
-    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
+    if ( is_idle_domain(d) )
+    {
+        /* Idle vCPUs are always pinned onto their respective pCPUs */
+        sched_set_affinity(unit, cpumask_of(processor), &cpumask_all);
+    }
+    else if ( pv_shim && v->vcpu_id == 0 )
+    {
+        /*
+         * PV-shim: vcpus are pinned 1:1. Initially only 1 cpu is online,
+         * others will be dealt with when onlining them. This avoids pinning
+         * a vcpu to a not yet online cpu here.
+         */
+        sched_set_affinity(unit, cpumask_of(0), cpumask_of(0));
+    }
+    else if ( d->domain_id == 0 && opt_dom0_vcpus_pin )
+    {
+        /*
+         * If dom0_vcpus_pin is specified, dom0 vCPUs are pinned 1:1 to
+         * their respective pCPUs too.
+         */
         sched_set_affinity(unit, cpumask_of(processor), &cpumask_all);
+    }
+#ifdef CONFIG_X86
+    else if ( d->domain_id == 0 )
+    {
+        /*
+         * In absence of dom0_vcpus_pin instead, the hard and soft affinity of
+         * dom0 is controlled by the (x86 only) dom0_nodes parameter. At this
+         * point it has been parsed and decoded into the dom0_cpus mask.
+         *
+         * Note that we always honor what user explicitly requested, for both
+         * hard and soft affinity, without doing any dynamic computation of
+         * either of them.
+         */
+        if ( !dom0_affinity_relaxed )
+            sched_set_affinity(unit, &dom0_cpus, &cpumask_all);
+        else
+            sched_set_affinity(unit, &cpumask_all, &dom0_cpus);
+    }
+#endif
     else
         sched_set_affinity(unit, &cpumask_all, &cpumask_all);
 
@@ -3402,29 +3436,10 @@ void wait(void)
 void __init sched_setup_dom0_vcpus(struct domain *d)
 {
     unsigned int i;
-    struct sched_unit *unit;
 
     for ( i = 1; i < d->max_vcpus; i++ )
         vcpu_create(d, i);
 
-    /*
-     * PV-shim: vcpus are pinned 1:1.
-     * Initially only 1 cpu is online, others will be dealt with when
-     * onlining them. This avoids pinning a vcpu to a not yet online cpu here.
-     */
-    if ( pv_shim )
-        sched_set_affinity(d->vcpu[0]->sched_unit,
-                           cpumask_of(0), cpumask_of(0));
-    else
-    {
-        for_each_sched_unit ( d, unit )
-        {
-            if ( !opt_dom0_vcpus_pin && !dom0_affinity_relaxed )
-                sched_set_affinity(unit, &dom0_cpus, NULL);
-            sched_set_affinity(unit, NULL, &dom0_cpus);
-        }
-    }
-
     domain_update_node_affinity(d);
 }
 #endif