From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Wei Liu, Anthony PERARD, Juergen Gross,
    Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH 6/6] xen/x86: Add topology generator
Date: Tue, 9 Jan 2024 15:38:34 +0000
Message-Id: <20240109153834.4192-7-alejandro.vallejo@cloud.com>
In-Reply-To: <20240109153834.4192-1-alejandro.vallejo@cloud.com>
References: <20240109153834.4192-1-alejandro.vallejo@cloud.com>

This allows the toolstack to synthesise sensible topologies for guests.
In particular, this patch causes x2APIC IDs to be packed according to
the topology now exposed to the guests on leaf 0xb.

Signed-off-by: Alejandro Vallejo
---
 tools/include/xenguest.h        |  15 ++++
 tools/libs/guest/xg_cpuid_x86.c | 144 ++++++++++++++++++++------------
 xen/arch/x86/cpu-policy.c       |   6 +-
 3 files changed, 107 insertions(+), 58 deletions(-)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 4e9078fdee..f0043c559b 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -843,5 +843,20 @@ enum xc_static_cpu_featuremask {
     XC_FEATUREMASK_HVM_HAP_DEF,
 };
 const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
+
+/**
+ * Synthesise topology information in `p` given high-level constraints
+ *
+ * Topology is given in various fields across several leaves, some of
+ * which are vendor-specific. This function uses the policy itself to
+ * derive such leaves from threads/core and cores/package.
+ *
+ * @param p                 CPU policy of the domain.
+ * @param threads_per_core  threads/core. Doesn't need to be a power of 2.
+ * @param cores_per_pkg     cores/package. Doesn't need to be a power of 2.
+ */
+void xc_topo_from_parts(struct cpu_policy *p,
+                        uint32_t threads_per_core, uint32_t cores_per_pkg);
+
 #endif /* __i386__ || __x86_64__ */
 #endif /* XENGUEST_H */
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 4453178100..7a622721be 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -584,7 +584,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     bool hvm;
     xc_domaininfo_t di;
     struct xc_cpu_policy *p = xc_cpu_policy_init();
-    unsigned int i, nr_leaves = ARRAY_SIZE(p->leaves), nr_msrs = 0;
+    unsigned int nr_leaves = ARRAY_SIZE(p->leaves), nr_msrs = 0;
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
     uint32_t len = ARRAY_SIZE(host_featureset);
@@ -727,60 +727,8 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     }
     else
     {
-        /*
-         * Topology for HVM guests is entirely controlled by Xen. For now, we
-         * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT.
-         */
-        p->policy.basic.htt = true;
-        p->policy.extd.cmp_legacy = false;
-
-        /*
-         * Leaf 1 EBX[23:16] is Maximum Logical Processors Per Package.
-         * Update to reflect vLAPIC_ID = vCPU_ID * 2, but make sure to avoid
-         * overflow.
-         */
-        if ( !p->policy.basic.lppp )
-            p->policy.basic.lppp = 2;
-        else if ( !(p->policy.basic.lppp & 0x80) )
-            p->policy.basic.lppp *= 2;
-
-        switch ( p->policy.x86_vendor )
-        {
-        case X86_VENDOR_INTEL:
-            for ( i = 0; (p->policy.cache.subleaf[i].type &&
-                          i < ARRAY_SIZE(p->policy.cache.raw)); ++i )
-            {
-                p->policy.cache.subleaf[i].cores_per_package =
-                    (p->policy.cache.subleaf[i].cores_per_package << 1) | 1;
-                p->policy.cache.subleaf[i].threads_per_cache = 0;
-            }
-            break;
-
-        case X86_VENDOR_AMD:
-        case X86_VENDOR_HYGON:
-            /*
-             * Leaf 0x80000008 ECX[15:12] is ApicIdCoreSize.
-             * Leaf 0x80000008 ECX[7:0] is NumberOfCores (minus one).
-             * Update to reflect vLAPIC_ID = vCPU_ID * 2. But avoid
-             * - overflow,
-             * - going out of sync with leaf 1 EBX[23:16],
-             * - incrementing ApicIdCoreSize when it's zero (which changes the
-             *   meaning of bits 7:0).
-             *
-             * UPDATE: I addition to avoiding overflow, some
-             * proprietary operating systems have trouble with
-             * apic_id_size values greater than 7. Limit the value to
-             * 7 for now.
-             */
-            if ( p->policy.extd.nc < 0x7f )
-            {
-                if ( p->policy.extd.apic_id_size != 0 && p->policy.extd.apic_id_size < 0x7 )
-                    p->policy.extd.apic_id_size++;
-
-                p->policy.extd.nc = (p->policy.extd.nc << 1) | 1;
-            }
-            break;
-        }
+        /* TODO: Expose the ability to choose a custom topology for HVM/PVH */
+        xc_topo_from_parts(&p->policy, 1, di.max_vcpu_id + 1);
     }
 
     nr_leaves = ARRAY_SIZE(p->leaves);
@@ -1028,3 +976,89 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
 
     return false;
 }
+
+static uint32_t order(uint32_t n)
+{
+    return 32 - __builtin_clz(n);
+}
+
+void xc_topo_from_parts(struct cpu_policy *p,
+                        uint32_t threads_per_core, uint32_t cores_per_pkg)
+{
+    uint32_t threads_per_pkg = threads_per_core * cores_per_pkg;
+    uint32_t apic_id_size;
+
+    if ( p->basic.max_leaf < 0xb )
+        p->basic.max_leaf = 0xb;
+
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
+
+    /* Thread level */
+    p->topo.subleaf[0].nr_logical = threads_per_core;
+    p->topo.subleaf[0].id_shift = 0;
+    p->topo.subleaf[0].level = 0;
+    p->topo.subleaf[0].type = 1;
+    if ( threads_per_core > 1 )
+        p->topo.subleaf[0].id_shift = order(threads_per_core - 1);
+
+    /* Core level */
+    p->topo.subleaf[1].nr_logical = cores_per_pkg;
+    if ( p->x86_vendor == X86_VENDOR_INTEL )
+        p->topo.subleaf[1].nr_logical = threads_per_pkg;
+    p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift;
+    p->topo.subleaf[1].level = 1;
+    p->topo.subleaf[1].type = 2;
+    if ( cores_per_pkg > 1 )
+        p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1);
+
+    apic_id_size = p->topo.subleaf[1].id_shift;
+
+    /*
+     * Contrary to what the name might seem to imply, HTT is an enabler for
+     * SMP and there's no harm in setting it even with a single vCPU.
+     */
+    p->basic.htt = true;
+
+    p->basic.lppp = 0xff;
+    if ( threads_per_pkg < 0xff )
+        p->basic.lppp = threads_per_pkg;
+
+    switch ( p->x86_vendor )
+    {
+    case X86_VENDOR_INTEL:
+    {
+        struct cpuid_cache_leaf *sl = p->cache.subleaf;
+
+        for ( size_t i = 0; i < ARRAY_SIZE(p->cache.raw) && sl->type;
+              i++, sl++ )
+        {
+            sl->cores_per_package = cores_per_pkg - 1;
+            sl->threads_per_cache = threads_per_core - 1;
+            if ( sl->type == 3 /* unified cache */ )
+                sl->threads_per_cache = threads_per_pkg - 1;
+        }
+        break;
+    }
+
+    case X86_VENDOR_AMD:
+    case X86_VENDOR_HYGON:
+        /* Expose p->basic.lppp */
+        p->extd.cmp_legacy = true;
+
+        /* Clip NC to the maximum value it can hold */
+        p->extd.nc = 0xff;
+        if ( threads_per_pkg <= 0xff )
+            p->extd.nc = threads_per_pkg - 1;
+
+        /* TODO: Expose leaf e1E */
+        p->extd.topoext = false;
+
+        /*
+         * Clip apic_id_size to 8, as that's what high core-count machines do.
+         *
+         * That's what AMD EPYC 9654 does with >256 CPUs.
+         */
+        p->extd.apic_id_size = 8;
+        if ( apic_id_size < 8 )
+            p->extd.apic_id_size = apic_id_size;
+
+        break;
+    }
+}
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 76efb050ed..679d1fe4fa 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -278,9 +278,6 @@ static void recalculate_misc(struct cpu_policy *p)
 
     p->basic.raw[0x8] = EMPTY_LEAF;
 
-    /* TODO: Rework topology logic. */
-    memset(p->topo.raw, 0, sizeof(p->topo.raw));
-
     p->basic.raw[0xc] = EMPTY_LEAF;
 
     p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
@@ -387,6 +384,9 @@ static void __init calculate_host_policy(void)
     recalculate_xstate(p);
     recalculate_misc(p);
 
+    /* Wipe host topology. Toolstack is expected to synthesise a sensible one */
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
+
     /* When vPMU is disabled, drop it from the host policy. */
     if ( vpmu_mode == XENPMU_MODE_OFF )
         p->basic.raw[0xa] = EMPTY_LEAF;
-- 
2.34.1