From nobody Sun Nov 24 17:07:33 2024
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v3 1/6] x86/vlapic: Move lapic migration checks to the check hooks
Date: Wed, 29 May 2024 15:32:30 +0100

While doing this, factor out checks common to architectural and hidden state.

Signed-off-by: Alejandro Vallejo
Reviewed-by: Roger Pau Monné
---
v3:
  * Moved from v2/patch3.
  * Added check hook for the architectural state as well.
  * Use domain_vcpu() rather than the previous open coded checks for vcpu range.
---
 xen/arch/x86/hvm/vlapic.c | 81 +++++++++++++++++++++++++--------------
 1 file changed, 53 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 9cfc82666ae5..a0df62b5ec0a 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1553,60 +1553,85 @@ static void lapic_load_fixup(struct vlapic *vlapic)
             v, vlapic->loaded.id, vlapic->loaded.ldr, good_ldr);
 }
 
-static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
-{
-    unsigned int vcpuid = hvm_load_instance(h);
-    struct vcpu *v;
-    struct vlapic *s;
 
+static int lapic_check_common(const struct domain *d, unsigned int vcpuid)
+{
     if ( !has_vlapic(d) )
         return -ENODEV;
 
     /* Which vlapic to load? */
-    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    if ( !domain_vcpu(d, vcpuid) )
     {
         dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no apic%u\n",
                 d->domain_id, vcpuid);
         return -EINVAL;
     }
-    s = vcpu_vlapic(v);
 
-    if ( hvm_load_entry_zeroextend(LAPIC, h, &s->hw) != 0 )
+    return 0;
+}
+
+static int cf_check lapic_check_hidden(const struct domain *d,
+                                       hvm_domain_context_t *h)
+{
+    unsigned int vcpuid = hvm_load_instance(h);
+    struct hvm_hw_lapic s;
+    int rc;
+
+    if ( (rc = lapic_check_common(d, vcpuid)) )
+        return rc;
+
+    if ( hvm_load_entry_zeroextend(LAPIC, h, &s) != 0 )
+        return -ENODATA;
+
+    /* EN=0 with EXTD=1 is illegal */
+    if ( (s.apic_base_msr & (APIC_BASE_ENABLE | APIC_BASE_EXTD)) ==
+         APIC_BASE_EXTD )
         return -EINVAL;
 
+    return 0;
+}
+
+static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
+{
+    unsigned int vcpuid = hvm_load_instance(h);
+    struct vcpu *v = d->vcpu[vcpuid];
+    struct vlapic *s = vcpu_vlapic(v);
+
+    if ( hvm_load_entry_zeroextend(LAPIC, h, &s->hw) != 0 )
+        BUG();
+
     s->loaded.hw = 1;
     if ( s->loaded.regs )
         lapic_load_fixup(s);
 
-    if ( !(s->hw.apic_base_msr & APIC_BASE_ENABLE) &&
-         unlikely(vlapic_x2apic_mode(s)) )
-        return -EINVAL;
-
     hvm_update_vlapic_mode(v);
 
     return 0;
 }
 
-static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
+static int cf_check lapic_check_regs(const struct domain *d,
+                                     hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
-    struct vcpu *v;
-    struct vlapic *s;
+    int rc;
 
-    if ( !has_vlapic(d) )
-        return -ENODEV;
+    if ( (rc = lapic_check_common(d, vcpuid)) )
+        return rc;
 
-    /* Which vlapic to load? */
-    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
-    {
-        dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no apic%u\n",
-                d->domain_id, vcpuid);
-        return -EINVAL;
-    }
-    s = vcpu_vlapic(v);
+    if ( !hvm_get_entry(LAPIC_REGS, h) )
+        return -ENODATA;
+
+    return 0;
+}
+
+static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
+{
+    unsigned int vcpuid = hvm_load_instance(h);
+    struct vcpu *v = d->vcpu[vcpuid];
+    struct vlapic *s = vcpu_vlapic(v);
 
     if ( hvm_load_entry(LAPIC_REGS, h, s->regs) != 0 )
-        return -EINVAL;
+        BUG();
 
     s->loaded.id = vlapic_get_reg(s, APIC_ID);
     s->loaded.ldr = vlapic_get_reg(s, APIC_LDR);
@@ -1623,9 +1648,9 @@ static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_check_hidden,
                           lapic_load_hidden, 1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_check_regs,
                           lapic_load_regs, 1, HVMSR_PER_VCPU);
 
 int vlapic_init(struct vcpu *v)
-- 
2.34.1

From nobody Sun Nov 24 17:07:33 2024
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v3 2/6] xen/x86: Add initial x2APIC ID to the per-vLAPIC save area
Date: Wed, 29 May 2024 15:32:31 +0100
Message-Id: <9912423b866ed696c375e0a51954d363c3706470.1716976271.git.alejandro.vallejo@cloud.com>

This allows the initial x2APIC ID to be sent on the migration stream. The
hardcoded mapping x2apic_id=2*vcpu_id is maintained for the time being.

Given the vlapic data is zero-extended on restore, fix up migrations from
hosts without the field by setting it to the old convention if zero.

x2APIC IDs are calculated from the CPU policy where the guest topology is
defined. For the time being, the function simply returns the old
relationship, but will eventually return results consistent with the
topology.

Signed-off-by: Alejandro Vallejo
Reviewed-by: Roger Pau Monné
---
v3:
  * Added rsvd_zero check to the check hook (introduced in v3/patch1).
  * Set APIC ID properly during policy update if the APIC is already in
    x2apic mode, ensuring its LDR is updated too in that case.
  * Fixed typo in variable for x86_x2apic_id_from_vcpu_id().
    * Missed due to being mid-series.
  * Rewrote the comment on CPUID leaf 0xb.
  * Rewrote the comment on x86_x2apic_id_from_vcpu_id()
---
 xen/arch/x86/cpuid.c                   | 14 ++++-----
 xen/arch/x86/hvm/vlapic.c              | 41 ++++++++++++++++++++++++--
 xen/arch/x86/include/asm/hvm/hvm.h     |  2 ++
 xen/arch/x86/include/asm/hvm/vlapic.h  |  2 ++
 xen/include/public/arch-x86/hvm/save.h |  2 ++
 xen/include/xen/lib/x86/cpu-policy.h   |  9 ++++++
 xen/lib/x86/policy.c                   | 11 +++++++
 7 files changed, 70 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 7a38e032146a..ebcdbc5cbc5d 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -139,10 +139,9 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         const struct cpu_user_regs *regs;
 
     case 0x1:
-        /* TODO: Rework topology logic. */
         res->b &= 0x00ffffffu;
         if ( is_hvm_domain(d) )
-            res->b |= (v->vcpu_id * 2) << 24;
+            res->b |= vlapic_x2apic_id(vcpu_vlapic(v)) << 24;
 
         /* TODO: Rework vPMU control in terms of toolstack choices. */
         if ( vpmu_available(v) &&
@@ -312,18 +311,15 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
 
     case 0xb:
         /*
-         * In principle, this leaf is Intel-only.  In practice, it is tightly
-         * coupled with x2apic, and we offer an x2apic-capable APIC emulation
-         * to guests on AMD hardware as well.
-         *
-         * TODO: Rework topology logic.
+         * Don't expose topology information to PV guests.  Exposed on HVM
+         * along with x2APIC because they are tightly coupled.
          */
-        if ( p->basic.x2apic )
+        if ( is_hvm_domain(d) && p->basic.x2apic )
         {
             *(uint8_t *)&res->c = subleaf;
 
             /* Fix the x2APIC identifier. */
-            res->d = v->vcpu_id * 2;
+            res->d = vlapic_x2apic_id(vcpu_vlapic(v));
         }
         break;
 
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index a0df62b5ec0a..626a6258a4d4 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1072,7 +1072,7 @@ static uint32_t x2apic_ldr_from_id(uint32_t id)
 static void set_x2apic_id(struct vlapic *vlapic)
 {
     const struct vcpu *v = vlapic_vcpu(vlapic);
-    uint32_t apic_id = v->vcpu_id * 2;
+    uint32_t apic_id = vlapic->hw.x2apic_id;
     uint32_t apic_ldr = x2apic_ldr_from_id(apic_id);
 
     /*
@@ -1086,6 +1086,26 @@ static void set_x2apic_id(struct vlapic *vlapic)
     vlapic_set_reg(vlapic, APIC_LDR, apic_ldr);
 }
 
+void vlapic_cpu_policy_changed(struct vcpu *v)
+{
+    struct vlapic *vlapic = vcpu_vlapic(v);
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
+
+    /*
+     * Don't override the initial x2APIC ID if we have migrated it or
+     * if the domain doesn't have vLAPIC at all.
+     */
+    if ( !has_vlapic(v->domain) || vlapic->loaded.hw )
+        return;
+
+    vlapic->hw.x2apic_id = x86_x2apic_id_from_vcpu_id(cp, v->vcpu_id);
+
+    if ( vlapic_x2apic_mode(vlapic) )
+        set_x2apic_id(vlapic); /* Set the APIC ID _and_ the LDR */
+    else
+        vlapic_set_reg(vlapic, APIC_ID, SET_xAPIC_ID(vlapic->hw.x2apic_id));
+}
+
 int guest_wrmsr_apic_base(struct vcpu *v, uint64_t val)
 {
     const struct cpu_policy *cp = v->domain->arch.cpu_policy;
@@ -1452,7 +1472,7 @@ void vlapic_reset(struct vlapic *vlapic)
     if ( v->vcpu_id == 0 )
         vlapic->hw.apic_base_msr |= APIC_BASE_BSP;
 
-    vlapic_set_reg(vlapic, APIC_ID, (v->vcpu_id * 2) << 24);
+    vlapic_set_reg(vlapic, APIC_ID, SET_xAPIC_ID(vlapic->hw.x2apic_id));
     vlapic_do_init(vlapic);
 }
 
@@ -1520,6 +1540,16 @@ static void lapic_load_fixup(struct vlapic *vlapic)
     const struct vcpu *v = vlapic_vcpu(vlapic);
     uint32_t good_ldr = x2apic_ldr_from_id(vlapic->loaded.id);
 
+    /*
+     * Loading record without hw.x2apic_id in the save stream, calculate using
+     * the traditional "vcpu_id * 2" relation.  There's an implicit assumption
+     * that vCPU0 always has x2APIC0, which is true for the old relation, and
+     * still holds under the new x2APIC generation algorithm.  While that case
+     * goes through the conditional it's benign because it still maps to zero.
+     */
+    if ( !vlapic->hw.x2apic_id )
+        vlapic->hw.x2apic_id = v->vcpu_id * 2;
+
     /* Skip fixups on xAPIC mode, or if the x2APIC LDR is already correct */
     if ( !vlapic_x2apic_mode(vlapic) ||
          (vlapic->loaded.ldr == good_ldr) )
@@ -1588,6 +1618,13 @@ static int cf_check lapic_check_hidden(const struct domain *d,
          APIC_BASE_EXTD )
         return -EINVAL;
 
+    /*
+     * Fail migrations from newer versions of Xen where
+     * rsvd_zero is interpreted as something else.
+     */
+    if ( s.rsvd_zero )
+        return -EINVAL;
+
     return 0;
 }
 
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 1c01e22c8e62..746b4739f53f 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 struct pirq; /* needed by pi_update_irte */
 
@@ -448,6 +449,7 @@ static inline void hvm_update_guest_efer(struct vcpu *v)
 static inline void hvm_cpuid_policy_changed(struct vcpu *v)
 {
     alternative_vcall(hvm_funcs.cpuid_policy_changed, v);
+    vlapic_cpu_policy_changed(v);
 }
 
 static inline void hvm_set_tsc_offset(struct vcpu *v, uint64_t offset,
diff --git a/xen/arch/x86/include/asm/hvm/vlapic.h b/xen/arch/x86/include/asm/hvm/vlapic.h
index 2c4ff94ae7a8..34f23cd38a20 100644
--- a/xen/arch/x86/include/asm/hvm/vlapic.h
+++ b/xen/arch/x86/include/asm/hvm/vlapic.h
@@ -44,6 +44,7 @@
 #define vlapic_xapic_mode(vlapic)                       \
     (!vlapic_hw_disabled(vlapic) &&                     \
      !((vlapic)->hw.apic_base_msr & APIC_BASE_EXTD))
+#define vlapic_x2apic_id(vlapic) ((vlapic)->hw.x2apic_id)
 
 /*
  * Generic APIC bitmap vector update & search routines.
@@ -107,6 +108,7 @@ int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool force_ack);
 
 int vlapic_init(struct vcpu *v);
 void vlapic_destroy(struct vcpu *v);
+void vlapic_cpu_policy_changed(struct vcpu *v);
 
 void vlapic_reset(struct vlapic *vlapic);
 
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 7ecacadde165..1c2ec669ffc9 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -394,6 +394,8 @@ struct hvm_hw_lapic {
     uint32_t disabled; /* VLAPIC_xx_DISABLED */
     uint32_t timer_divisor;
     uint64_t tdt_msr;
+    uint32_t x2apic_id;
+    uint32_t rsvd_zero;
 };
 
 DECLARE_HVM_SAVE_TYPE(LAPIC, 5, struct hvm_hw_lapic);
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index d5e447e9dc06..392320b9adbe 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -542,6 +542,15 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err);
 
+/**
+ * Calculates the x2APIC ID of a vCPU given a CPU policy
+ *
+ * @param p   CPU policy of the domain.
+ * @param id  vCPU ID of the vCPU.
+ * @returns x2APIC ID of the vCPU.
+ */
+uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id);
+
 #endif /* !XEN_LIB_X86_POLICIES_H */
 
 /*
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index f033d22785be..b70b22d55fcf 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,6 +2,17 @@
 
 #include
 
+uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id)
+{
+    /*
+     * TODO: Derive x2APIC ID from the topology information inside `p`
+     *       rather than from the vCPU ID alone.  This bodge is a temporary
+     *       measure until all infra is in place to retrieve or derive the
+     *       initial x2APIC ID from migrated domains.
+     */
+    return id * 2;
+}
+
 int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err)
-- 
2.34.1

From nobody Sun Nov 24 17:07:33 2024
lists.xenproject.org with outflank-mailman.732085.1137934 (Exim 4.92) (envelope-from ) id 1sCKMC-00012t-Tl; Wed, 29 May 2024 14:32:48 +0000 Received: by outflank-mailman (output) from mailman id 732085.1137934; Wed, 29 May 2024 14:32:48 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sCKMC-00012c-PB; Wed, 29 May 2024 14:32:48 +0000 Received: by outflank-mailman (input) for mailman id 732085; Wed, 29 May 2024 14:32:47 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sCKMB-0000Tu-FY for xen-devel@lists.xenproject.org; Wed, 29 May 2024 14:32:47 +0000 Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com [2a00:1450:4864:20::530]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 50b6373a-1dc8-11ef-b4bb-af5377834399; Wed, 29 May 2024 16:32:45 +0200 (CEST) Received: by mail-ed1-x530.google.com with SMTP id 4fb4d7f45d1cf-5789733769dso24214a12.1 for ; Wed, 29 May 2024 07:32:45 -0700 (PDT) Received: from EMEAENGAAD19049.citrite.net ([217.156.233.157]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a647b827400sm74614166b.69.2024.05.29.07.32.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 29 May 2024 07:32:44 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 50b6373a-1dc8-11ef-b4bb-af5377834399 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1716993165; x=1717597965; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date 
:message-id:reply-to; bh=Ej2rPGHuJHoDIQ9G5DQv2lvCsMSRzdoop6CVxyhmmpE=; b=kfb3ILmihg9O58dmrYwLURACOw1LnUiC0e3/dSUNC4iGKQROEX1f9Xz9U9evB4QVHc 7/N1Kc7Ud3xsduISzqeFRwVJsSzGfQrTuL92dyXQoPiOXTM1/Li/k+PRj+GZFmnlip2Y fcWRwA+jUr0opqKpRt4/foreLtCuIakE4YopY= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1716993165; x=1717597965; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Ej2rPGHuJHoDIQ9G5DQv2lvCsMSRzdoop6CVxyhmmpE=; b=P1UYqOEOCPCLmiCU8/0mjXK20I/L1iHCOORnHBj2/itm2VzvSCUUykLdrSvzyGdqOY gPEZEq8laDpQPS0FwGUUD0KAjkJG8t4Pu5ZtBsdXGArKi6EDAQ80NVa/HHrc/bs7TtlC Lz44VaJjaTHRtQHdddFBXTHscchIAuztppqB6bfkHE/JWXsKSAq8ocqh8Io5EVeF+r9D qoYBYCeRLSS22xmxWnqSCI5/M7clqMiK5DzZ4mPu9inUbsnioM4TEI+7SpggXMbG0XaD o4bK96/WdldwMbEB5PMzL1bMFjPfJBDUrQHON821ich2L+KNJ1m8le7tYWuO/JXXPM4k haSA== X-Gm-Message-State: AOJu0YyYI4QiI6BjuPm5l0HmQvu8vXgQ9hdpsaK36IKQT1C87QC5QUB7 DWEHu9kokwlttuqdF6r1cAh5vHde4xyVthzcYMILofvPz4AwYVYCBqQ98rcSV6Vl5WatxFBRP18 / X-Google-Smtp-Source: AGHT+IHeS6/oDzvSmtImY2NDIfAASj8WslOmXwjKcdgQTIesJwXIWNp5FDLSzbKkBCsZx8a/GcTN4w== X-Received: by 2002:a17:906:1555:b0:a5a:24ab:f5e with SMTP id a640c23a62f3a-a642d7d7473mr185870466b.25.1716993164998; Wed, 29 May 2024 07:32:44 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Jan Beulich , Andrew Cooper , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Anthony PERARD Subject: [PATCH v3 3/6] tools/hvmloader: Retrieve (x2)APIC IDs from the APs themselves Date: Wed, 29 May 2024 15:32:32 +0100 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @cloud.com) X-ZM-MESSAGEID: 1716993187109100006 Content-Type: text/plain; charset="utf-8" Make it so the APs expose their own APIC IDs in a LUT. 
We can use that LUT = to populate the MADT, decoupling the algorithm that relates CPU IDs and APIC I= Ds from hvmloader. While at this also remove ap_callin, as writing the APIC ID may serve the s= ame purpose. Signed-off-by: Alejandro Vallejo --- v3: * Moved ACCESS_ONCE() to common-macros.h * 8bit APIC IDs clipping at 255 was a bogus assumption. Stop relying on i= t. * APIC ID taken from leaf 0xb instead when the x2apic feature is suppor= ted. * Assert APIC IDs read by the APs are never zero, as that's always the BS= P. * Added comment about how CPU_TO_X2APICID serves as cross-vcpu synchroniz= er. --- tools/firmware/hvmloader/config.h | 6 ++- tools/firmware/hvmloader/hvmloader.c | 4 +- tools/firmware/hvmloader/smp.c | 54 ++++++++++++++++++++----- tools/include/xen-tools/common-macros.h | 5 +++ 4 files changed, 56 insertions(+), 13 deletions(-) diff --git a/tools/firmware/hvmloader/config.h b/tools/firmware/hvmloader/c= onfig.h index cd716bf39245..213ac1f28e17 100644 --- a/tools/firmware/hvmloader/config.h +++ b/tools/firmware/hvmloader/config.h @@ -4,6 +4,8 @@ #include #include =20 +#include + enum virtual_vga { VGA_none, VGA_std, VGA_cirrus, VGA_pt }; extern enum virtual_vga virtual_vga; =20 @@ -48,8 +50,10 @@ extern uint8_t ioapic_version; =20 #define IOAPIC_ID 0x01 =20 +extern uint32_t CPU_TO_X2APICID[HVM_MAX_VCPUS]; + #define LAPIC_BASE_ADDRESS 0xfee00000 -#define LAPIC_ID(vcpu_id) ((vcpu_id) * 2) +#define LAPIC_ID(vcpu_id) (CPU_TO_X2APICID[(vcpu_id)]) =20 #define PCI_ISA_DEVFN 0x08 /* dev 1, fn 0 */ #define PCI_ISA_IRQ_MASK 0x0c20U /* ISA IRQs 5,10,11 are PCI connected = */ diff --git a/tools/firmware/hvmloader/hvmloader.c b/tools/firmware/hvmloade= r/hvmloader.c index f8af88fabf24..5c02e8fc226a 100644 --- a/tools/firmware/hvmloader/hvmloader.c +++ b/tools/firmware/hvmloader/hvmloader.c @@ -341,11 +341,11 @@ int main(void) =20 printf("CPU speed is %u MHz\n", get_cpu_mhz()); =20 + smp_initialise(); + apic_setup(); pci_setup(); =20 - smp_initialise(); - 
     perform_tests();
 
     if ( bios->bios_info_setup )
diff --git a/tools/firmware/hvmloader/smp.c b/tools/firmware/hvmloader/smp.c
index 5d46eee1c5f4..f878d91898bf 100644
--- a/tools/firmware/hvmloader/smp.c
+++ b/tools/firmware/hvmloader/smp.c
@@ -29,7 +29,34 @@
 
 #include
 
-static int ap_callin;
+/**
+ * Lookup table of x2APIC IDs.
+ *
+ * Each entry is populated by its respective CPU as it comes online. This
+ * is required for generating the MADT with minimal assumptions about ID
+ * relationships.
+ */
+uint32_t CPU_TO_X2APICID[HVM_MAX_VCPUS];
+
+/** Tristate of x2apic support. -1 = unknown. */
+static int has_x2apic = -1;
+
+static uint32_t read_apic_id(void)
+{
+    uint32_t apic_id;
+
+    if ( has_x2apic )
+        cpuid(0xb, NULL, NULL, NULL, &apic_id);
+    else
+    {
+        cpuid(1, NULL, &apic_id, NULL, NULL);
+        apic_id >>= 24;
+    }
+
+    /* Never called by cpu0, so should never return 0 */
+    ASSERT(apic_id);
+
+    return apic_id;
+}
 
 static void __attribute__((regparm(1))) cpu_setup(unsigned int cpu)
 {
@@ -37,13 +64,17 @@ static void __attribute__((regparm(1))) cpu_setup(unsigned int cpu)
     cacheattr_init();
     printf("done.\n");
 
-    if ( !cpu ) /* Used on the BSP too */
+    /* The BSP exits early because its APIC ID is known to be zero */
+    if ( !cpu )
         return;
 
     wmb();
-    ap_callin = 1;
+    ACCESS_ONCE(CPU_TO_X2APICID[cpu]) = read_apic_id();
 
-    /* After this point, the BSP will shut us down. */
+    /*
+     * After this point the BSP will shut us down. A write to
+     * CPU_TO_X2APICID[cpu] signals the BSP to bring down `cpu`.
+     */
 
     for ( ;; )
         asm volatile ( "hlt" );
@@ -54,10 +85,6 @@ static void boot_cpu(unsigned int cpu)
     static uint8_t ap_stack[PAGE_SIZE] __attribute__ ((aligned (16)));
     static struct vcpu_hvm_context ap;
 
-    /* Initialise shared variables. */
-    ap_callin = 0;
-    wmb();
-
     /* Wake up the secondary processor */
     ap = (struct vcpu_hvm_context) {
         .mode = VCPU_HVM_MODE_32B,
@@ -90,10 +117,11 @@ static void boot_cpu(unsigned int cpu)
         BUG();
 
     /*
-     * Wait for the secondary processor to complete initialisation.
+     * Wait for the secondary processor to complete initialisation,
+     * which is signaled by its x2APIC ID being written to the LUT.
      * Do not touch shared resources meanwhile.
      */
-    while ( !ap_callin )
+    while ( !ACCESS_ONCE(CPU_TO_X2APICID[cpu]) )
         cpu_relax();
 
     /* Take the secondary processor offline. */
@@ -104,6 +132,12 @@ static void boot_cpu(unsigned int cpu)
 void smp_initialise(void)
 {
     unsigned int i, nr_cpus = hvm_info->nr_vcpus;
+    uint32_t ecx;
+
+    cpuid(1, NULL, NULL, &ecx, NULL);
+    has_x2apic = (ecx >> 21) & 1;
+    if ( has_x2apic )
+        printf("x2APIC supported\n");
 
     printf("Multiprocessor initialisation:\n");
     cpu_setup(0);
diff --git a/tools/include/xen-tools/common-macros.h b/tools/include/xen-tools/common-macros.h
index 60912225cb7a..336c6309d96e 100644
--- a/tools/include/xen-tools/common-macros.h
+++ b/tools/include/xen-tools/common-macros.h
@@ -108,4 +108,9 @@
 #define get_unaligned(ptr)      get_unaligned_t(typeof(*(ptr)), ptr)
 #define put_unaligned(val, ptr) put_unaligned_t(typeof(*(ptr)), val, ptr)
 
+#define __ACCESS_ONCE(x) ({ \
+    (void)(typeof(x))0; /* Scalar typecheck.
 */ \
+    (volatile typeof(x) *)&(x); })
+#define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))
+
 #endif /* __XEN_TOOLS_COMMON_MACROS__ */
-- 
2.34.1

From nobody Sun Nov 24 17:07:33 2024
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v3 4/6] xen/lib: Add topology generator for x86
Date: Wed, 29 May 2024 15:32:33 +0100
Message-Id: <22c291ff33d2fe88b92e24946304a73064cb247c.1716976271.git.alejandro.vallejo@cloud.com>

Add a helper to populate topology leaves in the cpu policy from
threads/core and
cores/package counts. It's unit-tested in test-cpu-policy.c, but it's not
connected to the rest of the code yet.

Adds the ASSERT() macro to xen/lib/x86/private.h, as it was missing.

Signed-off-by: Alejandro Vallejo
---
v3:
  * Style adjustments (linewraps, newlines...)
  * Slight refactor of the TOPO() macro in unit tests.
  * Reduce indentation of x86_topo_from_parts().
  * Remove "no functional change" from commit message.
  * Assert n != 0 in clz(n)
    * Which implied adding the ASSERT() macro to private.h
---
 tools/tests/cpu-policy/test-cpu-policy.c | 133 +++++++++++++++++++++++
 xen/include/xen/lib/x86/cpu-policy.h     |  16 +++
 xen/lib/x86/policy.c                     |  90 +++++++++++++++
 xen/lib/x86/private.h                    |   4 +
 4 files changed, 243 insertions(+)

diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 301df2c00285..849d7cebaa7c 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -650,6 +650,137 @@ static void test_is_compatible_failure(void)
     }
 }
 
+static void test_topo_from_parts(void)
+{
+    static const struct test {
+        unsigned int threads_per_core;
+        unsigned int cores_per_pkg;
+        struct cpu_policy policy;
+    } tests[] = {
+        {
+            .threads_per_core = 3, .cores_per_pkg = 1,
+            .policy = {
+                .x86_vendor = X86_VENDOR_AMD,
+                .topo.subleaf = {
+                    { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, },
+                    { .nr_logical = 1, .level = 1, .type = 2, .id_shift = 2, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 1, .cores_per_pkg = 3,
+            .policy = {
+                .x86_vendor = X86_VENDOR_AMD,
+                .topo.subleaf = {
+                    { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, },
+                    { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 7, .cores_per_pkg = 5,
+            .policy = {
+                .x86_vendor = X86_VENDOR_AMD,
+                .topo.subleaf = {
+                    { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, },
+                    { .nr_logical
 = 5, .level = 1, .type = 2, .id_shift = 6, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 2, .cores_per_pkg = 128,
+            .policy = {
+                .x86_vendor = X86_VENDOR_AMD,
+                .topo.subleaf = {
+                    { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, },
+                    { .nr_logical = 128, .level = 1, .type = 2,
+                      .id_shift = 8, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 3, .cores_per_pkg = 1,
+            .policy = {
+                .x86_vendor = X86_VENDOR_INTEL,
+                .topo.subleaf = {
+                    { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, },
+                    { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 1, .cores_per_pkg = 3,
+            .policy = {
+                .x86_vendor = X86_VENDOR_INTEL,
+                .topo.subleaf = {
+                    { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, },
+                    { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 7, .cores_per_pkg = 5,
+            .policy = {
+                .x86_vendor = X86_VENDOR_INTEL,
+                .topo.subleaf = {
+                    { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, },
+                    { .nr_logical = 35, .level = 1, .type = 2, .id_shift = 6, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 2, .cores_per_pkg = 128,
+            .policy = {
+                .x86_vendor = X86_VENDOR_INTEL,
+                .topo.subleaf = {
+                    { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, },
+                    { .nr_logical = 256, .level = 1, .type = 2,
+                      .id_shift = 8, },
+                },
+            },
+        },
+    };
+
+    printf("Testing topology synthesis from parts:\n");
+
+    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        const struct test *t = &tests[i];
+        struct cpu_policy actual = { .x86_vendor = t->policy.x86_vendor };
+        int rc = x86_topo_from_parts(&actual, t->threads_per_core,
+                                     t->cores_per_pkg);
+
+        if ( rc || memcmp(&actual.topo, &t->policy.topo, sizeof(actual.topo)) )
+        {
+#define TOPO(n, f) t->policy.topo.subleaf[(n)].f, actual.topo.subleaf[(n)].f
+            fail("FAIL[%d] - '%s %u t/c, %u c/p'\n",
+                 rc,
+                 x86_cpuid_vendor_to_str(t->policy.x86_vendor),
+                 t->threads_per_core, t->cores_per_pkg);
+            printf("  subleaf=%u expected_n=%u actual_n=%u\n"
+                   "  expected_lvl=%u actual_lvl=%u\n"
+                   "  expected_type=%u actual_type=%u\n"
+                   "  expected_shift=%u actual_shift=%u\n",
+                   0,
+                   TOPO(0, nr_logical),
+                   TOPO(0, level),
+                   TOPO(0, type),
+                   TOPO(0, id_shift));
+
+            printf("  subleaf=%u expected_n=%u actual_n=%u\n"
+                   "  expected_lvl=%u actual_lvl=%u\n"
+                   "  expected_type=%u actual_type=%u\n"
+                   "  expected_shift=%u actual_shift=%u\n",
+                   1,
+                   TOPO(1, nr_logical),
+                   TOPO(1, level),
+                   TOPO(1, type),
+                   TOPO(1, id_shift));
+#undef TOPO
+        }
+    }
+}
+
 int main(int argc, char **argv)
 {
     printf("CPU Policy unit tests\n");
@@ -667,6 +798,8 @@ int main(int argc, char **argv)
     test_is_compatible_success();
     test_is_compatible_failure();
 
+    test_topo_from_parts();
+
     if ( nr_failures )
         printf("Done: %u failures\n", nr_failures);
     else
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 392320b9adbe..f5df18e9f77c 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -551,6 +551,22 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
  */
 uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id);
 
+/**
+ * Synthesise topology information in `p` given high-level constraints
+ *
+ * Topology is given in various fields across several leaves, some of
+ * which are vendor-specific. This function uses the policy itself to
+ * derive such leaves from threads/core and cores/package.
+ *
+ * @param p                 CPU policy of the domain.
+ * @param threads_per_core  threads/core. Doesn't need to be a power of 2.
+ * @param cores_per_pkg     cores/package. Doesn't need to be a power of 2.
+ * @return 0 on success; -errno on failure
+ */
+int x86_topo_from_parts(struct cpu_policy *p,
+                        unsigned int threads_per_core,
+                        unsigned int cores_per_pkg);
+
 #endif /* !XEN_LIB_X86_POLICIES_H */
 
 /*
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index b70b22d55fcf..7709736a2812 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -13,6 +13,96 @@ uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id)
     return id * 2;
 }
 
+static unsigned int order(unsigned int n)
+{
+    ASSERT(n); /* clz(0) is UB */
+
+    return 8 * sizeof(n) - __builtin_clz(n);
+}
+
+int x86_topo_from_parts(struct cpu_policy *p,
+                        unsigned int threads_per_core,
+                        unsigned int cores_per_pkg)
+{
+    unsigned int threads_per_pkg = threads_per_core * cores_per_pkg;
+    unsigned int apic_id_size;
+
+    if ( !p || !threads_per_core || !cores_per_pkg )
+        return -EINVAL;
+
+    p->basic.max_leaf = MAX(0xb, p->basic.max_leaf);
+
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
+
+    /* thread level */
+    p->topo.subleaf[0].nr_logical = threads_per_core;
+    p->topo.subleaf[0].id_shift = 0;
+    p->topo.subleaf[0].level = 0;
+    p->topo.subleaf[0].type = 1;
+    if ( threads_per_core > 1 )
+        p->topo.subleaf[0].id_shift = order(threads_per_core - 1);
+
+    /* core level */
+    p->topo.subleaf[1].nr_logical = cores_per_pkg;
+    if ( p->x86_vendor == X86_VENDOR_INTEL )
+        p->topo.subleaf[1].nr_logical = threads_per_pkg;
+    p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift;
+    p->topo.subleaf[1].level = 1;
+    p->topo.subleaf[1].type = 2;
+    if ( cores_per_pkg > 1 )
+        p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1);
+
+    apic_id_size = p->topo.subleaf[1].id_shift;
+
+    /*
+     * Contrary to what the name might seem to imply, HTT is an enabler for
+     * SMP and there's no harm in setting it even with a single vCPU.
+     */
+    p->basic.htt = true;
+    p->basic.lppp = MIN(0xff, p->basic.lppp);
+
+    switch ( p->x86_vendor )
+    {
+    case X86_VENDOR_INTEL: {
+        struct cpuid_cache_leaf *sl = p->cache.subleaf;
+
+        for ( size_t i = 0; sl->type &&
+              i < ARRAY_SIZE(p->cache.raw); i++, sl++ )
+        {
+            sl->cores_per_package = cores_per_pkg - 1;
+            sl->threads_per_cache = threads_per_core - 1;
+            if ( sl->type == 3 /* unified cache */ )
+                sl->threads_per_cache = threads_per_pkg - 1;
+        }
+        break;
+    }
+
+    case X86_VENDOR_AMD:
+    case X86_VENDOR_HYGON:
+        /* Expose p->basic.lppp */
+        p->extd.cmp_legacy = true;
+
+        /* Clip NC to the maximum value it can hold */
+        p->extd.nc = 0xff;
+        if ( threads_per_pkg <= 0xff )
+            p->extd.nc = threads_per_pkg - 1;
+
+        /* TODO: Expose leaf e1E */
+        p->extd.topoext = false;
+
+        /*
+         * Clip APIC ID to 8 bits, as that's what high core-count machines
+         * do. That's what AMD EPYC 9654 does with >256 CPUs.
+         */
+        p->extd.apic_id_size = MIN(8, apic_id_size);
+
+        break;
+    }
+
+    return 0;
+}
+
 int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err)
diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h
index 60bb82a400b7..2ec9dbee33c2 100644
--- a/xen/lib/x86/private.h
+++ b/xen/lib/x86/private.h
@@ -4,6 +4,7 @@
 #ifdef __XEN__
 
 #include
+#include
 #include
 #include
 #include
@@ -17,6 +18,7 @@
 
 #else
 
+#include
 #include
 #include
 #include
@@ -28,6 +30,8 @@
 
 #include
 
+#define ASSERT(x) assert(x)
+
 static inline bool test_bit(unsigned int bit, const void *vaddr)
 {
     const char *addr = vaddr;
-- 
2.34.1

From nobody Sun Nov 24 17:07:33 2024
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v3 5/6] xen/x86: Derive topologically correct x2APIC IDs from the policy
Date: Wed, 29 May 2024 15:32:34 +0100

Implement the helper for mapping vcpu_id to x2apic_id given a valid
topology in a policy.

The algorithm is written with the intention of extending it to leaves
0x1f and extended 0x26 in the future.

Toolstack doesn't set leaf 0xb and the HVM default policy has it
cleared, so the leaf is not implemented. In that case, the new helper
just returns the legacy mapping.

Signed-off-by: Alejandro Vallejo
---
v3:
  * Formatting adjustments.
  * Replace a conditional in x86_topo_from_parts() with MIN()
    * Was meant to happen in v2, but fell between the cracks.
  * Moved the `policy` variable to the inner scope so it's clean for
    every test.
  * Rewrote commit message to say "extended 0x26" rather than e26.
---
 tools/tests/cpu-policy/test-cpu-policy.c | 68 ++++++++++++++++++++++
 xen/include/xen/lib/x86/cpu-policy.h     |  2 +
 xen/lib/x86/policy.c                     | 73 ++++++++++++++++++++++-
 3 files changed, 138 insertions(+), 5 deletions(-)

diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 849d7cebaa7c..e5f9b8f7ee39 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -781,6 +781,73 @@ static void test_topo_from_parts(void)
     }
 }
 
+static void test_x2apic_id_from_vcpu_id_success(void)
+{
+    static const struct test {
+        unsigned int vcpu_id;
+        unsigned int threads_per_core;
+        unsigned int cores_per_pkg;
+        uint32_t x2apic_id;
+        uint8_t x86_vendor;
+    } tests[] = {
+        {
+            .vcpu_id = 3, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 1 << 2,
+        },
+        {
+            .vcpu_id = 6, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 2 << 2,
+        },
+        {
+            .vcpu_id = 24, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 1 << 5,
+        },
+        {
+            .vcpu_id = 35, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = (35 % 3) | (((35 / 3) % 8) << 2) | ((35 / 24) << 5),
+        },
+        {
+            .vcpu_id = 96, .threads_per_core = 7, .cores_per_pkg = 3,
+            .x2apic_id = (96 % 7) | (((96 / 7) % 3) << 3) | ((96 / 21) << 5),
+        },
+    };
+
+    const uint8_t vendors[] = {
+        X86_VENDOR_INTEL,
+        X86_VENDOR_AMD,
+        X86_VENDOR_CENTAUR,
+        X86_VENDOR_SHANGHAI,
+        X86_VENDOR_HYGON,
+    };
+
+    printf("Testing x2apic id from vcpu id success:\n");
+
+    /* Perform the test run on every vendor we know about */
+    for ( size_t i = 0; i < ARRAY_SIZE(vendors); ++i )
+    {
+        for ( size_t j = 0; j < ARRAY_SIZE(tests); ++j )
+        {
+            struct cpu_policy policy = { .x86_vendor = vendors[i] };
+            const struct test *t = &tests[j];
+            uint32_t x2apic_id;
+            int rc = x86_topo_from_parts(&policy, t->threads_per_core,
+                                         t->cores_per_pkg);
+
+            if ( rc ) {
+                fail("FAIL[%d] - 'x86_topo_from_parts() failed", rc);
+                continue;
+            }
+
+            x2apic_id = x86_x2apic_id_from_vcpu_id(&policy, t->vcpu_id);
+            if ( x2apic_id != t->x2apic_id )
+                fail("FAIL - '%s cpu%u %u t/c %u c/p'. bad x2apic_id: expected=%u actual=%u\n",
+                     x86_cpuid_vendor_to_str(policy.x86_vendor),
+                     t->vcpu_id, t->threads_per_core, t->cores_per_pkg,
+                     t->x2apic_id, x2apic_id);
+        }
+    }
+}
+
 int main(int argc, char **argv)
 {
     printf("CPU Policy unit tests\n");
@@ -799,6 +866,7 @@ int main(int argc, char **argv)
     test_is_compatible_failure();
 
     test_topo_from_parts();
+    test_x2apic_id_from_vcpu_id_success();
 
     if ( nr_failures )
         printf("Done: %u failures\n", nr_failures);
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index f5df18e9f77c..2cbc2726a861 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -545,6 +545,8 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
 /**
  * Calculates the x2APIC ID of a vCPU given a CPU policy
  *
+ * If the policy lacks leaf 0xb, falls back to the legacy mapping of apic_id=cpu*2
+ *
  * @param p  CPU policy of the domain.
  * @param id vCPU ID of the vCPU.
  * @returns x2APIC ID of the vCPU.
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index 7709736a2812..239386b71769 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,15 +2,78 @@
 
 #include
 
+static uint32_t parts_per_higher_scoped_level(const struct cpu_policy *p,
+                                              size_t lvl)
+{
+    /*
+     * `nr_logical` reported by Intel is the number of THREADS contained in
+     * the next topological scope. For example, assuming a system with 2
+     * threads/core and 3 cores/module in a fully symmetric topology,
+     * `nr_logical` at the core level will report 6, because it's reporting
+     * the number of threads in a module.
+     *
+     * On AMD/Hygon, nr_logical is already normalized by the higher scoped
+     * level (cores/complex, etc) so we can return it as-is.
+     */
+    if ( p->x86_vendor != X86_VENDOR_INTEL || !lvl )
+        return p->topo.subleaf[lvl].nr_logical;
+
+    return p->topo.subleaf[lvl].nr_logical / p->topo.subleaf[lvl - 1].nr_logical;
+}
+
 uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id)
 {
+    uint32_t shift = 0, x2apic_id = 0;
+
+    /* In the absence of topology leaves, fallback to traditional mapping */
+    if ( !p->topo.subleaf[0].type )
+        return id * 2;
+
     /*
-     * TODO: Derive x2APIC ID from the topology information inside `p`
-     * rather than from the vCPU ID alone. This bodge is a temporary
-     * measure until all infra is in place to retrieve or derive the
-     * initial x2APIC ID from migrated domains.
+     * `id` means different things at different points of the algorithm:
+     *
+     *   At lvl=0: global thread_id (same as vcpu_id)
+     *   At lvl=1: global core_id
+     *   At lvl=2: global socket_id (actually complex_id on AMD, module_id
+     *             on Intel, but the name is inconsequential)
+     *
+     *                    +--+
+     *          ____      |#0|      ______         <= 1 socket
+     *         /          +--+\+--+
+     *      __#0__         __|#1|__                <= 2 cores/socket
+     *     /  |   \    +--+/  +-|+ \
+     *    #0  #1  #2   |#3|   #4  #5               <= 3 threads/core
+     *                 +--+
+     *
+     * ... and so on. Global in this context means that it's a unique
+     * identifier for the whole topology, and not relative to the level
+     * it's in. For example, in the diagram shown above, we're looking at
+     * thread #3 in the global sense, though it's #0 within its core.
+     *
+     * Note that dividing a global thread_id by the number of threads per
+     * core returns the global core id that contains it. e.g: 0, 1 or 2
+     * divided by 3 returns core_id=0. 3, 4 or 5 divided by 3 returns core
+     * 1, and so on. An analogous argument holds for higher levels. This is
+     * the property we exploit to derive x2apic_id from vcpu_id.
+     *
+     * NOTE: `topo` is currently derived from leaf 0xb, which is bound to
+     * two levels, but once we track leaves 0x1f (or extended 0x26) there
+     * will be a few more. The algorithm is written to cope with that case.
      */
-    return id * 2;
+    for ( uint32_t i = 0; i < ARRAY_SIZE(p->topo.raw); i++ )
+    {
+        uint32_t nr_parts;
+
+        if ( !p->topo.subleaf[i].type )
+            /* sentinel subleaf */
+            break;
+
+        nr_parts = parts_per_higher_scoped_level(p, i);
+        x2apic_id |= (id % nr_parts) << shift;
+        id /= nr_parts;
+        shift = p->topo.subleaf[i].id_shift;
+    }
+
+    return (id << shift) | x2apic_id;
 }
 
 static unsigned int order(unsigned int n)
-- 
2.34.1

From nobody Sun Nov 24 17:07:33 2024
2024 07:32:47 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 5223fbe6-1dc8-11ef-b4bb-af5377834399 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1716993167; x=1717597967; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=V5Itkas4jSxEAne1qNgyzUKuJE5iH1oRoQ5V5cxab3I=; b=D/p9ABnMHMSmnyXHpGToXGTmtf2KeovJ2ytBtWPyoo2zTjwIEBsq2hWbyWZ8CBAHMR dJLX/g4A1sOG2PnOiN8t8t7LC+7/x84sBBGHExzU7gZ0hdJEhR1rRWPpvsH3XeDe2GFa ZUAgKp3dVT+iJEKsjszcvCLOyAP8QyqcDiZwk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1716993167; x=1717597967; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=V5Itkas4jSxEAne1qNgyzUKuJE5iH1oRoQ5V5cxab3I=; b=Iq/cVy6TrMsAYHZVaWWJfgFX7ojptSDWOQ8xr6R4i0ZLutw+OMOkHTQkDGgM4zX8l+ 4/iZRAJocSFZ88JImsOJugMBxuzUKKd9axgXcBJEyYmfeG8jINAGVf6r6l4uHjMspgSx JZB4DERiogv4CnYqVh4Z/4GZY4PIlSkfMcANuVWI8MSO/Rns6aEjmNRUPEXWKsrxO80T RgL0kFQjI8d+90j9n5frW9MlOpZT9TA21jktuoJpxiBpsl6zIPPqP0o9FOuy8zrA5yQ4 a8KnNMnb6NxXahXmBsrnE2EhYuZRkB+U6UPjyq6fpVVEj29MzcURRZUNuE3wbROV/I4I 8X6w== X-Gm-Message-State: AOJu0YyIa0375UDFQjWb+DV6GKKOCtjftuDY+lgAZgMBTgnjJhHAmxh3 WLyDK4nXs3RxGp05lxKXhVc7fEtCim+KOW8BefXdtWjYp6iL7C6EnxJWnUE/Qu6WIpnSZwUfi1w c X-Google-Smtp-Source: AGHT+IHPnuAu70v7eX7J2gWR++BS4a0v2Gol7RrzKimIuNA8uGHadRzfXiOzDpkwtstDZCJyuaoerg== X-Received: by 2002:a17:906:3c4b:b0:a59:c52b:9933 with SMTP id a640c23a62f3a-a62643e0792mr936602766b.30.1716993167521; Wed, 29 May 2024 07:32:47 -0700 (PDT) From: Alejandro 
Vallejo To: Xen-devel Cc: Alejandro Vallejo , Anthony PERARD , Juergen Gross , Jan Beulich , Andrew Cooper , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= Subject: [PATCH v3 6/6] xen/x86: Synthesise domain topologies Date: Wed, 29 May 2024 15:32:35 +0100 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @cloud.com) X-ZM-MESSAGEID: 1716993187094100005 Content-Type: text/plain; charset="utf-8" Expose sensible topologies in leaf 0xb. At the moment it synthesises non-HT systems, in line with the previous code intent. Leaf 0xb in the host policy is no longer zapped and the guest {max,def} pol= icies have their topology leaves zapped instead. The intent is for toolstack to populate them. There's no current use for the topology information in the h= ost policy, but it makes no harm. Signed-off-by: Alejandro Vallejo --- v3: * Formatting adjustments. * Restored previous topology logic and gated it through the "restore" var= iable * Print return code on topo generation failures. * Adjusted comment on wiping the topology in the guest policies. * Described the changes in topology zero-out in the commit message. --- tools/libs/guest/xg_cpuid_x86.c | 24 +++++++++++++++++++++++- xen/arch/x86/cpu-policy.c | 9 ++++++--- xen/lib/x86/policy.c | 9 ++++++--- 3 files changed, 35 insertions(+), 7 deletions(-) diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x8= 6.c index 4453178100ad..6062dcab01ce 100644 --- a/tools/libs/guest/xg_cpuid_x86.c +++ b/tools/libs/guest/xg_cpuid_x86.c @@ -725,8 +725,16 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t = domid, bool restore, p->policy.basic.htt =3D test_bit(X86_FEATURE_HTT, host_featu= reset); p->policy.extd.cmp_legacy =3D test_bit(X86_FEATURE_CMP_LEGACY, hos= t_featureset); } - else + else if ( restore ) { + /* + * Reconstruct the topology exposed on Xen <=3D 4.13. 
It makes ver= y little + * sense, but it's what those guests saw so it's set in stone now. + * + * Guests from Xen 4.14 onwards carry their own CPUID leaves in the + * migration stream so they don't need special treatment. + */ + /* * Topology for HVM guests is entirely controlled by Xen. For now= , we * hardcode APIC_ID =3D vcpu_id * 2 to give the illusion of no SMT. @@ -782,6 +790,20 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t = domid, bool restore, break; } } + else + { + /* TODO: Expose the ability to choose a custom topology for HVM/PV= H */ + unsigned int threads_per_core =3D 1; + unsigned int cores_per_pkg =3D di.max_vcpu_id + 1; + + rc =3D x86_topo_from_parts(&p->policy, threads_per_core, cores_per= _pkg); + if ( rc ) + { + ERROR("Failed to generate topology: rc=3D%d t/c=3D%u c/p=3D%u", + rc, threads_per_core, cores_per_pkg); + goto out; + } + } =20 nr_leaves =3D ARRAY_SIZE(p->leaves); rc =3D x86_cpuid_copy_to_buffer(&p->policy, p->leaves, &nr_leaves); diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c index b96f4ee55cc4..ecbe98302df2 100644 --- a/xen/arch/x86/cpu-policy.c +++ b/xen/arch/x86/cpu-policy.c @@ -278,9 +278,6 @@ static void recalculate_misc(struct cpu_policy *p) =20 p->basic.raw[0x8] =3D EMPTY_LEAF; =20 - /* TODO: Rework topology logic. */ - memset(p->topo.raw, 0, sizeof(p->topo.raw)); - p->basic.raw[0xc] =3D EMPTY_LEAF; =20 p->extd.e1d &=3D ~CPUID_COMMON_1D_FEATURES; @@ -628,6 +625,9 @@ static void __init calculate_pv_max_policy(void) recalculate_xstate(p); =20 p->extd.raw[0xa] =3D EMPTY_LEAF; /* No SVM for PV guests. */ + + /* Wipe host topology. Populated by toolstack */ + memset(p->topo.raw, 0, sizeof(p->topo.raw)); } =20 static void __init calculate_pv_def_policy(void) @@ -791,6 +791,9 @@ static void __init calculate_hvm_max_policy(void) =20 /* It's always possible to emulate CPUID faulting for HVM guests */ p->platform_info.cpuid_faulting =3D true; + + /* Wipe host topology. 
Populated by toolstack */ + memset(p->topo.raw, 0, sizeof(p->topo.raw)); } =20 static void __init calculate_hvm_def_policy(void) diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c index 239386b71769..01b9ed39d597 100644 --- a/xen/lib/x86/policy.c +++ b/xen/lib/x86/policy.c @@ -2,7 +2,8 @@ =20 #include =20 -static uint32_t parts_per_higher_scoped_level(const struct cpu_policy *p, = size_t lvl) +static uint32_t parts_per_higher_scoped_level(const struct cpu_policy *p, + size_t lvl) { /* * `nr_logical` reported by Intel is the number of THREADS contained in @@ -17,10 +18,12 @@ static uint32_t parts_per_higher_scoped_level(const str= uct cpu_policy *p, size_t if ( p->x86_vendor !=3D X86_VENDOR_INTEL || !lvl ) return p->topo.subleaf[lvl].nr_logical; =20 - return p->topo.subleaf[lvl].nr_logical / p->topo.subleaf[lvl - 1].nr_l= ogical; + return p->topo.subleaf[lvl].nr_logical / + p->topo.subleaf[lvl - 1].nr_logical; } =20 -uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t i= d) +uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, + uint32_t id) { uint32_t shift =3D 0, x2apic_id =3D 0; =20 --=20 2.34.1