From: Ankur Arora
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Cc: jgross@suse.com, sstabellini@kernel.org, konrad.wilk@oracle.com, ankur.a.arora@oracle.com, pbonzini@redhat.com, boris.ostrovsky@oracle.com, joao.m.martins@oracle.com
Date: Thu, 9 May 2019 10:25:31 -0700
Message-Id: <20190509172540.12398-8-ankur.a.arora@oracle.com>
In-Reply-To: <20190509172540.12398-1-ankur.a.arora@oracle.com>
References: <20190509172540.12398-1-ankur.a.arora@oracle.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [RFC PATCH 07/16] x86/xen: make vcpu_info part of xenhost_t

Abstract out xen_vcpu_id probing via (*probe_vcpu_id)(). Once that is
available, the vcpu_info registration happens via the VCPUOP hypercall.

Note that for the nested case, there are two vcpu_ids, and two vcpu_info
areas, one each for the default xenhost and the remote xenhost.
The vcpu_info is used via pv_irq_ops, and evtchn signaling.

The other VCPUOP hypercalls are used for management (and scheduling)
which is expected to be done purely in the default hypervisor.
However, scheduling of the L1-guest does imply L0-Xen-vcpu_info
switching, which might mean that the remote hypervisor needs some
visibility into related events/hypercalls in the default hypervisor.

TODO:
  - percpu data structures for xen_vcpu

Signed-off-by: Ankur Arora
---
 arch/x86/xen/enlighten.c         | 93 +++++++++++++------------------
 arch/x86/xen/enlighten_hvm.c     | 87 ++++++++++++++++++------------
 arch/x86/xen/enlighten_pv.c      | 60 ++++++++++++++-------
 arch/x86/xen/enlighten_pvh.c     |  3 +-
 arch/x86/xen/irq.c               | 10 ++--
 arch/x86/xen/mmu_pv.c            |  6 +--
 arch/x86/xen/pci-swiotlb-xen.c   |  1 +
 arch/x86/xen/setup.c             |  1 +
 arch/x86/xen/smp.c               |  9 +++-
 arch/x86/xen/smp_hvm.c           | 17 +++---
 arch/x86/xen/smp_pv.c            | 12 ++---
 arch/x86/xen/time.c              | 23 ++++---
 arch/x86/xen/xen-ops.h           |  5 +-
 drivers/xen/events/events_base.c | 14 ++---
 drivers/xen/events/events_fifo.c |  2 +-
 drivers/xen/evtchn.c             |  2 +-
 drivers/xen/time.c               |  2 +-
 include/xen/xen-ops.h            |  7 +--
 include/xen/xenhost.h            | 47 ++++++++++++++++
 19 files changed, 240 insertions(+), 161 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 20e0de844442..0dafbbc838ef 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -20,35 +20,6 @@
 #include "smp.h"
 #include "pmu.h"
 
-/*
- * Pointer to the xen_vcpu_info structure or
- * &HYPERVISOR_shared_info->vcpu_info[cpu]. See xen_hvm_init_shared_info
- * and xen_vcpu_setup for details. By default it points to share_info->vcpu_info
- * but if the hypervisor supports VCPUOP_register_vcpu_info then it can point
- * to xen_vcpu_info. The pointer is used in __xen_evtchn_do_upcall to
- * acknowledge pending events.
- * Also more subtly it is used by the patched version of irq enable/disable
- * e.g. xen_irq_enable_direct and xen_iret in PV mode.
- *
- * The desire to be able to do those mask/unmask operations as a single
- * instruction by using the per-cpu offset held in %gs is the real reason
- * vcpu info is in a per-cpu pointer and the original reason for this
- * hypercall.
- *
- */
-DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
-
-/*
- * Per CPU pages used if hypervisor supports VCPUOP_register_vcpu_info
- * hypercall. This can be used both in PV and PVHVM mode. The structure
- * overrides the default per_cpu(xen_vcpu, cpu) value.
- */
-DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
-
-/* Linux <-> Xen vCPU id mapping */
-DEFINE_PER_CPU(uint32_t, xen_vcpu_id);
-EXPORT_PER_CPU_SYMBOL(xen_vcpu_id);
-
 enum xen_domain_type xen_domain_type = XEN_NATIVE;
 EXPORT_SYMBOL_GPL(xen_domain_type);
 
@@ -112,12 +83,12 @@ int xen_cpuhp_setup(int (*cpu_up_prepare_cb)(unsigned int),
 	return rc >= 0 ? 0 : rc;
 }
 
-static int xen_vcpu_setup_restore(int cpu)
+static int xen_vcpu_setup_restore(xenhost_t *xh, int cpu)
 {
 	int rc = 0;
 
 	/* Any per_cpu(xen_vcpu) is stale, so reset it */
-	xen_vcpu_info_reset(cpu);
+	xen_vcpu_info_reset(xh, cpu);
 
 	/*
 	 * For PVH and PVHVM, setup online VCPUs only. The rest will
@@ -125,7 +96,7 @@ static int xen_vcpu_setup_restore(int cpu)
 	 */
 	if (xen_pv_domain() ||
 	    (xen_hvm_domain() && cpu_online(cpu))) {
-		rc = xen_vcpu_setup(cpu);
+		rc = xen_vcpu_setup(xh, cpu);
 	}
 
 	return rc;
@@ -138,30 +109,42 @@ static int xen_vcpu_setup_restore(int cpu)
  */
 void xen_vcpu_restore(void)
 {
-	int cpu, rc;
+	int cpu, rc = 0;
 
+	/*
+	 * VCPU management is primarily the responsibility of xh_default and
+	 * xh_remote only needs VCPUOP_register_vcpu_info.
+	 * So, we do VCPUOP_down and VCPUOP_up only on xh_default.
+	 *
+	 * (Currently, however, VCPUOP_register_vcpu_info is allowed only
+	 * on VCPUs that are self or down, so we might need a new model
+	 * there.)
+	 */
 	for_each_possible_cpu(cpu) {
 		bool other_cpu = (cpu != smp_processor_id());
 		bool is_up;
+		xenhost_t **xh;
 
-		if (xen_vcpu_nr(cpu) == XEN_VCPU_ID_INVALID)
+		if (xen_vcpu_nr(xh_default, cpu) == XEN_VCPU_ID_INVALID)
 			continue;
 
 		/* Only Xen 4.5 and higher support this. */
 		is_up = HYPERVISOR_vcpu_op(VCPUOP_is_up,
-					   xen_vcpu_nr(cpu), NULL) > 0;
+					   xen_vcpu_nr(xh_default, cpu), NULL) > 0;
 
 		if (other_cpu && is_up &&
-		    HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(cpu), NULL))
+		    HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(xh_default, cpu), NULL))
 			BUG();
 
 		if (xen_pv_domain() || xen_feature(XENFEAT_hvm_safe_pvclock))
 			xen_setup_runstate_info(cpu);
 
-		rc = xen_vcpu_setup_restore(cpu);
-		if (rc)
-			pr_emerg_once("vcpu restore failed for cpu=%d err=%d. "
-					"System will hang.\n", cpu, rc);
+		for_each_xenhost(xh) {
+			rc = xen_vcpu_setup_restore(*xh, cpu);
+			if (rc)
+				pr_emerg_once("vcpu restore failed for cpu=%d err=%d. "
+						"System will hang.\n", cpu, rc);
+		}
 		/*
 		 * In case xen_vcpu_setup_restore() fails, do not bring up the
 		 * VCPU. This helps us avoid the resulting OOPS when the VCPU
@@ -172,29 +155,29 @@ void xen_vcpu_restore(void)
 		 * VCPUs to come up.
 		 */
 		if (other_cpu && is_up && (rc == 0) &&
-		    HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL))
+		    HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(xh_default, cpu), NULL))
 			BUG();
 	}
 }
 
-void xen_vcpu_info_reset(int cpu)
+void xen_vcpu_info_reset(xenhost_t *xh, int cpu)
 {
-	if (xen_vcpu_nr(cpu) < MAX_VIRT_CPUS) {
-		per_cpu(xen_vcpu, cpu) =
-			&xh_default->HYPERVISOR_shared_info->vcpu_info[xen_vcpu_nr(cpu)];
+	if (xen_vcpu_nr(xh, cpu) < MAX_VIRT_CPUS) {
+		xh->xen_vcpu[cpu] =
+			&xh->HYPERVISOR_shared_info->vcpu_info[xen_vcpu_nr(xh, cpu)];
 	} else {
 		/* Set to NULL so that if somebody accesses it we get an OOPS */
-		per_cpu(xen_vcpu, cpu) = NULL;
+		xh->xen_vcpu[cpu] = NULL;
 	}
 }
 
-int xen_vcpu_setup(int cpu)
+int xen_vcpu_setup(xenhost_t *xh, int cpu)
 {
 	struct vcpu_register_vcpu_info info;
 	int err;
 	struct vcpu_info *vcpup;
 
-	BUG_ON(xh_default->HYPERVISOR_shared_info == &xen_dummy_shared_info);
+	BUG_ON(xh->HYPERVISOR_shared_info == &xen_dummy_shared_info);
 
 	/*
 	 * This path is called on PVHVM at bootup (xen_hvm_smp_prepare_boot_cpu)
@@ -208,12 +191,12 @@ int xen_vcpu_setup(int cpu)
 	 * use this function.
 	 */
 	if (xen_hvm_domain()) {
-		if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
+		if (xh->xen_vcpu[cpu] == &xh->xen_vcpu_info[cpu])
 			return 0;
 	}
 
 	if (xen_have_vcpu_info_placement) {
-		vcpup = &per_cpu(xen_vcpu_info, cpu);
+		vcpup = &xh->xen_vcpu_info[cpu];
 		info.mfn = arbitrary_virt_to_mfn(vcpup);
 		info.offset = offset_in_page(vcpup);
 
@@ -227,8 +210,8 @@ int xen_vcpu_setup(int cpu)
 		 * hypercall does not allow to over-write info.mfn and
 		 * info.offset.
 		 */
-		err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info,
-					 xen_vcpu_nr(cpu), &info);
+		err = hypervisor_vcpu_op(xh, VCPUOP_register_vcpu_info,
+					 xen_vcpu_nr(xh, cpu), &info);
 
 		if (err) {
 			pr_warn_once("register_vcpu_info failed: cpu=%d err=%d\n",
@@ -239,14 +222,14 @@ int xen_vcpu_setup(int cpu)
 			 * This cpu is using the registered vcpu info, even if
 			 * later ones fail to.
 			 */
-			per_cpu(xen_vcpu, cpu) = vcpup;
+			xh->xen_vcpu[cpu] = vcpup;
 		}
 	}
 
 	if (!xen_have_vcpu_info_placement)
-		xen_vcpu_info_reset(cpu);
+		xen_vcpu_info_reset(xh, cpu);
 
-	return ((per_cpu(xen_vcpu, cpu) == NULL) ? -ENODEV : 0);
+	return ((xh->xen_vcpu[cpu] == NULL) ? -ENODEV : 0);
 }
 
 void xen_reboot(int reason)
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 0e53363f9d1f..c1981a3e4989 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -5,6 +5,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -72,22 +73,22 @@ static void __init xen_hvm_init_mem_mapping(void)
 {
 	xenhost_t **xh;
 
-	for_each_xenhost(xh)
+	for_each_xenhost(xh) {
 		xenhost_reset_shared_info(*xh);
 
-	/*
-	 * The virtual address of the shared_info page has changed, so
-	 * the vcpu_info pointer for VCPU 0 is now stale.
-	 *
-	 * The prepare_boot_cpu callback will re-initialize it via
-	 * xen_vcpu_setup, but we can't rely on that to be called for
-	 * old Xen versions (xen_have_vector_callback == 0).
-	 *
-	 * It is, in any case, bad to have a stale vcpu_info pointer
-	 * so reset it now.
-	 * For now, this uses xh_default implictly.
-	 */
-	xen_vcpu_info_reset(0);
+		/*
+		 * The virtual address of the shared_info page has changed, so
+		 * the vcpu_info pointer for VCPU 0 is now stale.
+		 *
+		 * The prepare_boot_cpu callback will re-initialize it via
+		 * xen_vcpu_setup, but we can't rely on that to be called for
+		 * old Xen versions (xen_have_vector_callback == 0).
+		 *
+		 * It is, in any case, bad to have a stale vcpu_info pointer
+		 * so reset it now.
+		 */
+		xen_vcpu_info_reset(*xh, 0);
+	}
 }
 
 extern uint32_t xen_pv_cpuid_base(xenhost_t *xh);
@@ -103,11 +104,32 @@ void xen_hvm_setup_hypercall_page(xenhost_t *xh)
 	xh->hypercall_page = xen_hypercall_page;
 }
 
+static void xen_hvm_probe_vcpu_id(xenhost_t *xh, int cpu)
+{
+	uint32_t eax, ebx, ecx, edx, base;
+
+	base = xenhost_cpuid_base(xh);
+
+	if (cpu == 0) {
+		cpuid(base + 4, &eax, &ebx, &ecx, &edx);
+		if (eax & XEN_HVM_CPUID_VCPU_ID_PRESENT)
+			xh->xen_vcpu_id[cpu] = ebx;
+		else
+			xh->xen_vcpu_id[cpu] = smp_processor_id();
+	} else {
+		if (cpu_acpi_id(cpu) != U32_MAX)
+			xh->xen_vcpu_id[cpu] = cpu_acpi_id(cpu);
+		else
+			xh->xen_vcpu_id[cpu] = cpu;
+	}
+}
+
 xenhost_ops_t xh_hvm_ops = {
 	.cpuid_base = xen_pv_cpuid_base,
 	.setup_hypercall_page = xen_hvm_setup_hypercall_page,
 	.setup_shared_info = xen_hvm_init_shared_info,
 	.reset_shared_info = xen_hvm_reset_shared_info,
+	.probe_vcpu_id = xen_hvm_probe_vcpu_id,
 };
 
 xenhost_ops_t xh_hvm_nested_ops = {
@@ -116,7 +138,7 @@ xenhost_ops_t xh_hvm_nested_ops = {
 static void __init init_hvm_pv_info(void)
 {
 	int major, minor;
-	uint32_t eax, ebx, ecx, edx, base;
+	uint32_t eax, base;
 	xenhost_t **xh;
 
 	base = xenhost_cpuid_base(xh_default);
@@ -147,11 +169,8 @@ static void __init init_hvm_pv_info(void)
 	if (xen_validate_features() == false)
 		__xenhost_unregister(xenhost_r2);
 
-	cpuid(base + 4, &eax, &ebx, &ecx, &edx);
-	if (eax & XEN_HVM_CPUID_VCPU_ID_PRESENT)
-		this_cpu_write(xen_vcpu_id, ebx);
-	else
-		this_cpu_write(xen_vcpu_id, smp_processor_id());
+	for_each_xenhost(xh)
+		xenhost_probe_vcpu_id(*xh, smp_processor_id());
 }
 
 #ifdef CONFIG_KEXEC_CORE
@@ -172,6 +191,7 @@ static void xen_hvm_crash_shutdown(struct pt_regs *regs)
 static int xen_cpu_up_prepare_hvm(unsigned int cpu)
 {
 	int rc = 0;
+	xenhost_t **xh;
 
 	/*
	 * This can happen if CPU was offlined earlier and
@@ -182,13 +202,12 @@ static int xen_cpu_up_prepare_hvm(unsigned int cpu)
 		xen_uninit_lock_cpu(cpu);
 	}
 
-	if (cpu_acpi_id(cpu) != U32_MAX)
-		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);
-	else
-		per_cpu(xen_vcpu_id, cpu) = cpu;
-	rc = xen_vcpu_setup(cpu);
-	if (rc)
-		return rc;
+	for_each_xenhost(xh) {
+		xenhost_probe_vcpu_id(*xh, cpu);
+		rc = xen_vcpu_setup(*xh, cpu);
+		if (rc)
+			return rc;
+	}
 
 	if (xen_have_vector_callback && xen_feature(XENFEAT_hvm_safe_pvclock))
 		xen_setup_timer(cpu);
@@ -229,15 +248,15 @@ static void __init xen_hvm_guest_init(void)
 	for_each_xenhost(xh) {
 		reserve_shared_info(*xh);
 		xenhost_setup_shared_info(*xh);
+
+		/*
+		 * xen_vcpu is a pointer to the vcpu_info struct in the
+		 * shared_info page, we use it in the event channel upcall
+		 * and in some pvclock related functions.
+		 */
+		xen_vcpu_info_reset(*xh, 0);
 	}
 
-	/*
-	 * xen_vcpu is a pointer to the vcpu_info struct in the shared_info
-	 * page, we use it in the event channel upcall and in some pvclock
-	 * related functions.
-	 * For now, this uses xh_default implictly.
-	 */
-	xen_vcpu_info_reset(0);
 
 	xen_panic_handler_init();
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 1a9eded4b76b..5f6a1475ec0c 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -36,8 +36,8 @@
 
 #include
 #include
-#include
 #include
+#include
 #include
 #include
 #include
@@ -126,12 +126,12 @@ static void __init xen_pv_init_platform(void)
 
 	populate_extra_pte(fix_to_virt(FIX_PARAVIRT_BOOTMAP));
 
-	for_each_xenhost(xh)
+	for_each_xenhost(xh) {
 		xenhost_setup_shared_info(*xh);
 
-	/* xen clock uses per-cpu vcpu_info, need to init it for boot cpu */
-	/* For now this uses xh_default implicitly. */
-	xen_vcpu_info_reset(0);
+		/* xen clock uses per-cpu vcpu_info, need to init it for boot cpu */
+		xen_vcpu_info_reset(*xh, 0);
+	}
 
 	/* pvclock is in shared info area */
 	xen_init_time_ops();
@@ -973,28 +973,31 @@ static void xen_write_msr(unsigned int msr, unsigned low, unsigned high)
 /* This is called once we have the cpu_possible_mask */
 void __init xen_setup_vcpu_info_placement(void)
 {
+	xenhost_t **xh;
 	int cpu;
 
 	for_each_possible_cpu(cpu) {
-		/* Set up direct vCPU id mapping for PV guests. */
-		per_cpu(xen_vcpu_id, cpu) = cpu;
+		for_each_xenhost(xh) {
+			xenhost_probe_vcpu_id(*xh, cpu);
 
-		/*
-		 * xen_vcpu_setup(cpu) can fail -- in which case it
-		 * falls back to the shared_info version for cpus
-		 * where xen_vcpu_nr(cpu) < MAX_VIRT_CPUS.
-		 *
-		 * xen_cpu_up_prepare_pv() handles the rest by failing
-		 * them in hotplug.
-		 */
-		(void) xen_vcpu_setup(cpu);
+			/*
+			 * xen_vcpu_setup(cpu) can fail -- in which case it
+			 * falls back to the shared_info version for cpus
+			 * where xen_vcpu_nr(cpu) < MAX_VIRT_CPUS.
+			 *
+			 * xen_cpu_up_prepare_pv() handles the rest by failing
+			 * them in hotplug.
+			 */
+			(void) xen_vcpu_setup(*xh, cpu);
+		}
 	}
 
 	/*
	 * xen_vcpu_setup managed to place the vcpu_info within the
	 * percpu area for all cpus, so make use of it.
	 */
-	if (xen_have_vcpu_info_placement) {
+	if (xen_have_vcpu_info_placement && false) {
+		/* Disable direct access until we have proper pcpu data structures. */
 		pv_ops.irq.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
 		pv_ops.irq.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
@@ -1110,6 +1113,11 @@ static unsigned char xen_get_nmi_reason(void)
 {
 	unsigned char reason = 0;
 
+	/*
+	 * We could get this information from all the xenhosts and OR it.
+	 * But, the remote xenhost isn't really expected to send us NMIs.
+	 */
+
 	/* Construct a value which looks like it came from port 0x61. */
 	if (test_bit(_XEN_NMIREASON_io_error,
		     &xh_default->HYPERVISOR_shared_info->arch.nmi_reason))
@@ -1222,6 +1230,12 @@ static void xen_pv_reset_shared_info(xenhost_t *xh)
 		BUG();
 }
 
+void xen_pv_probe_vcpu_id(xenhost_t *xh, int cpu)
+{
+	/* Set up direct vCPU id mapping for PV guests. */
+	xh->xen_vcpu_id[cpu] = cpu;
+}
+
 xenhost_ops_t xh_pv_ops = {
 	.cpuid_base = xen_pv_cpuid_base,
 
@@ -1229,6 +1243,8 @@ xenhost_ops_t xh_pv_ops = {
 
 	.setup_shared_info = xen_pv_setup_shared_info,
 	.reset_shared_info = xen_pv_reset_shared_info,
+
+	.probe_vcpu_id = xen_pv_probe_vcpu_id,
 };
 
 xenhost_ops_t xh_pv_nested_ops = {
@@ -1283,7 +1299,9 @@ asmlinkage __visible void __init xen_start_kernel(void)
	 * Don't do the full vcpu_info placement stuff until we have
	 * the cpu_possible_mask and a non-dummy shared_info.
	 */
-	xen_vcpu_info_reset(0);
+	for_each_xenhost(xh) {
+		xen_vcpu_info_reset(*xh, 0);
+	}
 
 	x86_platform.get_nmi_reason = xen_get_nmi_reason;
 
@@ -1328,7 +1346,9 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	get_cpu_address_sizes(&boot_cpu_data);
 
 	/* Let's presume PV guests always boot on vCPU with id 0. */
-	per_cpu(xen_vcpu_id, 0) = 0;
+	/* Note: we should be doing this before xen_vcpu_info_reset above. */
+	for_each_xenhost(xh)
+		xenhost_probe_vcpu_id(*xh, 0);
 
 	idt_setup_early_handler();
 
@@ -1485,7 +1505,7 @@ static int xen_cpu_up_prepare_pv(unsigned int cpu)
 {
 	int rc;
 
-	if (per_cpu(xen_vcpu, cpu) == NULL)
+	if (xh_default->xen_vcpu[cpu] == NULL)
 		return -ENODEV;
 
 	xen_setup_timer(cpu);
diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
index 50277dfbdf30..3f98526dd041 100644
--- a/arch/x86/xen/enlighten_pvh.c
+++ b/arch/x86/xen/enlighten_pvh.c
@@ -2,13 +2,14 @@
 #include
 
 #include
+#include
 
 #include
 #include
 #include
 
-#include
 #include
+#include
 #include
 #include
 
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 850c93f346c7..38ad1a1c4763 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -29,7 +29,7 @@ asmlinkage __visible unsigned long xen_save_fl(void)
 	struct vcpu_info *vcpu;
 	unsigned long flags;
 
-	vcpu = this_cpu_read(xen_vcpu);
+	vcpu = xh_default->xen_vcpu[smp_processor_id()];
 
 	/* flag has opposite sense of mask */
 	flags = !vcpu->evtchn_upcall_mask;
@@ -51,7 +51,7 @@ __visible void xen_restore_fl(unsigned long flags)
 
 	/* See xen_irq_enable() for why preemption must be disabled. */
 	preempt_disable();
-	vcpu = this_cpu_read(xen_vcpu);
+	vcpu = xh_default->xen_vcpu[smp_processor_id()];
 	vcpu->evtchn_upcall_mask = flags;
 
 	if (flags == 0) {
@@ -70,7 +70,7 @@ asmlinkage __visible void xen_irq_disable(void)
	   make sure we're don't switch CPUs between getting the vcpu
	   pointer and updating the mask. */
 	preempt_disable();
-	this_cpu_read(xen_vcpu)->evtchn_upcall_mask = 1;
+	xh_default->xen_vcpu[smp_processor_id()]->evtchn_upcall_mask = 1;
 	preempt_enable_no_resched();
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_irq_disable);
@@ -86,7 +86,7 @@ asmlinkage __visible void xen_irq_enable(void)
	 */
 	preempt_disable();
 
-	vcpu = this_cpu_read(xen_vcpu);
+	vcpu = xh_default->xen_vcpu[smp_processor_id()];
 	vcpu->evtchn_upcall_mask = 0;
 
	/* Doesn't matter if we get preempted here, because any
@@ -111,7 +111,7 @@ static void xen_halt(void)
 {
 	if (irqs_disabled())
 		HYPERVISOR_vcpu_op(VCPUOP_down,
-				   xen_vcpu_nr(smp_processor_id()), NULL);
+				   xen_vcpu_nr(xh_default, smp_processor_id()), NULL);
 	else
 		xen_safe_halt();
 }
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 0f4fe206dcc2..e99af51ab481 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -1304,17 +1304,17 @@ static void __init xen_pagetable_init(void)
 }
 static void xen_write_cr2(unsigned long cr2)
 {
-	this_cpu_read(xen_vcpu)->arch.cr2 = cr2;
+	xh_default->xen_vcpu[smp_processor_id()]->arch.cr2 = cr2;
 }
 
 static unsigned long xen_read_cr2(void)
 {
-	return this_cpu_read(xen_vcpu)->arch.cr2;
+	return xh_default->xen_vcpu[smp_processor_id()]->arch.cr2;
 }
 
 unsigned long xen_read_cr2_direct(void)
 {
-	return this_cpu_read(xen_vcpu_info.arch.cr2);
+	return xh_default->xen_vcpu_info[smp_processor_id()].arch.cr2;
 }
 
 static noinline void xen_flush_tlb(void)
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 33293ce01d8d..04f9b2e92f06 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -4,6 +4,7 @@
 
 #include
 #include
+#include
 #include
 
 #include
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index d5f303c0e656..ec8f22a54f6e 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 7a43b2ae19f1..867524be0065 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -6,6 +6,7 @@
 #include
 
 #include
+#include
 
 #include
 #include "xen-ops.h"
@@ -129,7 +130,10 @@ void __init xen_smp_cpus_done(unsigned int max_cpus)
 		return;
 
 	for_each_online_cpu(cpu) {
-		if (xen_vcpu_nr(cpu) < MAX_VIRT_CPUS)
+		xenhost_t **xh;
+
+		if ((xen_vcpu_nr(xh_default, cpu) < MAX_VIRT_CPUS) &&
+		    (!xh_remote || (xen_vcpu_nr(xh_remote, cpu) < MAX_VIRT_CPUS)))
 			continue;
 
 		rc = cpu_down(cpu);
@@ -138,7 +142,8 @@ void __init xen_smp_cpus_done(unsigned int max_cpus)
			/*
			 * Reset vcpu_info so this cpu cannot be onlined again.
			 */
-			xen_vcpu_info_reset(cpu);
+			for_each_xenhost(xh)
+				xen_vcpu_info_reset(*xh, cpu);
 			count++;
 		} else {
 			pr_warn("%s: failed to bring CPU %d down, error %d\n",
diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
index f8d39440b292..5e7f591bfdd9 100644
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -9,6 +9,7 @@
 
 static void __init xen_hvm_smp_prepare_boot_cpu(void)
 {
+	xenhost_t **xh;
 	BUG_ON(smp_processor_id() != 0);
 	native_smp_prepare_boot_cpu();
 
@@ -16,7 +17,8 @@ static void __init xen_hvm_smp_prepare_boot_cpu(void)
	 * Setup vcpu_info for boot CPU. Secondary CPUs get their vcpu_info
	 * in xen_cpu_up_prepare_hvm().
	 */
-	xen_vcpu_setup(0);
+	for_each_xenhost(xh)
+		xen_vcpu_setup(*xh, 0);
 
	/*
	 * The alternative logic (which patches the unlock/lock) runs before
@@ -29,6 +31,7 @@ static void __init xen_hvm_smp_prepare_boot_cpu(void)
 
 static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 {
+	xenhost_t **xh;
 	int cpu;
 
 	native_smp_prepare_cpus(max_cpus);
@@ -36,12 +39,14 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 
 	xen_init_lock_cpu(0);
 
-	for_each_possible_cpu(cpu) {
-		if (cpu == 0)
-			continue;
+	for_each_xenhost(xh) {
+		for_each_possible_cpu(cpu) {
+			if (cpu == 0)
+				continue;
 
-		/* Set default vcpu_id to make sure that we don't use cpu-0's */
-		per_cpu(xen_vcpu_id, cpu) = XEN_VCPU_ID_INVALID;
+			/* Set default vcpu_id to make sure that we don't use cpu-0's */
+			(*xh)->xen_vcpu_id[cpu] = XEN_VCPU_ID_INVALID;
+		}
 	}
 }
 
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index 145506f9fdbe..6d9c3e6611ef 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -350,7 +350,7 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 	per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
 
 	ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_gfn(swapper_pg_dir));
-	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, xen_vcpu_nr(cpu), ctxt))
+	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, xen_vcpu_nr(xh_default, cpu), ctxt))
 		BUG();
 
 	kfree(ctxt);
@@ -374,7 +374,7 @@ static int xen_pv_cpu_up(unsigned int cpu, struct task_struct *idle)
 		return rc;
 
 	/* make sure interrupts start blocked */
-	per_cpu(xen_vcpu, cpu)->evtchn_upcall_mask = 1;
+	xh_default->xen_vcpu[cpu]->evtchn_upcall_mask = 1;
 
 	rc = cpu_initialize_context(cpu, idle);
 	if (rc)
@@ -382,7 +382,7 @@ static int xen_pv_cpu_up(unsigned int cpu, struct task_struct *idle)
 
 	xen_pmu_init(cpu);
 
-	rc = HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL);
+	rc = HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(xh_default, cpu), NULL);
 	BUG_ON(rc);
 
 	while (cpu_report_state(cpu) != CPU_ONLINE)
@@ -407,7 +407,7 @@ static int xen_pv_cpu_disable(void)
 static void xen_pv_cpu_die(unsigned int cpu)
 {
 	while (HYPERVISOR_vcpu_op(VCPUOP_is_up,
-				  xen_vcpu_nr(cpu), NULL)) {
+				  xen_vcpu_nr(xh_default, cpu), NULL)) {
 		__set_current_state(TASK_UNINTERRUPTIBLE);
 		schedule_timeout(HZ/10);
 	}
@@ -423,7 +423,7 @@ static void xen_pv_cpu_die(unsigned int cpu)
 static void xen_pv_play_dead(void)	/* used only with HOTPLUG_CPU */
 {
 	play_dead_common();
-	HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(smp_processor_id()), NULL);
+	HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(xh_default, smp_processor_id()), NULL);
 	cpu_bringup();
	/*
	 * commit 4b0c0f294 (tick: Cleanup NOHZ per cpu data on cpu down)
@@ -464,7 +464,7 @@ static void stop_self(void *v)
 
 	set_cpu_online(cpu, false);
 
-	HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(cpu), NULL);
+	HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(xh_default, cpu), NULL);
 	BUG();
 }
 
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index d4bb1f8b4f58..217bc4de07ee 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -18,12 +18,12 @@
 #include
 
 #include
+#include
 #include
 #include
 
 #include
 #include
-#include
 #include
 
 #include "xen-ops.h"
@@ -48,7 +48,7 @@ static u64 xen_clocksource_read(void)
 	u64 ret;
 
 	preempt_disable_notrace();
-	src = &__this_cpu_read(xen_vcpu)->time;
+	src = &xh_default->xen_vcpu[smp_processor_id()]->time;
 	ret = pvclock_clocksource_read(src);
 	preempt_enable_notrace();
 	return ret;
@@ -70,9 +70,10 @@ static void xen_read_wallclock(struct timespec64 *ts)
 	struct pvclock_wall_clock *wall_clock = &(s->wc);
 	struct pvclock_vcpu_time_info *vcpu_time;
 
-	vcpu_time = &get_cpu_var(xen_vcpu)->time;
+	preempt_disable_notrace();
+	vcpu_time = &xh_default->xen_vcpu[smp_processor_id()]->time;
 	pvclock_read_wallclock(wall_clock, vcpu_time, ts);
-	put_cpu_var(xen_vcpu);
+	preempt_enable_notrace();
 }
 
 static void xen_get_wallclock(struct timespec64 *now)
@@ -233,9 +234,9 @@ static int xen_vcpuop_shutdown(struct clock_event_device *evt)
 {
 	int cpu = smp_processor_id();
 
-	if (HYPERVISOR_vcpu_op(VCPUOP_stop_singleshot_timer, xen_vcpu_nr(cpu),
+	if (HYPERVISOR_vcpu_op(VCPUOP_stop_singleshot_timer, xen_vcpu_nr(xh_default, cpu),
			       NULL) ||
-	    HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, xen_vcpu_nr(cpu),
+	    HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, xen_vcpu_nr(xh_default, cpu),
			       NULL))
 		BUG();
 
@@ -246,7 +247,7 @@ static int xen_vcpuop_set_oneshot(struct clock_event_device *evt)
 {
 	int cpu = smp_processor_id();
 
-	if (HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, xen_vcpu_nr(cpu),
+	if (HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, xen_vcpu_nr(xh_default, cpu),
			       NULL))
 		BUG();
 
@@ -266,7 +267,7 @@ static int xen_vcpuop_set_next_event(unsigned long delta,
 	/* Get an event anyway, even if the timeout is already expired */
 	single.flags = 0;
 
-	ret = HYPERVISOR_vcpu_op(VCPUOP_set_singleshot_timer, xen_vcpu_nr(cpu),
+	ret = HYPERVISOR_vcpu_op(VCPUOP_set_singleshot_timer, xen_vcpu_nr(xh_default, cpu),
				 &single);
 	BUG_ON(ret != 0);
 
@@ -366,7 +367,7 @@ void xen_timer_resume(void)
 
 	for_each_online_cpu(cpu) {
 		if (HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer,
-				       xen_vcpu_nr(cpu), NULL))
+				       xen_vcpu_nr(xh_default, cpu), NULL))
 			BUG();
 	}
 }
@@ -482,7 +483,7 @@ static void __init xen_time_init(void)
 
 	clocksource_register_hz(&xen_clocksource, NSEC_PER_SEC);
 
-	if (HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, xen_vcpu_nr(cpu),
+	if (HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, xen_vcpu_nr(xh_default, cpu),
			       NULL) == 0) {
		/* Successfully turned off 100Hz tick, so we have the
		   vcpuop-based timer interface */
@@ -500,7 +501,7 @@ static void __init xen_time_init(void)
	 * We check ahead on the primary time info if this
	 * bit is supported hence speeding up Xen clocksource.
	 */
-	pvti = &__this_cpu_read(xen_vcpu)->time;
+	pvti = &xh_default->xen_vcpu[smp_processor_id()]->time;
 	if (pvti->flags & PVCLOCK_TSC_STABLE_BIT) {
 		pvclock_set_flags(PVCLOCK_TSC_STABLE_BIT);
 		xen_setup_vsyscall_time_info();
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 5085ce88a8d7..96fd7edea7e9 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -22,7 +22,6 @@ extern void *xen_initial_gdt;
 struct trap_info;
 void xen_copy_trap_info(struct trap_info *traps);
 
-DECLARE_PER_CPU(struct vcpu_info, xen_vcpu_info);
 DECLARE_PER_CPU(unsigned long, xen_cr3);
 DECLARE_PER_CPU(unsigned long, xen_current_cr3);
 
@@ -76,8 +75,8 @@ bool xen_vcpu_stolen(int vcpu);
 
 extern int xen_have_vcpu_info_placement;
 
-int xen_vcpu_setup(int cpu);
-void xen_vcpu_info_reset(int cpu);
+int xen_vcpu_setup(xenhost_t *xh, int cpu);
+void xen_vcpu_info_reset(xenhost_t *xh, int cpu);
 void xen_setup_vcpu_info_placement(void);
 
 #ifdef CONFIG_SMP
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 117e76b2f939..ae497876fe41 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -884,7 +884,7 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
 		irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
					      handle_percpu_irq, "ipi");
 
-		bind_ipi.vcpu = xen_vcpu_nr(cpu);
+		bind_ipi.vcpu = xen_vcpu_nr(xh_default, cpu);
 		if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_ipi,
						&bind_ipi) != 0)
 			BUG();
@@ -937,7 +937,7 @@ static int find_virq(unsigned int virq, unsigned int cpu)
 			continue;
 		if (status.status != EVTCHNSTAT_virq)
 			continue;
-		if (status.u.virq == virq && status.vcpu == xen_vcpu_nr(cpu)) {
+		if (status.u.virq == virq && status.vcpu == xen_vcpu_nr(xh_default, cpu)) {
 			rc = port;
 			break;
 		}
@@ -980,7 +980,7 @@ int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu)
					      handle_edge_irq, "virq");
 
 		bind_virq.virq = virq;
-		bind_virq.vcpu
=3D xen_vcpu_nr(cpu); + bind_virq.vcpu =3D xen_vcpu_nr(xh_default, cpu); ret =3D HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq, &bind_virq); if (ret =3D=3D 0) @@ -1200,7 +1200,7 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vect= or vector) =20 #ifdef CONFIG_X86 if (unlikely(vector =3D=3D XEN_NMI_VECTOR)) { - int rc =3D HYPERVISOR_vcpu_op(VCPUOP_send_nmi, xen_vcpu_nr(cpu), + int rc =3D HYPERVISOR_vcpu_op(VCPUOP_send_nmi, xen_vcpu_nr(xh_default, = cpu), NULL); if (rc < 0) printk(KERN_WARNING "Sending nmi to CPU%d failed (rc:%d)\n", cpu, rc); @@ -1306,7 +1306,7 @@ int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcp= u) =20 /* Send future instances of this interrupt to other vcpu. */ bind_vcpu.port =3D evtchn; - bind_vcpu.vcpu =3D xen_vcpu_nr(tcpu); + bind_vcpu.vcpu =3D xen_vcpu_nr(xh_default, tcpu); =20 /* * Mask the event while changing the VCPU binding to prevent @@ -1451,7 +1451,7 @@ static void restore_cpu_virqs(unsigned int cpu) =20 /* Get a new binding from Xen. */ bind_virq.virq =3D virq; - bind_virq.vcpu =3D xen_vcpu_nr(cpu); + bind_virq.vcpu =3D xen_vcpu_nr(xh_default, cpu); if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq, &bind_virq) !=3D 0) BUG(); @@ -1475,7 +1475,7 @@ static void restore_cpu_ipis(unsigned int cpu) BUG_ON(ipi_from_irq(irq) !=3D ipi); =20 /* Get a new binding from Xen. 
*/ - bind_ipi.vcpu =3D xen_vcpu_nr(cpu); + bind_ipi.vcpu =3D xen_vcpu_nr(xh_default, cpu); if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_ipi, &bind_ipi) !=3D 0) BUG(); diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_f= ifo.c index 76b318e88382..eed766219dd0 100644 --- a/drivers/xen/events/events_fifo.c +++ b/drivers/xen/events/events_fifo.c @@ -113,7 +113,7 @@ static int init_control_block(int cpu, =20 init_control.control_gfn =3D virt_to_gfn(control_block); init_control.offset =3D 0; - init_control.vcpu =3D xen_vcpu_nr(cpu); + init_control.vcpu =3D xen_vcpu_nr(xh_default, cpu); =20 return HYPERVISOR_event_channel_op(EVTCHNOP_init_control, &init_control); } diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c index 6d1a5e58968f..66622109f2be 100644 --- a/drivers/xen/evtchn.c +++ b/drivers/xen/evtchn.c @@ -475,7 +475,7 @@ static long evtchn_ioctl(struct file *file, break; =20 bind_virq.virq =3D bind.virq; - bind_virq.vcpu =3D xen_vcpu_nr(0); + bind_virq.vcpu =3D xen_vcpu_nr(xh_default, 0); rc =3D HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq, &bind_virq); if (rc !=3D 0) diff --git a/drivers/xen/time.c b/drivers/xen/time.c index 0968859c29d0..feee74bbab0a 100644 --- a/drivers/xen/time.c +++ b/drivers/xen/time.c @@ -164,7 +164,7 @@ void xen_setup_runstate_info(int cpu) area.addr.v =3D &per_cpu(xen_runstate, cpu); =20 if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area, - xen_vcpu_nr(cpu), &area)) + xen_vcpu_nr(xh_default, cpu), &area)) BUG(); } =20 diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h index 4969817124a8..75be9059893f 100644 --- a/include/xen/xen-ops.h +++ b/include/xen/xen-ops.h @@ -9,12 +9,9 @@ #include #include =20 -DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu); - -DECLARE_PER_CPU(uint32_t, xen_vcpu_id); -static inline uint32_t xen_vcpu_nr(int cpu) +static inline uint32_t xen_vcpu_nr(xenhost_t *xh, int cpu) { - return per_cpu(xen_vcpu_id, cpu); + return xh->xen_vcpu_id[cpu]; } =20 #define 
XEN_VCPU_ID_INVALID U32_MAX diff --git a/include/xen/xenhost.h b/include/xen/xenhost.h index 7c19c361d16e..f6092a8987f1 100644 --- a/include/xen/xenhost.h +++ b/include/xen/xenhost.h @@ -90,6 +90,28 @@ typedef struct { struct shared_info *HYPERVISOR_shared_info; unsigned long shared_info_pfn; }; + + struct { + /* + * Events on xen-evtchn ports show up in struct vcpu_info. + * With multiple xenhosts, the evtchn-port numbering space that + * was global so far is now attached to a xenhost. + * + * So, now we allocate vcpu_info for each processor (we had space + * for only MAX_VIRT_CPUS in the shared_info above.) + * + * FIXME we statically allocate for NR_CPUS because alloc_percpu() + * isn't available at PV boot time but this is slow. + */ + struct vcpu_info xen_vcpu_info[NR_CPUS]; + struct vcpu_info *xen_vcpu[NR_CPUS]; + + /* + * Different xenhosts might have different Linux <-> Xen vCPU-id + * mapping. + */ + uint32_t xen_vcpu_id[NR_CPUS]; + }; } xenhost_t; =20 typedef struct xenhost_ops { @@ -139,6 +161,26 @@ typedef struct xenhost_ops { */ void (*setup_shared_info)(xenhost_t *xenhost); void (*reset_shared_info)(xenhost_t *xenhost); + + /* + * vcpu_info, vcpu_id: needs to be setup early -- all IRQ code accesses + * relevant bits. + * + * vcpu_id is probed on PVH/PVHVM via xen_cpuid(). For PV, its direct + * mapped to smp_processor_id(). + * + * This is part of xenhost_t because we might be registered with two + * different xenhosts and both of those might have their own vcpu + * numbering. + * + * After the vcpu numbering is identified, we can go ahead and register + * vcpu_info with the xenhost; on the default xenhost this happens via + * the register_vcpu_info hypercall. + * + * Once vcpu_info is setup (this or the shared_info version), it would + * get accessed via pv_ops.irq.* and the evtchn logic. 
+ */ + void (*probe_vcpu_id)(xenhost_t *xenhost, int cpu); } xenhost_ops_t; =20 extern xenhost_t *xh_default, *xh_remote; @@ -185,4 +227,9 @@ static inline void xenhost_reset_shared_info(xenhost_t = *xh) (xh->ops->reset_shared_info)(xh); } =20 +static inline void xenhost_probe_vcpu_id(xenhost_t *xh, int cpu) +{ + (xh->ops->probe_vcpu_id)(xh, cpu); +} + #endif /* __XENHOST_H */ --=20 2.20.1 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel