From: Christopher Clark
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Wei Liu, Andrew Cooper, Rich Persaud, Jan Beulich,
    Roger Pau Monné
Date: Wed, 19 Jun 2019 17:30:45 -0700
Subject: [Xen-devel] [RFC 1/9] x86/guest: code movement to separate Xen detection from guest functions
Message-Id: <20190620003053.21993-2-christopher.w.clark@gmail.com>
In-Reply-To: <20190620003053.21993-1-christopher.w.clark@gmail.com>
References: <20190620003053.21993-1-christopher.w.clark@gmail.com>
X-Mailer: git-send-email 2.17.1

Move some logic from: xen/arch/x86/guest/xen.c
into a new file:      xen/arch/x86/guest/xen-guest.c

xen.c then contains the functions for basic Xen detection and
xen-guest.c implements the intended behaviour changes when Xen is
running as a guest.

Since CONFIG_XEN_GUEST must currently be defined for any of this code
to be included, making xen-guest.o conditional upon it here works
correctly and avoids further change to it in later patches in the
series.

No functional change intended.
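As background for the Makefile hunk below, a minimal sketch of how the
conditional object rule behaves under Xen's Kbuild-style rules
(illustrative only, not part of this patch; assumes CONFIG_XEN_GUEST=y
in the build configuration):

    # Illustrative sketch, not part of the patch.
    obj-y                   += xen.o          # always built
    obj-$(CONFIG_XEN_GUEST) += xen-guest.o    # with CONFIG_XEN_GUEST=y this
                                              # expands to "obj-y += xen-guest.o";
                                              # with the option unset it expands
                                              # to "obj- += xen-guest.o", a list
                                              # the build rules never reference,
                                              # so the file is not compiled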
Signed-off-by: Christopher Clark
---
 xen/arch/x86/guest/Makefile    |   1 +
 xen/arch/x86/guest/xen-guest.c | 301 +++++++++++++++++++++++++++++++++
 xen/arch/x86/guest/xen.c       | 254 ----------------------------
 3 files changed, 302 insertions(+), 254 deletions(-)
 create mode 100644 xen/arch/x86/guest/xen-guest.c

diff --git a/xen/arch/x86/guest/Makefile b/xen/arch/x86/guest/Makefile
index 26fb4b1007..6ddaa3748f 100644
--- a/xen/arch/x86/guest/Makefile
+++ b/xen/arch/x86/guest/Makefile
@@ -1,4 +1,5 @@
 obj-y += hypercall_page.o
 obj-y += xen.o
+obj-$(CONFIG_XEN_GUEST) += xen-guest.o
 
 obj-bin-$(CONFIG_PVH_GUEST) += pvh-boot.init.o
diff --git a/xen/arch/x86/guest/xen-guest.c b/xen/arch/x86/guest/xen-guest.c
new file mode 100644
index 0000000000..65596ab1b1
--- /dev/null
+++ b/xen/arch/x86/guest/xen-guest.c
@@ -0,0 +1,301 @@
+/******************************************************************************
+ * arch/x86/guest/xen-guest.c
+ *
+ * Support for running a single VM with Xen as a guest.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (c) 2017 Citrix Systems Ltd.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+bool __read_mostly xen_guest;
+
+static struct rangeset *mem;
+
+DEFINE_PER_CPU(unsigned int, vcpu_id);
+
+static struct vcpu_info *vcpu_info;
+static unsigned long vcpu_info_mapped[BITS_TO_LONGS(NR_CPUS)];
+DEFINE_PER_CPU(struct vcpu_info *, vcpu_info);
+
+static void map_shared_info(void)
+{
+    mfn_t mfn;
+    struct xen_add_to_physmap xatp = {
+        .domid = DOMID_SELF,
+        .space = XENMAPSPACE_shared_info,
+    };
+    unsigned int i;
+    unsigned long rc;
+
+    if ( hypervisor_alloc_unused_page(&mfn) )
+        panic("unable to reserve shared info memory page\n");
+
+    xatp.gpfn = mfn_x(mfn);
+    rc = xen_hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
+    if ( rc )
+        panic("failed to map shared_info page: %ld\n", rc);
+
+    set_fixmap(FIX_XEN_SHARED_INFO, mfn_x(mfn) << PAGE_SHIFT);
+
+    /* Mask all upcalls */
+    for ( i = 0; i < ARRAY_SIZE(XEN_shared_info->evtchn_mask); i++ )
+        write_atomic(&XEN_shared_info->evtchn_mask[i], ~0ul);
+}
+
+static int map_vcpuinfo(void)
+{
+    unsigned int vcpu = this_cpu(vcpu_id);
+    struct vcpu_register_vcpu_info info;
+    int rc;
+
+    if ( !vcpu_info )
+    {
+        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
+        return 0;
+    }
+
+    if ( test_bit(vcpu, vcpu_info_mapped) )
+    {
+        this_cpu(vcpu_info) = &vcpu_info[vcpu];
+        return 0;
+    }
+
+    info.mfn = virt_to_mfn(&vcpu_info[vcpu]);
+    info.offset = (unsigned long)&vcpu_info[vcpu] & ~PAGE_MASK;
+    rc = xen_hypercall_vcpu_op(VCPUOP_register_vcpu_info, vcpu, &info);
+    if ( rc )
+    {
+        BUG_ON(vcpu >= XEN_LEGACY_MAX_VCPUS);
+        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
+    }
+    else
+    {
+        this_cpu(vcpu_info) = &vcpu_info[vcpu];
+        set_bit(vcpu, vcpu_info_mapped);
+    }
+
+    return rc;
+}
+
+static void set_vcpu_id(void)
+{
+    uint32_t cpuid_base, eax, ebx, ecx, edx;
+
+    cpuid_base = hypervisor_cpuid_base();
+
+    ASSERT(cpuid_base);
+
+    /* Fetch vcpu id from cpuid. */
+    cpuid(cpuid_base + 4, &eax, &ebx, &ecx, &edx);
+    if ( eax & XEN_HVM_CPUID_VCPU_ID_PRESENT )
+        this_cpu(vcpu_id) = ebx;
+    else
+        this_cpu(vcpu_id) = smp_processor_id();
+}
+
+static void __init init_memmap(void)
+{
+    unsigned int i;
+
+    mem = rangeset_new(NULL, "host memory map", 0);
+    if ( !mem )
+        panic("failed to allocate PFN usage rangeset\n");
+
+    /*
+     * Mark up to the last memory page (or 4GiB) as RAM. This is done because
+     * Xen doesn't know the position of possible MMIO holes, so at least try to
+     * avoid the know MMIO hole below 4GiB. Note that this is subject to future
+     * discussion and improvements.
+     */
+    if ( rangeset_add_range(mem, 0, max_t(unsigned long, max_page - 1,
+                                          PFN_DOWN(GB(4) - 1))) )
+        panic("unable to add RAM to in-use PFN rangeset\n");
+
+    for ( i = 0; i < e820.nr_map; i++ )
+    {
+        struct e820entry *e = &e820.map[i];
+
+        if ( rangeset_add_range(mem, PFN_DOWN(e->addr),
+                                PFN_UP(e->addr + e->size - 1)) )
+            panic("unable to add range [%#lx, %#lx] to in-use PFN rangeset\n",
+                  PFN_DOWN(e->addr), PFN_UP(e->addr + e->size - 1));
+    }
+}
+
+static void xen_evtchn_upcall(struct cpu_user_regs *regs)
+{
+    struct vcpu_info *vcpu_info = this_cpu(vcpu_info);
+    unsigned long pending;
+
+    vcpu_info->evtchn_upcall_pending = 0;
+    pending = xchg(&vcpu_info->evtchn_pending_sel, 0);
+
+    while ( pending )
+    {
+        unsigned int l1 = find_first_set_bit(pending);
+        unsigned long evtchn = xchg(&XEN_shared_info->evtchn_pending[l1], 0);
+
+        __clear_bit(l1, &pending);
+        evtchn &= ~XEN_shared_info->evtchn_mask[l1];
+        while ( evtchn )
+        {
+            unsigned int port = find_first_set_bit(evtchn);
+
+            __clear_bit(port, &evtchn);
+            port += l1 * BITS_PER_LONG;
+
+            if ( pv_console && port == pv_console_evtchn() )
+                pv_console_rx(regs);
+            else if ( pv_shim )
+                pv_shim_inject_evtchn(port);
+        }
+    }
+
+    ack_APIC_irq();
+}
+
+static void init_evtchn(void)
+{
+    static uint8_t evtchn_upcall_vector;
+    int rc;
+
+    if ( !evtchn_upcall_vector )
+        alloc_direct_apic_vector(&evtchn_upcall_vector, xen_evtchn_upcall);
+
+    ASSERT(evtchn_upcall_vector);
+
+    rc = xen_hypercall_set_evtchn_upcall_vector(this_cpu(vcpu_id),
+                                                evtchn_upcall_vector);
+    if ( rc )
+        panic("Unable to set evtchn upcall vector: %d\n", rc);
+
+    /* Trick toolstack to think we are enlightened */
+    {
+        struct xen_hvm_param a = {
+            .domid = DOMID_SELF,
+            .index = HVM_PARAM_CALLBACK_IRQ,
+            .value = 1,
+        };
+
+        BUG_ON(xen_hypercall_hvm_op(HVMOP_set_param, &a));
+    }
+}
+
+void __init hypervisor_setup(void)
+{
+    init_memmap();
+
+    map_shared_info();
+
+    set_vcpu_id();
+    vcpu_info = xzalloc_array(struct vcpu_info, nr_cpu_ids);
+    if ( map_vcpuinfo() )
+    {
+        xfree(vcpu_info);
+        vcpu_info = NULL;
+    }
+    if ( !vcpu_info && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
+    {
+        unsigned int i;
+
+        for ( i = XEN_LEGACY_MAX_VCPUS; i < nr_cpu_ids; i++ )
+            __cpumask_clear_cpu(i, &cpu_present_map);
+        nr_cpu_ids = XEN_LEGACY_MAX_VCPUS;
+        printk(XENLOG_WARNING
+               "unable to map vCPU info, limiting vCPUs to: %u\n",
+               XEN_LEGACY_MAX_VCPUS);
+    }
+
+    init_evtchn();
+}
+
+void hypervisor_ap_setup(void)
+{
+    set_vcpu_id();
+    map_vcpuinfo();
+    init_evtchn();
+}
+
+int hypervisor_alloc_unused_page(mfn_t *mfn)
+{
+    unsigned long m;
+    int rc;
+
+    rc = rangeset_claim_range(mem, 1, &m);
+    if ( !rc )
+        *mfn = _mfn(m);
+
+    return rc;
+}
+
+int hypervisor_free_unused_page(mfn_t mfn)
+{
+    return rangeset_remove_range(mem, mfn_x(mfn), mfn_x(mfn));
+}
+
+static void ap_resume(void *unused)
+{
+    map_vcpuinfo();
+    init_evtchn();
+}
+
+void hypervisor_resume(void)
+{
+    /* Reset shared info page. */
+    map_shared_info();
+
+    /*
+     * Reset vcpu_info. Just clean the mapped bitmap and try to map the vcpu
+     * area again. On failure to map (when it was previously mapped) panic
+     * since it's impossible to safely shut down running guest vCPUs in order
+     * to meet the new XEN_LEGACY_MAX_VCPUS requirement.
+     */
+    bitmap_zero(vcpu_info_mapped, NR_CPUS);
+    if ( map_vcpuinfo() && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
+        panic("unable to remap vCPU info and vCPUs > legacy limit\n");
+
+    /* Setup event channel upcall vector. */
+    init_evtchn();
+    smp_call_function(ap_resume, NULL, 1);
+
+    if ( pv_console )
+        pv_console_init();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/guest/xen.c b/xen/arch/x86/guest/xen.c
index 7b7a5badab..90d464bdbd 100644
--- a/xen/arch/x86/guest/xen.c
+++ b/xen/arch/x86/guest/xen.c
@@ -22,9 +22,7 @@
 #include
 #include
 #include
-#include
 #include
-#include
 
 #include
 #include
@@ -35,17 +33,8 @@
 #include
 #include
 
-bool __read_mostly xen_guest;
-
 static __read_mostly uint32_t xen_cpuid_base;
 extern char hypercall_page[];
-static struct rangeset *mem;
-
-DEFINE_PER_CPU(unsigned int, vcpu_id);
-
-static struct vcpu_info *vcpu_info;
-static unsigned long vcpu_info_mapped[BITS_TO_LONGS(NR_CPUS)];
-DEFINE_PER_CPU(struct vcpu_info *, vcpu_info);
 
 static void __init find_xen_leaves(void)
 {
@@ -87,254 +76,11 @@ void __init probe_hypervisor(void)
     xen_guest = true;
 }
 
-static void map_shared_info(void)
-{
-    mfn_t mfn;
-    struct xen_add_to_physmap xatp = {
-        .domid = DOMID_SELF,
-        .space = XENMAPSPACE_shared_info,
-    };
-    unsigned int i;
-    unsigned long rc;
-
-    if ( hypervisor_alloc_unused_page(&mfn) )
-        panic("unable to reserve shared info memory page\n");
-
-    xatp.gpfn = mfn_x(mfn);
-    rc = xen_hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
-    if ( rc )
-        panic("failed to map shared_info page: %ld\n", rc);
-
-    set_fixmap(FIX_XEN_SHARED_INFO, mfn_x(mfn) << PAGE_SHIFT);
-
-    /* Mask all upcalls */
-    for ( i = 0; i < ARRAY_SIZE(XEN_shared_info->evtchn_mask); i++ )
-        write_atomic(&XEN_shared_info->evtchn_mask[i], ~0ul);
-}
-
-static int map_vcpuinfo(void)
-{
-    unsigned int vcpu = this_cpu(vcpu_id);
-    struct vcpu_register_vcpu_info info;
-    int rc;
-
-    if ( !vcpu_info )
-    {
-        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
-        return 0;
-    }
-
-    if ( test_bit(vcpu, vcpu_info_mapped) )
-    {
-        this_cpu(vcpu_info) = &vcpu_info[vcpu];
-        return 0;
-    }
-
-    info.mfn = virt_to_mfn(&vcpu_info[vcpu]);
-    info.offset = (unsigned long)&vcpu_info[vcpu] & ~PAGE_MASK;
-    rc = xen_hypercall_vcpu_op(VCPUOP_register_vcpu_info, vcpu, &info);
-    if ( rc )
-    {
-        BUG_ON(vcpu >= XEN_LEGACY_MAX_VCPUS);
-        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
-    }
-    else
-    {
-        this_cpu(vcpu_info) = &vcpu_info[vcpu];
-        set_bit(vcpu, vcpu_info_mapped);
-    }
-
-    return rc;
-}
-
-static void set_vcpu_id(void)
-{
-    uint32_t eax, ebx, ecx, edx;
-
-    ASSERT(xen_cpuid_base);
-
-    /* Fetch vcpu id from cpuid. */
-    cpuid(xen_cpuid_base + 4, &eax, &ebx, &ecx, &edx);
-    if ( eax & XEN_HVM_CPUID_VCPU_ID_PRESENT )
-        this_cpu(vcpu_id) = ebx;
-    else
-        this_cpu(vcpu_id) = smp_processor_id();
-}
-
-static void __init init_memmap(void)
-{
-    unsigned int i;
-
-    mem = rangeset_new(NULL, "host memory map", 0);
-    if ( !mem )
-        panic("failed to allocate PFN usage rangeset\n");
-
-    /*
-     * Mark up to the last memory page (or 4GiB) as RAM. This is done because
-     * Xen doesn't know the position of possible MMIO holes, so at least try to
-     * avoid the know MMIO hole below 4GiB. Note that this is subject to future
-     * discussion and improvements.
-     */
-    if ( rangeset_add_range(mem, 0, max_t(unsigned long, max_page - 1,
-                                          PFN_DOWN(GB(4) - 1))) )
-        panic("unable to add RAM to in-use PFN rangeset\n");
-
-    for ( i = 0; i < e820.nr_map; i++ )
-    {
-        struct e820entry *e = &e820.map[i];
-
-        if ( rangeset_add_range(mem, PFN_DOWN(e->addr),
-                                PFN_UP(e->addr + e->size - 1)) )
-            panic("unable to add range [%#lx, %#lx] to in-use PFN rangeset\n",
-                  PFN_DOWN(e->addr), PFN_UP(e->addr + e->size - 1));
-    }
-}
-
-static void xen_evtchn_upcall(struct cpu_user_regs *regs)
-{
-    struct vcpu_info *vcpu_info = this_cpu(vcpu_info);
-    unsigned long pending;
-
-    vcpu_info->evtchn_upcall_pending = 0;
-    pending = xchg(&vcpu_info->evtchn_pending_sel, 0);
-
-    while ( pending )
-    {
-        unsigned int l1 = find_first_set_bit(pending);
-        unsigned long evtchn = xchg(&XEN_shared_info->evtchn_pending[l1], 0);
-
-        __clear_bit(l1, &pending);
-        evtchn &= ~XEN_shared_info->evtchn_mask[l1];
-        while ( evtchn )
-        {
-            unsigned int port = find_first_set_bit(evtchn);
-
-            __clear_bit(port, &evtchn);
-            port += l1 * BITS_PER_LONG;
-
-            if ( pv_console && port == pv_console_evtchn() )
-                pv_console_rx(regs);
-            else if ( pv_shim )
-                pv_shim_inject_evtchn(port);
-        }
-    }
-
-    ack_APIC_irq();
-}
-
-static void init_evtchn(void)
-{
-    static uint8_t evtchn_upcall_vector;
-    int rc;
-
-    if ( !evtchn_upcall_vector )
-        alloc_direct_apic_vector(&evtchn_upcall_vector, xen_evtchn_upcall);
-
-    ASSERT(evtchn_upcall_vector);
-
-    rc = xen_hypercall_set_evtchn_upcall_vector(this_cpu(vcpu_id),
-                                                evtchn_upcall_vector);
-    if ( rc )
-        panic("Unable to set evtchn upcall vector: %d\n", rc);
-
-    /* Trick toolstack to think we are enlightened */
-    {
-        struct xen_hvm_param a = {
-            .domid = DOMID_SELF,
-            .index = HVM_PARAM_CALLBACK_IRQ,
-            .value = 1,
-        };
-
-        BUG_ON(xen_hypercall_hvm_op(HVMOP_set_param, &a));
-    }
-}
-
-void __init hypervisor_setup(void)
-{
-    init_memmap();
-
-    map_shared_info();
-
-    set_vcpu_id();
-    vcpu_info = xzalloc_array(struct vcpu_info, nr_cpu_ids);
-    if ( map_vcpuinfo() )
-    {
-        xfree(vcpu_info);
-        vcpu_info = NULL;
-    }
-    if ( !vcpu_info && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
-    {
-        unsigned int i;
-
-        for ( i = XEN_LEGACY_MAX_VCPUS; i < nr_cpu_ids; i++ )
-            __cpumask_clear_cpu(i, &cpu_present_map);
-        nr_cpu_ids = XEN_LEGACY_MAX_VCPUS;
-        printk(XENLOG_WARNING
-               "unable to map vCPU info, limiting vCPUs to: %u\n",
-               XEN_LEGACY_MAX_VCPUS);
-    }
-
-    init_evtchn();
-}
-
-void hypervisor_ap_setup(void)
-{
-    set_vcpu_id();
-    map_vcpuinfo();
-    init_evtchn();
-}
-
-int hypervisor_alloc_unused_page(mfn_t *mfn)
-{
-    unsigned long m;
-    int rc;
-
-    rc = rangeset_claim_range(mem, 1, &m);
-    if ( !rc )
-        *mfn = _mfn(m);
-
-    return rc;
-}
-
-int hypervisor_free_unused_page(mfn_t mfn)
-{
-    return rangeset_remove_range(mem, mfn_x(mfn), mfn_x(mfn));
-}
-
 uint32_t hypervisor_cpuid_base(void)
 {
     return xen_cpuid_base;
 }
 
-static void ap_resume(void *unused)
-{
-    map_vcpuinfo();
-    init_evtchn();
-}
-
-void hypervisor_resume(void)
-{
-    /* Reset shared info page. */
-    map_shared_info();
-
-    /*
-     * Reset vcpu_info. Just clean the mapped bitmap and try to map the vcpu
-     * area again. On failure to map (when it was previously mapped) panic
-     * since it's impossible to safely shut down running guest vCPUs in order
-     * to meet the new XEN_LEGACY_MAX_VCPUS requirement.
-     */
-    bitmap_zero(vcpu_info_mapped, NR_CPUS);
-    if ( map_vcpuinfo() && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
-        panic("unable to remap vCPU info and vCPUs > legacy limit\n");
-
-    /* Setup event channel upcall vector. */
-    init_evtchn();
-    smp_call_function(ap_resume, NULL, 1);
-
-    if ( pv_console )
-        pv_console_init();
-}
-
 /*
  * Local variables:
  * mode: C
-- 
2.17.1