From nobody Fri Apr 26 11:29:03 2024
From: Christopher Clark
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Wei Liu, Andrew Cooper, Rich Persaud, Jan Beulich, Roger Pau Monné
Date: Wed, 19 Jun 2019 17:30:45 -0700
Message-Id: <20190620003053.21993-2-christopher.w.clark@gmail.com>
In-Reply-To: <20190620003053.21993-1-christopher.w.clark@gmail.com>
References: <20190620003053.21993-1-christopher.w.clark@gmail.com>
Subject: [Xen-devel] [RFC 1/9] x86/guest: code movement to separate Xen detection from guest functions

Move some logic from:
    xen/arch/x86/guest/xen.c
into a new file:
    xen/arch/x86/guest/xen-guest.c

xen.c then contains the functions for basic Xen detection, and xen-guest.c
implements the intended behaviour changes when Xen is running as a guest.

Since CONFIG_XEN_GUEST must currently be defined for any of this code to be
included, making xen-guest.o conditional upon it here works correctly and
avoids further change to it in later patches in the series.

No functional change intended.
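The "conditional upon it" mechanism referred to here is the standard Kbuild idiom: `$(CONFIG_XEN_GUEST)` expands to `y` when the option is enabled, so the object joins the linked `obj-y` list; when the option is unset it lands on an `obj-` list that is never linked. A minimal sketch of the idiom (comments are explanatory, not part of the patch):

```make
# With CONFIG_XEN_GUEST=y this line reads:  obj-y += xen-guest.o   (linked)
# With the option unset it reads:           obj-  += xen-guest.o   (ignored)
obj-$(CONFIG_XEN_GUEST) += xen-guest.o
```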
Signed-off-by: Christopher Clark
---
 xen/arch/x86/guest/Makefile    |   1 +
 xen/arch/x86/guest/xen-guest.c | 301 +++++++++++++++++++++++++++++++++
 xen/arch/x86/guest/xen.c       | 254 ----------------------------
 3 files changed, 302 insertions(+), 254 deletions(-)
 create mode 100644 xen/arch/x86/guest/xen-guest.c

diff --git a/xen/arch/x86/guest/Makefile b/xen/arch/x86/guest/Makefile
index 26fb4b1007..6ddaa3748f 100644
--- a/xen/arch/x86/guest/Makefile
+++ b/xen/arch/x86/guest/Makefile
@@ -1,4 +1,5 @@
 obj-y += hypercall_page.o
 obj-y += xen.o
+obj-$(CONFIG_XEN_GUEST) += xen-guest.o
 
 obj-bin-$(CONFIG_PVH_GUEST) += pvh-boot.init.o
diff --git a/xen/arch/x86/guest/xen-guest.c b/xen/arch/x86/guest/xen-guest.c
new file mode 100644
index 0000000000..65596ab1b1
--- /dev/null
+++ b/xen/arch/x86/guest/xen-guest.c
@@ -0,0 +1,301 @@
+/******************************************************************************
+ * arch/x86/guest/xen-guest.c
+ *
+ * Support for running a single VM with Xen as a guest.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see .
+ *
+ * Copyright (c) 2017 Citrix Systems Ltd.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+bool __read_mostly xen_guest;
+
+static struct rangeset *mem;
+
+DEFINE_PER_CPU(unsigned int, vcpu_id);
+
+static struct vcpu_info *vcpu_info;
+static unsigned long vcpu_info_mapped[BITS_TO_LONGS(NR_CPUS)];
+DEFINE_PER_CPU(struct vcpu_info *, vcpu_info);
+
+static void map_shared_info(void)
+{
+    mfn_t mfn;
+    struct xen_add_to_physmap xatp = {
+        .domid = DOMID_SELF,
+        .space = XENMAPSPACE_shared_info,
+    };
+    unsigned int i;
+    unsigned long rc;
+
+    if ( hypervisor_alloc_unused_page(&mfn) )
+        panic("unable to reserve shared info memory page\n");
+
+    xatp.gpfn = mfn_x(mfn);
+    rc = xen_hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
+    if ( rc )
+        panic("failed to map shared_info page: %ld\n", rc);
+
+    set_fixmap(FIX_XEN_SHARED_INFO, mfn_x(mfn) << PAGE_SHIFT);
+
+    /* Mask all upcalls */
+    for ( i = 0; i < ARRAY_SIZE(XEN_shared_info->evtchn_mask); i++ )
+        write_atomic(&XEN_shared_info->evtchn_mask[i], ~0ul);
+}
+
+static int map_vcpuinfo(void)
+{
+    unsigned int vcpu = this_cpu(vcpu_id);
+    struct vcpu_register_vcpu_info info;
+    int rc;
+
+    if ( !vcpu_info )
+    {
+        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
+        return 0;
+    }
+
+    if ( test_bit(vcpu, vcpu_info_mapped) )
+    {
+        this_cpu(vcpu_info) = &vcpu_info[vcpu];
+        return 0;
+    }
+
+    info.mfn = virt_to_mfn(&vcpu_info[vcpu]);
+    info.offset = (unsigned long)&vcpu_info[vcpu] & ~PAGE_MASK;
+    rc = xen_hypercall_vcpu_op(VCPUOP_register_vcpu_info, vcpu, &info);
+    if ( rc )
+    {
+        BUG_ON(vcpu >= XEN_LEGACY_MAX_VCPUS);
+        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
+    }
+    else
+    {
+        this_cpu(vcpu_info) = &vcpu_info[vcpu];
+        set_bit(vcpu, vcpu_info_mapped);
+    }
+
+    return rc;
+}
+
+static void set_vcpu_id(void)
+{
+    uint32_t cpuid_base, eax, ebx, ecx, edx;
+
+    cpuid_base = hypervisor_cpuid_base();
+
+    ASSERT(cpuid_base);
+
+    /* Fetch vcpu id from cpuid. */
+    cpuid(cpuid_base + 4, &eax, &ebx, &ecx, &edx);
+    if ( eax & XEN_HVM_CPUID_VCPU_ID_PRESENT )
+        this_cpu(vcpu_id) = ebx;
+    else
+        this_cpu(vcpu_id) = smp_processor_id();
+}
+
+static void __init init_memmap(void)
+{
+    unsigned int i;
+
+    mem = rangeset_new(NULL, "host memory map", 0);
+    if ( !mem )
+        panic("failed to allocate PFN usage rangeset\n");
+
+    /*
+     * Mark up to the last memory page (or 4GiB) as RAM. This is done because
+     * Xen doesn't know the position of possible MMIO holes, so at least try to
+     * avoid the know MMIO hole below 4GiB. Note that this is subject to future
+     * discussion and improvements.
+     */
+    if ( rangeset_add_range(mem, 0, max_t(unsigned long, max_page - 1,
+                                          PFN_DOWN(GB(4) - 1))) )
+        panic("unable to add RAM to in-use PFN rangeset\n");
+
+    for ( i = 0; i < e820.nr_map; i++ )
+    {
+        struct e820entry *e = &e820.map[i];
+
+        if ( rangeset_add_range(mem, PFN_DOWN(e->addr),
+                                PFN_UP(e->addr + e->size - 1)) )
+            panic("unable to add range [%#lx, %#lx] to in-use PFN rangeset\n",
+                  PFN_DOWN(e->addr), PFN_UP(e->addr + e->size - 1));
+    }
+}
+
+static void xen_evtchn_upcall(struct cpu_user_regs *regs)
+{
+    struct vcpu_info *vcpu_info = this_cpu(vcpu_info);
+    unsigned long pending;
+
+    vcpu_info->evtchn_upcall_pending = 0;
+    pending = xchg(&vcpu_info->evtchn_pending_sel, 0);
+
+    while ( pending )
+    {
+        unsigned int l1 = find_first_set_bit(pending);
+        unsigned long evtchn = xchg(&XEN_shared_info->evtchn_pending[l1], 0);
+
+        __clear_bit(l1, &pending);
+        evtchn &= ~XEN_shared_info->evtchn_mask[l1];
+        while ( evtchn )
+        {
+            unsigned int port = find_first_set_bit(evtchn);
+
+            __clear_bit(port, &evtchn);
+            port += l1 * BITS_PER_LONG;
+
+            if ( pv_console && port == pv_console_evtchn() )
+                pv_console_rx(regs);
+            else if ( pv_shim )
+                pv_shim_inject_evtchn(port);
+        }
+    }
+
+    ack_APIC_irq();
+}
+
+static void init_evtchn(void)
+{
+    static uint8_t evtchn_upcall_vector;
+    int rc;
+
+    if ( !evtchn_upcall_vector )
+        alloc_direct_apic_vector(&evtchn_upcall_vector, xen_evtchn_upcall);
+
+    ASSERT(evtchn_upcall_vector);
+
+    rc = xen_hypercall_set_evtchn_upcall_vector(this_cpu(vcpu_id),
+                                                evtchn_upcall_vector);
+    if ( rc )
+        panic("Unable to set evtchn upcall vector: %d\n", rc);
+
+    /* Trick toolstack to think we are enlightened */
+    {
+        struct xen_hvm_param a = {
+            .domid = DOMID_SELF,
+            .index = HVM_PARAM_CALLBACK_IRQ,
+            .value = 1,
+        };
+
+        BUG_ON(xen_hypercall_hvm_op(HVMOP_set_param, &a));
+    }
+}
+
+void __init hypervisor_setup(void)
+{
+    init_memmap();
+
+    map_shared_info();
+
+    set_vcpu_id();
+    vcpu_info = xzalloc_array(struct vcpu_info, nr_cpu_ids);
+    if ( map_vcpuinfo() )
+    {
+        xfree(vcpu_info);
+        vcpu_info = NULL;
+    }
+    if ( !vcpu_info && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
+    {
+        unsigned int i;
+
+        for ( i = XEN_LEGACY_MAX_VCPUS; i < nr_cpu_ids; i++ )
+            __cpumask_clear_cpu(i, &cpu_present_map);
+        nr_cpu_ids = XEN_LEGACY_MAX_VCPUS;
+        printk(XENLOG_WARNING
+               "unable to map vCPU info, limiting vCPUs to: %u\n",
+               XEN_LEGACY_MAX_VCPUS);
+    }
+
+    init_evtchn();
+}
+
+void hypervisor_ap_setup(void)
+{
+    set_vcpu_id();
+    map_vcpuinfo();
+    init_evtchn();
+}
+
+int hypervisor_alloc_unused_page(mfn_t *mfn)
+{
+    unsigned long m;
+    int rc;
+
+    rc = rangeset_claim_range(mem, 1, &m);
+    if ( !rc )
+        *mfn = _mfn(m);
+
+    return rc;
+}
+
+int hypervisor_free_unused_page(mfn_t mfn)
+{
+    return rangeset_remove_range(mem, mfn_x(mfn), mfn_x(mfn));
+}
+
+static void ap_resume(void *unused)
+{
+    map_vcpuinfo();
+    init_evtchn();
+}
+
+void hypervisor_resume(void)
+{
+    /* Reset shared info page. */
+    map_shared_info();
+
+    /*
+     * Reset vcpu_info. Just clean the mapped bitmap and try to map the vcpu
+     * area again. On failure to map (when it was previously mapped) panic
+     * since it's impossible to safely shut down running guest vCPUs in order
+     * to meet the new XEN_LEGACY_MAX_VCPUS requirement.
+     */
+    bitmap_zero(vcpu_info_mapped, NR_CPUS);
+    if ( map_vcpuinfo() && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
+        panic("unable to remap vCPU info and vCPUs > legacy limit\n");
+
+    /* Setup event channel upcall vector. */
+    init_evtchn();
+    smp_call_function(ap_resume, NULL, 1);
+
+    if ( pv_console )
+        pv_console_init();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/guest/xen.c b/xen/arch/x86/guest/xen.c
index 7b7a5badab..90d464bdbd 100644
--- a/xen/arch/x86/guest/xen.c
+++ b/xen/arch/x86/guest/xen.c
@@ -22,9 +22,7 @@
 #include
 #include
 #include
-#include
 #include
-#include
 
 #include
 #include
@@ -35,17 +33,8 @@
 #include
 #include
 
-bool __read_mostly xen_guest;
-
 static __read_mostly uint32_t xen_cpuid_base;
 extern char hypercall_page[];
-static struct rangeset *mem;
-
-DEFINE_PER_CPU(unsigned int, vcpu_id);
-
-static struct vcpu_info *vcpu_info;
-static unsigned long vcpu_info_mapped[BITS_TO_LONGS(NR_CPUS)];
-DEFINE_PER_CPU(struct vcpu_info *, vcpu_info);
 
 static void __init find_xen_leaves(void)
 {
@@ -87,254 +76,11 @@ void __init probe_hypervisor(void)
     xen_guest = true;
 }
 
-static void map_shared_info(void)
-{
-    mfn_t mfn;
-    struct xen_add_to_physmap xatp = {
-        .domid = DOMID_SELF,
-        .space = XENMAPSPACE_shared_info,
-    };
-    unsigned int i;
-    unsigned long rc;
-
-    if ( hypervisor_alloc_unused_page(&mfn) )
-        panic("unable to reserve shared info memory page\n");
-
-    xatp.gpfn = mfn_x(mfn);
-    rc = xen_hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
-    if ( rc )
-        panic("failed to map shared_info page: %ld\n", rc);
-
-    set_fixmap(FIX_XEN_SHARED_INFO, mfn_x(mfn) << PAGE_SHIFT);
-
-    /* Mask all upcalls */
-    for ( i = 0; i < ARRAY_SIZE(XEN_shared_info->evtchn_mask); i++ )
-        write_atomic(&XEN_shared_info->evtchn_mask[i], ~0ul);
-}
-
-static int map_vcpuinfo(void)
-{
-    unsigned int vcpu = this_cpu(vcpu_id);
-    struct vcpu_register_vcpu_info info;
-    int rc;
-
-    if ( !vcpu_info )
-    {
-        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
-        return 0;
-    }
-
-    if ( test_bit(vcpu, vcpu_info_mapped) )
-    {
-        this_cpu(vcpu_info) = &vcpu_info[vcpu];
-        return 0;
-    }
-
-    info.mfn = virt_to_mfn(&vcpu_info[vcpu]);
-    info.offset = (unsigned long)&vcpu_info[vcpu] & ~PAGE_MASK;
-    rc = xen_hypercall_vcpu_op(VCPUOP_register_vcpu_info, vcpu, &info);
-    if ( rc )
-    {
-        BUG_ON(vcpu >= XEN_LEGACY_MAX_VCPUS);
-        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
-    }
-    else
-    {
-        this_cpu(vcpu_info) = &vcpu_info[vcpu];
-        set_bit(vcpu, vcpu_info_mapped);
-    }
-
-    return rc;
-}
-
-static void set_vcpu_id(void)
-{
-    uint32_t eax, ebx, ecx, edx;
-
-    ASSERT(xen_cpuid_base);
-
-    /* Fetch vcpu id from cpuid. */
-    cpuid(xen_cpuid_base + 4, &eax, &ebx, &ecx, &edx);
-    if ( eax & XEN_HVM_CPUID_VCPU_ID_PRESENT )
-        this_cpu(vcpu_id) = ebx;
-    else
-        this_cpu(vcpu_id) = smp_processor_id();
-}
-
-static void __init init_memmap(void)
-{
-    unsigned int i;
-
-    mem = rangeset_new(NULL, "host memory map", 0);
-    if ( !mem )
-        panic("failed to allocate PFN usage rangeset\n");
-
-    /*
-     * Mark up to the last memory page (or 4GiB) as RAM. This is done because
-     * Xen doesn't know the position of possible MMIO holes, so at least try to
-     * avoid the know MMIO hole below 4GiB. Note that this is subject to future
-     * discussion and improvements.
-     */
-    if ( rangeset_add_range(mem, 0, max_t(unsigned long, max_page - 1,
-                                          PFN_DOWN(GB(4) - 1))) )
-        panic("unable to add RAM to in-use PFN rangeset\n");
-
-    for ( i = 0; i < e820.nr_map; i++ )
-    {
-        struct e820entry *e = &e820.map[i];
-
-        if ( rangeset_add_range(mem, PFN_DOWN(e->addr),
-                                PFN_UP(e->addr + e->size - 1)) )
-            panic("unable to add range [%#lx, %#lx] to in-use PFN rangeset\n",
-                  PFN_DOWN(e->addr), PFN_UP(e->addr + e->size - 1));
-    }
-}
-
-static void xen_evtchn_upcall(struct cpu_user_regs *regs)
-{
-    struct vcpu_info *vcpu_info = this_cpu(vcpu_info);
-    unsigned long pending;
-
-    vcpu_info->evtchn_upcall_pending = 0;
-    pending = xchg(&vcpu_info->evtchn_pending_sel, 0);
-
-    while ( pending )
-    {
-        unsigned int l1 = find_first_set_bit(pending);
-        unsigned long evtchn = xchg(&XEN_shared_info->evtchn_pending[l1], 0);
-
-        __clear_bit(l1, &pending);
-        evtchn &= ~XEN_shared_info->evtchn_mask[l1];
-        while ( evtchn )
-        {
-            unsigned int port = find_first_set_bit(evtchn);
-
-            __clear_bit(port, &evtchn);
-            port += l1 * BITS_PER_LONG;
-
-            if ( pv_console && port == pv_console_evtchn() )
-                pv_console_rx(regs);
-            else if ( pv_shim )
-                pv_shim_inject_evtchn(port);
-        }
-    }
-
-    ack_APIC_irq();
-}
-
-static void init_evtchn(void)
-{
-    static uint8_t evtchn_upcall_vector;
-    int rc;
-
-    if ( !evtchn_upcall_vector )
-        alloc_direct_apic_vector(&evtchn_upcall_vector, xen_evtchn_upcall);
-
-    ASSERT(evtchn_upcall_vector);
-
-    rc = xen_hypercall_set_evtchn_upcall_vector(this_cpu(vcpu_id),
-                                                evtchn_upcall_vector);
-    if ( rc )
-        panic("Unable to set evtchn upcall vector: %d\n", rc);
-
-    /* Trick toolstack to think we are enlightened */
-    {
-        struct xen_hvm_param a = {
-            .domid = DOMID_SELF,
-            .index = HVM_PARAM_CALLBACK_IRQ,
-            .value = 1,
-        };
-
-        BUG_ON(xen_hypercall_hvm_op(HVMOP_set_param, &a));
-    }
-}
-
-void __init hypervisor_setup(void)
-{
-    init_memmap();
-
-    map_shared_info();
-
-    set_vcpu_id();
-    vcpu_info = xzalloc_array(struct vcpu_info, nr_cpu_ids);
-    if ( map_vcpuinfo() )
-    {
-        xfree(vcpu_info);
-        vcpu_info = NULL;
-    }
-    if ( !vcpu_info && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
-    {
-        unsigned int i;
-
-        for ( i = XEN_LEGACY_MAX_VCPUS; i < nr_cpu_ids; i++ )
-            __cpumask_clear_cpu(i, &cpu_present_map);
-        nr_cpu_ids = XEN_LEGACY_MAX_VCPUS;
-        printk(XENLOG_WARNING
-               "unable to map vCPU info, limiting vCPUs to: %u\n",
-               XEN_LEGACY_MAX_VCPUS);
-    }
-
-    init_evtchn();
-}
-
-void hypervisor_ap_setup(void)
-{
-    set_vcpu_id();
-    map_vcpuinfo();
-    init_evtchn();
-}
-
-int hypervisor_alloc_unused_page(mfn_t *mfn)
-{
-    unsigned long m;
-    int rc;
-
-    rc = rangeset_claim_range(mem, 1, &m);
-    if ( !rc )
-        *mfn = _mfn(m);
-
-    return rc;
-}
-
-int hypervisor_free_unused_page(mfn_t mfn)
-{
-    return rangeset_remove_range(mem, mfn_x(mfn), mfn_x(mfn));
-}
-
 uint32_t hypervisor_cpuid_base(void)
 {
     return xen_cpuid_base;
 }
 
-static void ap_resume(void *unused)
-{
-    map_vcpuinfo();
-    init_evtchn();
-}
-
-void hypervisor_resume(void)
-{
-    /* Reset shared info page. */
-    map_shared_info();
-
-    /*
-     * Reset vcpu_info. Just clean the mapped bitmap and try to map the vcpu
-     * area again. On failure to map (when it was previously mapped) panic
-     * since it's impossible to safely shut down running guest vCPUs in order
-     * to meet the new XEN_LEGACY_MAX_VCPUS requirement.
-     */
-    bitmap_zero(vcpu_info_mapped, NR_CPUS);
-    if ( map_vcpuinfo() && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
-        panic("unable to remap vCPU info and vCPUs > legacy limit\n");
-
-    /* Setup event channel upcall vector. */
-    init_evtchn();
-    smp_call_function(ap_resume, NULL, 1);
-
-    if ( pv_console )
-        pv_console_init();
-}
-
 /*
  * Local variables:
  * mode: C
-- 
2.17.1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From nobody Fri Apr 26 11:29:03 2024
From: Christopher Clark
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Wei Liu, Andrew Cooper, Rich Persaud, Jan Beulich, Roger Pau Monné
Date: Wed, 19 Jun 2019 17:30:46 -0700
Message-Id: <20190620003053.21993-3-christopher.w.clark@gmail.com>
In-Reply-To: <20190620003053.21993-1-christopher.w.clark@gmail.com>
References: <20190620003053.21993-1-christopher.w.clark@gmail.com>
Subject: [Xen-devel] [RFC 2/9] x86: Introduce Xen detection as separate logic from Xen Guest support.

Add Kconfig option XEN_DETECT for: "Support for Xen detecting when it is
running under Xen". If running under Xen is detected, a boot message will
indicate the hypervisor version obtained from cpuid.

Update the XEN_GUEST Kconfig option text to reflect its current purpose:
"Common PVH_GUEST and PV_SHIM logic for Xen as a Xen-aware guest".
Update calibrate_APIC_clock to use Xen-specific init if nested Xen is
detected, even if not operating as a PV shim or booted as PVH.

This work is a precursor to adding the interface for support of PV drivers
on nested Xen.

Signed-off-by: Christopher Clark
---
 xen/arch/x86/Kconfig            | 11 ++++++++++-
 xen/arch/x86/Makefile           |  2 +-
 xen/arch/x86/apic.c             |  4 ++--
 xen/arch/x86/guest/Makefile     |  2 +-
 xen/arch/x86/guest/xen-guest.c  | 10 ++++++++++
 xen/arch/x86/guest/xen.c        | 23 ++++++++++++++++++-----
 xen/arch/x86/setup.c            |  3 +++
 xen/include/asm-x86/guest/xen.h | 26 +++++++++++++++++++++-----
 8 files changed, 66 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index f502d765ba..31e5ffd2f2 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -161,11 +161,20 @@ config XEN_ALIGN_2M
 
 endchoice
 
+config XEN_DETECT
+	def_bool y
+	prompt "Xen Detection"
+	---help---
+	  Support for Xen detecting when it is running under Xen.
+
+	  If unsure, say Y.
+
 config XEN_GUEST
 	def_bool n
 	prompt "Xen Guest"
+	depends on XEN_DETECT
 	---help---
-	  Support for Xen detecting when it is running under Xen.
+	  Common PVH_GUEST and PV_SHIM logic for Xen as a Xen-aware guest.
 
 	  If unsure, say N.
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 8a8d8f060f..763077b0a3 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -1,7 +1,7 @@
 subdir-y += acpi
 subdir-y += cpu
 subdir-y += genapic
-subdir-$(CONFIG_XEN_GUEST) += guest
+subdir-$(CONFIG_XEN_DETECT) += guest
 subdir-$(CONFIG_HVM) += hvm
 subdir-y += mm
 subdir-$(CONFIG_XENOPROF) += oprofile
diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 9c3c998d34..5949a95d58 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -1247,7 +1247,7 @@ static int __init calibrate_APIC_clock(void)
      */
     __setup_APIC_LVTT(1000000000);
 
-    if ( !xen_guest )
+    if ( !xen_detected )
         /*
          * The timer chip counts down to zero. Let's wait
          * for a wraparound to start exact measurement:
@@ -1267,7 +1267,7 @@
      * Let's wait LOOPS ticks:
      */
     for (i = 0; i < LOOPS; i++)
-        if ( !xen_guest )
+        if ( !xen_detected )
             wait_8254_wraparound();
         else
             wait_tick_pvh();
diff --git a/xen/arch/x86/guest/Makefile b/xen/arch/x86/guest/Makefile
index 6ddaa3748f..d3a7844e61 100644
--- a/xen/arch/x86/guest/Makefile
+++ b/xen/arch/x86/guest/Makefile
@@ -1,4 +1,4 @@
-obj-y += hypercall_page.o
+obj-$(CONFIG_XEN_GUEST) += hypercall_page.o
 obj-y += xen.o
 obj-$(CONFIG_XEN_GUEST) += xen-guest.o
 
diff --git a/xen/arch/x86/guest/xen-guest.c b/xen/arch/x86/guest/xen-guest.c
index 65596ab1b1..b6d89e02a3 100644
--- a/xen/arch/x86/guest/xen-guest.c
+++ b/xen/arch/x86/guest/xen-guest.c
@@ -35,6 +35,8 @@
 #include
 #include
 
+extern char hypercall_page[];
+
 bool __read_mostly xen_guest;
 
 static struct rangeset *mem;
@@ -45,6 +47,14 @@ static struct vcpu_info *vcpu_info;
 static unsigned long vcpu_info_mapped[BITS_TO_LONGS(NR_CPUS)];
 DEFINE_PER_CPU(struct vcpu_info *, vcpu_info);
 
+void xen_guest_enable(void)
+{
+    /* Fill the hypercall page. */
+    wrmsrl(cpuid_ebx(hypervisor_cpuid_base() + 2), __pa(hypercall_page));
+
+    xen_guest = true;
+}
+
 static void map_shared_info(void)
 {
     mfn_t mfn;
diff --git a/xen/arch/x86/guest/xen.c b/xen/arch/x86/guest/xen.c
index 90d464bdbd..b0b603a11a 100644
--- a/xen/arch/x86/guest/xen.c
+++ b/xen/arch/x86/guest/xen.c
@@ -33,8 +33,10 @@
 #include
 #include
 
+/* xen_detected: Xen running on Xen detected */
+bool __read_mostly xen_detected;
+
 static __read_mostly uint32_t xen_cpuid_base;
-extern char hypercall_page[];
 
 static void __init find_xen_leaves(void)
 {
@@ -58,7 +60,7 @@ static void __init find_xen_leaves(void)
 
 void __init probe_hypervisor(void)
 {
-    if ( xen_guest )
+    if ( xen_detected )
         return;
 
     /* Too early to use cpu_has_hypervisor */
@@ -70,10 +72,21 @@ void __init probe_hypervisor(void)
     if ( !xen_cpuid_base )
         return;
 
-    /* Fill the hypercall page. */
-    wrmsrl(cpuid_ebx(xen_cpuid_base + 2), __pa(hypercall_page));
+    xen_detected = true;
+
+    xen_guest_enable();
+}
+
+void __init hypervisor_print_info(void)
+{
+    uint32_t eax, ebx, ecx, edx;
+    unsigned int major, minor;
+
+    cpuid(xen_cpuid_base + 1, &eax, &ebx, &ecx, &edx);
 
-    xen_guest = true;
+    major = eax >> 16;
+    minor = eax & 0xffff;
+    printk("Nested Xen version %u.%u.\n", major, minor);
 }
 
 uint32_t hypervisor_cpuid_base(void)
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index d2011910fa..58f499edaf 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -774,6 +774,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     ehci_dbgp_init();
     console_init_preirq();
 
+    if ( xen_detected )
+        hypervisor_print_info();
+
     if ( pvh_boot )
         pvh_print_info();
 
diff --git a/xen/include/asm-x86/guest/xen.h b/xen/include/asm-x86/guest/xen.h
index 7e04e4a7ab..27c854ab8a 100644
--- a/xen/include/asm-x86/guest/xen.h
+++ b/xen/include/asm-x86/guest/xen.h
@@ -24,20 +24,37 @@
 #include
 #include
 
-#define XEN_shared_info ((struct shared_info *)fix_to_virt(FIX_XEN_SHARED_INFO))
+#ifdef CONFIG_XEN_DETECT
+
+extern bool xen_detected;
+
+void probe_hypervisor(void);
+void hypervisor_print_info(void);
+uint32_t hypervisor_cpuid_base(void);
+
+#else
+
+#define xen_detected 0
+
+static inline void probe_hypervisor(void) {}
+static inline void hypervisor_print_info(void)
+{
+    ASSERT_UNREACHABLE();
+}
+
+#endif /* CONFIG_XEN_DETECT */
 
 #ifdef CONFIG_XEN_GUEST
+#define XEN_shared_info ((struct shared_info *)fix_to_virt(FIX_XEN_SHARED_INFO))
 
 extern bool xen_guest;
 extern bool pv_console;
 
-void probe_hypervisor(void);
 void hypervisor_setup(void);
 void hypervisor_ap_setup(void);
 int hypervisor_alloc_unused_page(mfn_t *mfn);
 int hypervisor_free_unused_page(mfn_t mfn);
-uint32_t hypervisor_cpuid_base(void);
 void hypervisor_resume(void);
+void xen_guest_enable(void);
 
 DECLARE_PER_CPU(unsigned int, vcpu_id);
 DECLARE_PER_CPU(struct vcpu_info *, vcpu_info);
@@ -47,8 +64,6 @@ DECLARE_PER_CPU(struct vcpu_info *, vcpu_info);
 #define xen_guest 0
 #define pv_console 0
 
-static inline void probe_hypervisor(void) {}
-
 static inline void hypervisor_setup(void)
 {
     ASSERT_UNREACHABLE();
@@ -57,6 +72,7 @@ static inline void hypervisor_ap_setup(void)
 {
     ASSERT_UNREACHABLE();
 }
+static inline void xen_guest_enable(void) {}
 
 #endif /* CONFIG_XEN_GUEST */
 #endif /* __X86_GUEST_XEN_H__ */
-- 
2.17.1

From nobody Fri Apr 26 11:29:03 2024
From: Christopher Clark
To: xen-devel@lists.xenproject.org
Date: Wed, 19 Jun 2019 17:30:47 -0700
Message-Id: <20190620003053.21993-4-christopher.w.clark@gmail.com>
Subject: [Xen-devel] [RFC 3/9] x86/nested: add nested_xen_version hypercall
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Rich Persaud, Tim Deegan, Julien Grall, Jan Beulich, Roger Pau Monné

Provides proxying to the host hypervisor for the XENVER_version and
XENVER_get_features ops.

The nested PV interface is only enabled when Xen is running as neither the
PV shim nor a PVH-booted guest, since the initialization performed within
the hypervisor in those cases - i.e. as a Xen guest - claims resources that
are normally managed by the control domain.

This nested hypercall only permits access from the control domain. The XSM
policy hook implementation is deferred to a subsequent commit.
Signed-off-by: Christopher Clark
---
 xen/arch/x86/Kconfig                  | 22 +++++++
 xen/arch/x86/guest/Makefile           |  5 +-
 xen/arch/x86/guest/hypercall_page.S   |  1 +
 xen/arch/x86/guest/xen-nested.c       | 82 +++++++++++++++++++++++++++
 xen/arch/x86/guest/xen.c              |  5 +-
 xen/arch/x86/hypercall.c              |  3 +
 xen/arch/x86/pv/hypercall.c           |  3 +
 xen/include/asm-x86/guest/hypercall.h |  7 ++-
 xen/include/asm-x86/guest/xen.h       | 10 ++++
 xen/include/public/xen.h              |  1 +
 xen/include/xen/hypercall.h           |  6 ++
 11 files changed, 142 insertions(+), 3 deletions(-)
 create mode 100644 xen/arch/x86/guest/xen-nested.c

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 31e5ffd2f2..e31e8d3434 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -207,6 +207,28 @@ config PV_SHIM_EXCLUSIVE
 	  option is only intended for use when building a dedicated PV Shim
 	  firmware, and will not function correctly in other scenarios.

+	  If unsure, say N.
+
+config XEN_NESTED
+	bool "Xen PV driver interface for nested Xen" if EXPERT = "y"
+	depends on XEN_DETECT
+	---help---
+	  Enables a second PV driver interface in the hypervisor to support running
+	  two sets of PV drivers within a single privileged guest (eg. guest dom0)
+	  of a system running Xen under Xen:
+
+	  1) host set: frontends to access devices provided by lower hypervisor
+	  2) guest set: backends to support existing PV drivers in nested guest VMs
+
+	  This interface supports the host set of drivers and performs proxying of a
+	  limited set of hypercall operations from the guest to the host hypervisor.
+
+	  This feature is for the guest hypervisor and is transparent to the
+	  host hypervisor. Guest VMs of the guest hypervisor use the standard
+	  PV driver interfaces and unmodified drivers.
+
+	  Feature is also known as "The Xen-Blanket", presented at Eurosys 2012.
+
+	  If unsure, say N.
endmenu

diff --git a/xen/arch/x86/guest/Makefile b/xen/arch/x86/guest/Makefile
index d3a7844e61..6d8b0186d4 100644
--- a/xen/arch/x86/guest/Makefile
+++ b/xen/arch/x86/guest/Makefile
@@ -1,5 +1,8 @@
-obj-$(CONFIG_XEN_GUEST) += hypercall_page.o
+ifneq ($(filter y,$(CONFIG_XEN_GUEST) $(CONFIG_XEN_NESTED) $(CONFIG_PVH_GUEST)),)
+obj-y += hypercall_page.o
+endif
 obj-y += xen.o
 obj-$(CONFIG_XEN_GUEST) += xen-guest.o
+obj-$(CONFIG_XEN_NESTED) += xen-nested.o

 obj-bin-$(CONFIG_PVH_GUEST) += pvh-boot.init.o
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 6485e9150e..2b1e35803a 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -60,6 +60,7 @@ DECLARE_HYPERCALL(domctl)
 DECLARE_HYPERCALL(kexec_op)
 DECLARE_HYPERCALL(argo_op)
 DECLARE_HYPERCALL(xenpmu_op)
+DECLARE_HYPERCALL(nested_xen_version)

 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
new file mode 100644
index 0000000000..744592aa0c
--- /dev/null
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -0,0 +1,82 @@
+/*
+ * arch/x86/guest/xen-nested.c
+ *
+ * Hypercall implementations for nested PV drivers interface.
+ *
+ * Copyright (c) 2019 Star Lab Corp
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#include
+#include
+
+extern char hypercall_page[];
+
+/* xen_nested: support for nested PV interface enabled */
+static bool __read_mostly xen_nested;
+
+void xen_nested_enable(void)
+{
+    /* Fill the hypercall page. */
+    wrmsrl(cpuid_ebx(hypervisor_cpuid_base() + 2), __pa(hypercall_page));
+
+    xen_nested = true;
+}
+
+long do_nested_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    /* FIXME: apply XSM check here */
+    if ( !is_control_domain(current->domain) )
+        return -EPERM;
+
+    gprintk(XENLOG_DEBUG, "Nested xen_version: %d.\n", cmd);
+
+    switch ( cmd )
+    {
+    case XENVER_version:
+        return xen_hypercall_xen_version(XENVER_version, 0);
+
+    case XENVER_get_features:
+    {
+        xen_feature_info_t fi;
+
+        if ( copy_from_guest(&fi, arg, 1) )
+            return -EFAULT;
+
+        ret = xen_hypercall_xen_version(XENVER_get_features, &fi);
+        if ( ret )
+            return ret;
+
+        if ( __copy_to_guest(arg, &fi, 1) )
+            return -EFAULT;
+
+        return 0;
+    }
+
+    default:
+        gprintk(XENLOG_ERR, "Nested xen_version op %d not implemented.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+}
diff --git a/xen/arch/x86/guest/xen.c b/xen/arch/x86/guest/xen.c
index b0b603a11a..78a5f40b22 100644
--- a/xen/arch/x86/guest/xen.c
+++ b/xen/arch/x86/guest/xen.c
@@ -74,7 +74,10 @@ void __init probe_hypervisor(void)

     xen_detected = true;

-    xen_guest_enable();
+    if ( pv_shim || pvh_boot )
+        xen_guest_enable();
+    else
+        xen_nested_enable();
 }

 void __init hypervisor_print_info(void)
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index d483dbaa6b..b22f0ca65a 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -72,6 +72,9 @@ const
hypercall_args_t hypercall_args_table[NR_hypercalls] =
 #ifdef CONFIG_HVM
     ARGS(hvm_op, 2),
     ARGS(dm_op, 3),
+#endif
+#ifdef CONFIG_XEN_NESTED
+    ARGS(nested_xen_version, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index 0c84c0b3a0..1e00d07273 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -83,6 +83,9 @@ const hypercall_table_t pv_hypercall_table[] = {
 #ifdef CONFIG_HVM
     HYPERCALL(hvm_op),
     COMPAT_CALL(dm_op),
+#endif
+#ifdef CONFIG_XEN_NESTED
+    HYPERCALL(nested_xen_version),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/asm-x86/guest/hypercall.h b/xen/include/asm-x86/guest/hypercall.h
index d548816b30..86e11dd1d1 100644
--- a/xen/include/asm-x86/guest/hypercall.h
+++ b/xen/include/asm-x86/guest/hypercall.h
@@ -19,7 +19,7 @@
 #ifndef __X86_XEN_HYPERCALL_H__
 #define __X86_XEN_HYPERCALL_H__

-#ifdef CONFIG_XEN_GUEST
+#if defined(CONFIG_XEN_GUEST) || defined (CONFIG_XEN_NESTED)

 #include

@@ -123,6 +123,11 @@ static inline long xen_hypercall_hvm_op(unsigned int op, void *arg)
     return _hypercall64_2(long, __HYPERVISOR_hvm_op, op, arg);
 }

+static inline long xen_hypercall_xen_version(unsigned int op, void *arg)
+{
+    return _hypercall64_2(long, __HYPERVISOR_xen_version, op, arg);
+}
+
 /*
  * Higher level hypercall helpers
  */
diff --git a/xen/include/asm-x86/guest/xen.h b/xen/include/asm-x86/guest/xen.h
index 27c854ab8a..802aee5edb 100644
--- a/xen/include/asm-x86/guest/xen.h
+++ b/xen/include/asm-x86/guest/xen.h
@@ -43,6 +43,16 @@ static inline void hypervisor_print_info(void) {

 #endif /* CONFIG_XEN_DETECT */

+#ifdef CONFIG_XEN_NESTED
+
+void xen_nested_enable(void);
+
+#else
+
+static inline void xen_nested_enable(void) {}
+
+#endif /* CONFIG_XEN_NESTED */
+
 #ifdef CONFIG_XEN_GUEST
 #define XEN_shared_info ((struct shared_info *)fix_to_virt(FIX_XEN_SHARED_INFO))

diff --git a/xen/include/public/xen.h
b/xen/include/public/xen.h
index cb2917e74b..2f5ac5eedc 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -121,6 +121,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_argo_op              39
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
+#define __HYPERVISOR_nested_xen_version   42

 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index fc00a67448..15194002d6 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -150,6 +150,12 @@ do_dm_op(
     unsigned int nr_bufs,
     XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);

+#ifdef CONFIG_XEN_NESTED
+extern long do_nested_xen_version(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
+#endif
+
 #ifdef CONFIG_COMPAT

 extern int
--
2.17.1
From: Christopher Clark
To: xen-devel@lists.xenproject.org
Date: Wed, 19 Jun 2019 17:30:48 -0700
Message-Id: <20190620003053.21993-5-christopher.w.clark@gmail.com>
In-Reply-To: <20190620003053.21993-1-christopher.w.clark@gmail.com>
References:
<20190620003053.21993-1-christopher.w.clark@gmail.com>
Subject: [Xen-devel] [RFC 4/9] XSM: Add hook for nested xen version op; revises non-nested version op
Cc: Juergen Gross, Wei Liu, Andrew Cooper, Ian Jackson, Rich Persaud, Jan Beulich, Daniel De Graaf, Roger Pau Monné

Expand XSM control to the full set of Xen version ops, to allow granular
control over which ops a domain is allowed to issue in the nested case.

Applies const to the args of xsm_default_action.

Signed-off-by: Christopher Clark
---
 tools/flask/policy/modules/dom0.te           |  7 ++-
 tools/flask/policy/modules/guest_features.te |  5 +-
 tools/flask/policy/modules/xen.te            |  3 ++
 tools/flask/policy/policy/initial_sids       |  3 ++
 xen/arch/x86/guest/xen-nested.c              |  6 +--
 xen/include/xsm/dummy.h                      | 12 ++++-
 xen/include/xsm/xsm.h                        | 13 ++++++
 xen/xsm/dummy.c                              |  3 ++
 xen/xsm/flask/hooks.c                        | 49 ++++++++++++++------
 xen/xsm/flask/policy/access_vectors          |  6 +++
 xen/xsm/flask/policy/initial_sids            |  1 +
 11 files changed, 86 insertions(+), 22 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 9970f9dc08..9ed7ccb57b 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -22,9 +22,9 @@ allow dom0_t xen_t:xen2 {
 # Allow dom0 to use all XENVER_ subops that have checks.
 # Note that dom0 is part of domain_type so this has duplicates.
allow dom0_t xen_t:version { - xen_extraversion xen_compile_info xen_capabilities + xen_version xen_extraversion xen_compile_info xen_capabilities xen_changeset xen_pagesize xen_guest_handle xen_commandline - xen_build_id + xen_build_id xen_get_features xen_platform_parameters }; =20 allow dom0_t xen_t:mmu memorymap; @@ -43,6 +43,9 @@ allow dom0_t dom0_t:domain2 { }; allow dom0_t dom0_t:resource { add remove }; =20 +# Allow dom0 to communicate with a nested Xen hypervisor +allow dom0_t nestedxen_t:version { xen_version xen_get_features }; + # These permissions allow using the FLASK security server to compute access # checks locally, which could be used by a domain or service (such as xens= tore) # that does not have its own security server to make access decisions base= d on diff --git a/tools/flask/policy/modules/guest_features.te b/tools/flask/pol= icy/modules/guest_features.te index 2797a22761..baade15f2e 100644 --- a/tools/flask/policy/modules/guest_features.te +++ b/tools/flask/policy/modules/guest_features.te @@ -21,8 +21,9 @@ if (guest_writeconsole) { =20 # For normal guests, allow all queries except XENVER_commandline. allow domain_type xen_t:version { - xen_extraversion xen_compile_info xen_capabilities - xen_changeset xen_pagesize xen_guest_handle + xen_version xen_extraversion xen_compile_info xen_capabilities + xen_changeset xen_pagesize xen_guest_handle xen_get_features + xen_platform_parameters }; =20 # Version queries don't need auditing when denied. 
They can be diff --git a/tools/flask/policy/modules/xen.te b/tools/flask/policy/modules= /xen.te index 3dbf93d2b8..fbd82334fd 100644 --- a/tools/flask/policy/modules/xen.te +++ b/tools/flask/policy/modules/xen.te @@ -26,6 +26,9 @@ attribute mls_priv; # The hypervisor itself type xen_t, xen_type, mls_priv; =20 +# A nested Xen hypervisor, if any +type nestedxen_t, xen_type; + # Domain 0 declare_singleton_domain(dom0_t, mls_priv); =20 diff --git a/tools/flask/policy/policy/initial_sids b/tools/flask/policy/po= licy/initial_sids index 6b7b7eff21..50b648df3b 100644 --- a/tools/flask/policy/policy/initial_sids +++ b/tools/flask/policy/policy/initial_sids @@ -16,3 +16,6 @@ sid device gen_context(system_u:object_r:device_t,s0) # Initial SIDs used by the toolstack for domains without defined labels sid domU gen_context(system_u:system_r:domU_t,s0) sid domDM gen_context(system_u:system_r:dm_dom_t,s0) + +# Initial SID for nested Xen on Xen +sid nestedxen gen_context(system_u:system_r:nestedxen_t,s0) diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-neste= d.c index 744592aa0c..fcfa5e1087 100644 --- a/xen/arch/x86/guest/xen-nested.c +++ b/xen/arch/x86/guest/xen-nested.c @@ -47,9 +47,9 @@ long do_nested_xen_version(int cmd, XEN_GUEST_HANDLE_PARA= M(void) arg) if ( !xen_nested ) return -ENOSYS; =20 - /* FIXME: apply XSM check here */ - if ( !is_control_domain(current->domain) ) - return -EPERM; + ret =3D xsm_nested_xen_version(XSM_PRIV, current->domain, cmd); + if ( ret ) + return ret; =20 gprintk(XENLOG_DEBUG, "Nested xen_version: %d.\n", cmd); =20 diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h index 01d2814fed..8011bf2cb4 100644 --- a/xen/include/xsm/dummy.h +++ b/xen/include/xsm/dummy.h @@ -69,7 +69,7 @@ void __xsm_action_mismatch_detected(void); #endif /* CONFIG_XSM */ =20 static always_inline int xsm_default_action( - xsm_default_t action, struct domain *src, struct domain *target) + xsm_default_t action, const struct domain *src, const 
struct domain *t= arget) { switch ( action ) { case XSM_HOOK: @@ -739,6 +739,16 @@ static XSM_INLINE int xsm_argo_send(const struct domai= n *d, =20 #endif /* CONFIG_ARGO */ =20 +#ifdef CONFIG_XEN_NESTED +static XSM_INLINE int xsm_nested_xen_version(XSM_DEFAULT_ARG + const struct domain *d, + unsigned int cmd) +{ + XSM_ASSERT_ACTION(XSM_PRIV); + return xsm_default_action(action, d, NULL); +} +#endif + #include static XSM_INLINE int xsm_xen_version (XSM_DEFAULT_ARG uint32_t op) { diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h index b6141f6ab1..96044cb55a 100644 --- a/xen/include/xsm/xsm.h +++ b/xen/include/xsm/xsm.h @@ -187,6 +187,9 @@ struct xsm_operations { int (*argo_register_any_source) (const struct domain *d); int (*argo_send) (const struct domain *d, const struct domain *t); #endif +#ifdef CONFIG_XEN_NESTED + int (*nested_xen_version) (const struct domain *d, unsigned int cmd); +#endif }; =20 #ifdef CONFIG_XSM @@ -723,6 +726,16 @@ static inline int xsm_argo_send(const struct domain *d= , const struct domain *t) =20 #endif /* CONFIG_ARGO */ =20 +#ifdef CONFIG_XEN_NESTED +static inline int xsm_nested_xen_version(xsm_default_t def, + const struct domain *d, + unsigned int cmd) +{ + return xsm_ops->nested_xen_version(d, cmd); +} + +#endif /* CONFIG_XEN_NESTED */ + #endif /* XSM_NO_WRAPPERS */ =20 #ifdef CONFIG_MULTIBOOT diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c index c9a566f2b5..ed0a4b0691 100644 --- a/xen/xsm/dummy.c +++ b/xen/xsm/dummy.c @@ -157,4 +157,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops) set_to_dummy_if_null(ops, argo_register_any_source); set_to_dummy_if_null(ops, argo_send); #endif +#ifdef CONFIG_XEN_NESTED + set_to_dummy_if_null(ops, nested_xen_version); +#endif } diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c index a7d690ac3c..2835279fe7 100644 --- a/xen/xsm/flask/hooks.c +++ b/xen/xsm/flask/hooks.c @@ -1666,46 +1666,56 @@ static int flask_dm_op(struct domain *d) =20 #endif /* CONFIG_X86 */ =20 -static 
int flask_xen_version (uint32_t op) +static int domain_has_xen_version (const struct domain *d, u32 tsid, + uint32_t op) { - u32 dsid =3D domain_sid(current->domain); + u32 dsid =3D domain_sid(d); =20 switch ( op ) { case XENVER_version: - case XENVER_platform_parameters: - case XENVER_get_features: - /* These sub-ops ignore the permission checks and return data. */ - return 0; + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, + VERSION__XEN_VERSION, NULL); case XENVER_extraversion: - return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION, + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, VERSION__XEN_EXTRAVERSION, NULL); case XENVER_compile_info: - return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION, + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, VERSION__XEN_COMPILE_INFO, NULL); case XENVER_capabilities: - return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION, + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, VERSION__XEN_CAPABILITIES, NULL); case XENVER_changeset: - return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION, + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, VERSION__XEN_CHANGESET, NULL); + case XENVER_platform_parameters: + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, + VERSION__XEN_PLATFORM_PARAMETERS, NULL); + case XENVER_get_features: + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, + VERSION__XEN_GET_FEATURES, NULL); case XENVER_pagesize: - return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION, + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, VERSION__XEN_PAGESIZE, NULL); case XENVER_guest_handle: - return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION, + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, VERSION__XEN_GUEST_HANDLE, NULL); case XENVER_commandline: - return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION, + return avc_has_perm(dsid, tsid, SECCLASS_VERSION, VERSION__XEN_COMMANDLINE, NULL); case XENVER_build_id: - return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION, + 
return avc_has_perm(dsid, tsid, SECCLASS_VERSION, VERSION__XEN_BUILD_ID, NULL); default: return -EPERM; } } =20 +static int flask_xen_version (uint32_t op) +{ + return domain_has_xen_version(current->domain, SECINITSID_XEN, op); +} + static int flask_domain_resource_map(struct domain *d) { return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__RESOURCE_MAP); @@ -1738,6 +1748,14 @@ static int flask_argo_send(const struct domain *d, c= onst struct domain *t) =20 #endif =20 +#ifdef CONFIG_XEN_NESTED +static int flask_nested_xen_version(const struct domain *d, unsigned int o= p) +{ + return domain_has_xen_version(d, SECINITSID_NESTEDXEN, op); +} + +#endif + long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op); int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op); =20 @@ -1877,6 +1895,9 @@ static struct xsm_operations flask_ops =3D { .argo_register_any_source =3D flask_argo_register_any_source, .argo_send =3D flask_argo_send, #endif +#ifdef CONFIG_XEN_NESTED + .nested_xen_version =3D flask_nested_xen_version, +#endif }; =20 void __init flask_init(const void *policy_buffer, size_t policy_size) diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/acc= ess_vectors index 194d743a71..7e0d5aa7bf 100644 --- a/xen/xsm/flask/policy/access_vectors +++ b/xen/xsm/flask/policy/access_vectors @@ -510,6 +510,8 @@ class security # class version { +# Basic information + xen_version # Extra informations (-unstable). xen_extraversion # Compile information of the hypervisor. @@ -518,6 +520,10 @@ class version xen_capabilities # Source code changeset. xen_changeset +# Hypervisor virt start + xen_platform_parameters +# Query for bitmap of platform features + xen_get_features # Page size the hypervisor uses. xen_pagesize # An value that the control stack can choose. 
diff --git a/xen/xsm/flask/policy/initial_sids b/xen/xsm/flask/policy/initial_sids
index 7eca70d339..c684cda873 100644
--- a/xen/xsm/flask/policy/initial_sids
+++ b/xen/xsm/flask/policy/initial_sids
@@ -15,4 +15,5 @@ sid irq
 sid device
 sid domU
 sid domDM
+sid nestedxen
 # FLASK
--
2.17.1
[50.53.74.115]) by smtp.gmail.com with ESMTPSA id e188sm22579016ioa.3.2019.06.19.17.31.22 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 19 Jun 2019 17:31:23 -0700 (PDT) X-Inumbo-ID: bc8c530a-92f2-11e9-8980-bc764e045a96 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=u5ozOMfbSbMk3JbLlaojba34kLKjKijBWc0CEO34nXY=; b=sComS17bxzMmbpt5Hx4GYNzMVvgCRLTjxdVBUJhNZsMOP/vh9IxmUsYBaUcDphdyLq eOOaRiYOhYVBDVv0mppotFDZA7+0mUe4RhLBCGUelaZVXqZEqwp1oa7TNTskltbNdUYL AK9FJ/MjPRy9zRRqqavxtiQsstOsHa4S3c1Slo4f8+fASb+fItjdZ8JfUk2TDeYlkpIA XeGPWBGJdmz7xFd7n2Mt3xuhuLxPHFYZmQ9f6qV1/fhpq6+YUOQo1KqlDSGoAOEPdKzk zoY2qvNY77G3bOSfQcnkWIY0szK6ok5jpofZ4WTYVjikw5IVE8rbliQX1RHhTbA9RSMg gAIw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=u5ozOMfbSbMk3JbLlaojba34kLKjKijBWc0CEO34nXY=; b=fX5K8dJ61tROVOAJWLFX0VZhgihhoWEE0agDUYaw+FqqfafpFvuQMzHOMahhwIAKzn n35HUyBjzTt0PltlFq2DzzJle9cMCD9fTapQXtM43K/QyjFJhQE/WI8Sn+Otr2o68PpT 6cgNkdVMftuTSYDAOJBQPwQ7pS4aCko0VS8kAIWdY0GdFH+hVdKPeemfIU/wRSLLbKge Jx7qem6qcE61SOrebNmMDscnTIL7YUHOBjCBzbH4ZVFpVk+8dpRaxWOos6nEPOZ+HT+Y 7WuHXlAtjAGyxobgmLxtATQ+cf5oF0Ye2xmmB2BeZ57y4yG6AT3dskZdh6rP/fTCKUoL 8OpA== X-Gm-Message-State: APjAAAXbIELJcGAfVuIiocE4NdsHXwMIcXpqScwEJ44ebdnQ3xVv6acl iajDp+8g6Y0Oa9/s8JtDx0XKw/nPj8Q= X-Google-Smtp-Source: APXvYqx0XM2egyArVGjPpsAqYTm7bPgbBiwBAfUFKBfyNYjGi7RHTYeIoZi1FJy227xYyvsmsGo6rw== X-Received: by 2002:a02:cc8e:: with SMTP id s14mr13086375jap.142.1560990684590; Wed, 19 Jun 2019 17:31:24 -0700 (PDT) From: Christopher Clark To: xen-devel@lists.xenproject.org Date: Wed, 19 Jun 2019 17:30:49 -0700 Message-Id: <20190620003053.21993-6-christopher.w.clark@gmail.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190620003053.21993-1-christopher.w.clark@gmail.com> References: 
<20190620003053.21993-1-christopher.w.clark@gmail.com> Subject: [Xen-devel] [RFC 5/9] x86/nested, xsm: add nested_memory_op hypercall X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , Stefano Stabellini , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Rich Persaud , Tim Deegan , Julien Grall , Jan Beulich , Daniel De Graaf , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Provides proxying to the host hypervisor for the XENMEM_add_to_physmap op only for the XENMAPSPACE_shared_info and XENMAPSPACE_grant_table spaces, for DOMID_SELF. Both compat and native entry points. Signed-off-by: Christopher Clark --- tools/flask/policy/modules/dom0.te | 1 + xen/arch/x86/guest/hypercall_page.S | 1 + xen/arch/x86/guest/xen-nested.c | 80 +++++++++++++++++++++++++++++ xen/arch/x86/hypercall.c | 1 + xen/arch/x86/pv/hypercall.c | 1 + xen/include/public/xen.h | 1 + xen/include/xen/hypercall.h | 10 ++++ xen/include/xsm/dummy.h | 7 +++ xen/include/xsm/xsm.h | 7 +++ xen/xsm/dummy.c | 1 + xen/xsm/flask/hooks.c | 15 ++++++ 11 files changed, 125 insertions(+) diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/module= s/dom0.te index 9ed7ccb57b..1f564ff83b 100644 --- a/tools/flask/policy/modules/dom0.te +++ b/tools/flask/policy/modules/dom0.te @@ -45,6 +45,7 @@ allow dom0_t dom0_t:resource { add remove }; =20 # Allow dom0 to communicate with a nested Xen hypervisor allow dom0_t nestedxen_t:version { xen_version xen_get_features }; +allow dom0_t nestedxen_t:mmu physmap; =20 # These permissions allow using the FLASK security server to compute access # checks locally, which could 
be used by a domain or service (such as xens= tore) diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hyper= call_page.S index 2b1e35803a..1a8dd0ea4f 100644 --- a/xen/arch/x86/guest/hypercall_page.S +++ b/xen/arch/x86/guest/hypercall_page.S @@ -61,6 +61,7 @@ DECLARE_HYPERCALL(kexec_op) DECLARE_HYPERCALL(argo_op) DECLARE_HYPERCALL(xenpmu_op) DECLARE_HYPERCALL(nested_xen_version) +DECLARE_HYPERCALL(nested_memory_op) =20 DECLARE_HYPERCALL(arch_0) DECLARE_HYPERCALL(arch_1) diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-neste= d.c index fcfa5e1087..a76983cc2d 100644 --- a/xen/arch/x86/guest/xen-nested.c +++ b/xen/arch/x86/guest/xen-nested.c @@ -22,11 +22,17 @@ #include #include =20 +#include #include +#include =20 #include #include =20 +#ifdef CONFIG_COMPAT +#include +#endif + extern char hypercall_page[]; =20 /* xen_nested: support for nested PV interface enabled */ @@ -80,3 +86,77 @@ long do_nested_xen_version(int cmd, XEN_GUEST_HANDLE_PAR= AM(void) arg) return -EOPNOTSUPP; } } + +static long nested_add_to_physmap(struct xen_add_to_physmap xatp) +{ + struct domain *d; + long ret; + + if ( !xen_nested ) + return -ENOSYS; + + if ( (xatp.space !=3D XENMAPSPACE_shared_info) && + (xatp.space !=3D XENMAPSPACE_grant_table) ) + { + gprintk(XENLOG_ERR, "Nested memory op: unknown xatp.space: %u\n", + xatp.space); + return -EINVAL; + } + + if ( xatp.domid !=3D DOMID_SELF ) + return -EPERM; + + ret =3D xsm_nested_add_to_physmap(XSM_PRIV, current->domain); + if ( ret ) + return ret; + + gprintk(XENLOG_DEBUG, "Nested XENMEM_add_to_physmap: %d\n", xatp.space= ); + + d =3D rcu_lock_current_domain(); + + ret =3D xen_hypercall_memory_op(XENMEM_add_to_physmap, &xatp); + + rcu_unlock_domain(d); + + if ( ret ) + gprintk(XENLOG_ERR, "Nested memory op failed add_to_physmap" + " for %d with %ld\n", xatp.space, ret); + return ret; +} + +long do_nested_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg) +{ + struct xen_add_to_physmap xatp; + + if ( 
cmd !=3D XENMEM_add_to_physmap ) + { + gprintk(XENLOG_ERR, "Nested memory op %u not implemented.\n", cmd); + return -EOPNOTSUPP; + } + + if ( copy_from_guest(&xatp, arg, 1) ) + return -EFAULT; + + return nested_add_to_physmap(xatp); +} + +#ifdef CONFIG_COMPAT +int compat_nested_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg) +{ + struct compat_add_to_physmap cmp; + struct xen_add_to_physmap *nat =3D COMPAT_ARG_XLAT_VIRT_BASE; + + if ( cmd !=3D XENMEM_add_to_physmap ) + { + gprintk(XENLOG_ERR, "Nested memory op %u not implemented.\n", cmd); + return -EOPNOTSUPP; + } + + if ( copy_from_guest(&cmp, arg, 1) ) + return -EFAULT; + + XLAT_add_to_physmap(nat, &cmp); + + return nested_add_to_physmap(*nat); +} +#endif diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c index b22f0ca65a..2aa8dc5ac6 100644 --- a/xen/arch/x86/hypercall.c +++ b/xen/arch/x86/hypercall.c @@ -75,6 +75,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls= ] =3D #endif #ifdef CONFIG_XEN_NESTED ARGS(nested_xen_version, 2), + COMP(nested_memory_op, 2, 2), #endif ARGS(mca, 1), ARGS(arch_1, 1), diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c index 1e00d07273..96198d3313 100644 --- a/xen/arch/x86/pv/hypercall.c +++ b/xen/arch/x86/pv/hypercall.c @@ -86,6 +86,7 @@ const hypercall_table_t pv_hypercall_table[] =3D { #endif #ifdef CONFIG_XEN_NESTED HYPERCALL(nested_xen_version), + COMPAT_CALL(nested_memory_op), #endif HYPERCALL(mca), HYPERCALL(arch_1), diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h index 2f5ac5eedc..e081f52fc4 100644 --- a/xen/include/public/xen.h +++ b/xen/include/public/xen.h @@ -122,6 +122,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t); #define __HYPERVISOR_xenpmu_op 40 #define __HYPERVISOR_dm_op 41 #define __HYPERVISOR_nested_xen_version 42 +#define __HYPERVISOR_nested_memory_op 43 =20 /* Architecture-specific hypercall definitions. 
*/ #define __HYPERVISOR_arch_0 48 diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h index 15194002d6..d373bd1763 100644 --- a/xen/include/xen/hypercall.h +++ b/xen/include/xen/hypercall.h @@ -154,6 +154,10 @@ do_dm_op( extern long do_nested_xen_version( int cmd, XEN_GUEST_HANDLE_PARAM(void) arg); + +extern long do_nested_memory_op( + int cmd, + XEN_GUEST_HANDLE_PARAM(void) arg); #endif =20 #ifdef CONFIG_COMPAT @@ -222,6 +226,12 @@ compat_dm_op( unsigned int nr_bufs, XEN_GUEST_HANDLE_PARAM(void) bufs); =20 +#ifdef CONFIG_XEN_NESTED +extern int compat_nested_memory_op( + int cmd, + XEN_GUEST_HANDLE_PARAM(void) arg); +#endif + #endif =20 void arch_get_xen_caps(xen_capabilities_info_t *info); diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h index 8011bf2cb4..17375f6b9f 100644 --- a/xen/include/xsm/dummy.h +++ b/xen/include/xsm/dummy.h @@ -747,6 +747,13 @@ static XSM_INLINE int xsm_nested_xen_version(XSM_DEFAU= LT_ARG XSM_ASSERT_ACTION(XSM_PRIV); return xsm_default_action(action, d, NULL); } + +static XSM_INLINE int xsm_nested_add_to_physmap(XSM_DEFAULT_ARG + const struct domain *d) +{ + XSM_ASSERT_ACTION(XSM_PRIV); + return xsm_default_action(action, d, NULL); +} #endif =20 #include diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h index 96044cb55a..920d2d9088 100644 --- a/xen/include/xsm/xsm.h +++ b/xen/include/xsm/xsm.h @@ -189,6 +189,7 @@ struct xsm_operations { #endif #ifdef CONFIG_XEN_NESTED int (*nested_xen_version) (const struct domain *d, unsigned int cmd); + int (*nested_add_to_physmap) (const struct domain *d); #endif }; =20 @@ -734,6 +735,12 @@ static inline int xsm_nested_xen_version(xsm_default_t= def, return xsm_ops->nested_xen_version(d, cmd); } =20 +static inline int xsm_nested_add_to_physmap(xsm_default_t def, + const struct domain *d) +{ + return xsm_ops->nested_add_to_physmap(d); +} + #endif /* CONFIG_XEN_NESTED */ =20 #endif /* XSM_NO_WRAPPERS */ diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c index 
ed0a4b0691..5ce29bcfe5 100644 --- a/xen/xsm/dummy.c +++ b/xen/xsm/dummy.c @@ -159,5 +159,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops) #endif #ifdef CONFIG_XEN_NESTED set_to_dummy_if_null(ops, nested_xen_version); + set_to_dummy_if_null(ops, nested_add_to_physmap); #endif } diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c index 2835279fe7..17a81b85f9 100644 --- a/xen/xsm/flask/hooks.c +++ b/xen/xsm/flask/hooks.c @@ -1749,6 +1749,20 @@ static int flask_argo_send(const struct domain *d, c= onst struct domain *t) #endif =20 #ifdef CONFIG_XEN_NESTED +static int domain_has_nested_perm(const struct domain *d, u16 class, u32 p= erm) +{ + struct avc_audit_data ad; + + AVC_AUDIT_DATA_INIT(&ad, NONE); + + return avc_has_perm(domain_sid(d), SECINITSID_NESTEDXEN, class, perm, = &ad); +} + +static int flask_nested_add_to_physmap(const struct domain *d) +{ + return domain_has_nested_perm(d, SECCLASS_MMU, MMU__PHYSMAP); +} + static int flask_nested_xen_version(const struct domain *d, unsigned int o= p) { return domain_has_xen_version(d, SECINITSID_NESTEDXEN, op); @@ -1897,6 +1911,7 @@ static struct xsm_operations flask_ops =3D { #endif #ifdef CONFIG_XEN_NESTED .nested_xen_version =3D flask_nested_xen_version, + .nested_add_to_physmap =3D flask_nested_add_to_physmap, #endif }; =20 --=20 2.17.1 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel From nobody Fri Apr 26 11:29:03 2024 Delivered-To: importer@patchew.org Received-SPF: none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=none (zoho.com: 192.237.175.120 is neither permitted nor denied by domain of lists.xenproject.org) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; 
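The forwarding path this patch adds follows a fixed pattern: sanity-check the request, consult the XSM hook, then proxy the op to the host hypervisor. The following is a simplified, self-contained model of that filtering — the types, constants, and stub globals are stand-ins for the real Xen definitions, and the final `return 0` marks where the real code issues `xen_hypercall_memory_op()`:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-ins for Xen's public definitions; illustration only. */
enum { XENMAPSPACE_shared_info = 0, XENMAPSPACE_grant_table = 1,
       XENMAPSPACE_gmfn = 2 };
#define DOMID_SELF 0x7ff0U

struct xen_add_to_physmap {
    unsigned int domid;   /* which domain's physmap to modify */
    unsigned int space;   /* which map space the gfn belongs to */
};

static bool xen_nested = true;   /* set when the nested PV interface is up */
static int xsm_allow = 0;        /* 0 = XSM permits, else an -errno value */

/* Mirrors nested_add_to_physmap(): only shared_info/grant_table mappings
 * for DOMID_SELF pass validation and the XSM check before being proxied. */
static long nested_add_to_physmap(struct xen_add_to_physmap xatp)
{
    if ( !xen_nested )
        return -ENOSYS;

    if ( xatp.space != XENMAPSPACE_shared_info &&
         xatp.space != XENMAPSPACE_grant_table )
        return -EINVAL;          /* unknown map space: refuse to forward */

    if ( xatp.domid != DOMID_SELF )
        return -EPERM;           /* only the caller's own physmap */

    if ( xsm_allow )             /* models the xsm_nested_add_to_physmap hook */
        return xsm_allow;

    return 0;                    /* real code would proxy to the host here */
}
```

A request that names any other map space, or a foreign domid, never reaches the host hypercall.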
From nobody Fri Apr 26 11:29:03 2024

From: Christopher Clark
To: xen-devel@lists.xenproject.org
Date: Wed, 19 Jun 2019 17:30:50 -0700
Message-Id: <20190620003053.21993-7-christopher.w.clark@gmail.com>
In-Reply-To:
 <20190620003053.21993-1-christopher.w.clark@gmail.com>
References: <20190620003053.21993-1-christopher.w.clark@gmail.com>
Subject: [Xen-devel] [RFC 6/9] x86/nested, xsm: add nested_hvm_op hypercall

Provides proxying to the host hypervisor for HVMOP_get_param and
HVMOP_set_param ops.

Signed-off-by: Christopher Clark
---
 tools/flask/policy/modules/dom0.te  |  1 +
 xen/arch/x86/guest/hypercall_page.S |  1 +
 xen/arch/x86/guest/xen-nested.c     | 42 +++++++++++++++++++++++++++++
 xen/arch/x86/hypercall.c            |  1 +
 xen/arch/x86/pv/hypercall.c         |  1 +
 xen/include/public/xen.h            |  1 +
 xen/include/xen/hypercall.h         |  4 +++
 xen/include/xsm/dummy.h             |  7 +++++
 xen/include/xsm/xsm.h               |  7 +++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               | 22 +++++++++++++++
 11 files changed, 88 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 1f564ff83b..7d0f29f082 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -46,6 +46,7 @@ allow dom0_t dom0_t:resource { add remove };
 # Allow dom0 to communicate with a nested Xen hypervisor
 allow dom0_t nestedxen_t:version { xen_version xen_get_features };
 allow dom0_t nestedxen_t:mmu physmap;
+allow dom0_t nestedxen_t:hvm { setparam getparam };
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 1a8dd0ea4f..adbb82f4ec 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -62,6 +62,7 @@ DECLARE_HYPERCALL(argo_op)
 DECLARE_HYPERCALL(xenpmu_op)
 DECLARE_HYPERCALL(nested_xen_version)
 DECLARE_HYPERCALL(nested_memory_op)
+DECLARE_HYPERCALL(nested_hvm_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index a76983cc2d..82bd6885e6 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -22,6 +22,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -160,3 +161,44 @@ int compat_nested_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return nested_add_to_physmap(*nat);
 }
 #endif
+
+long do_nested_hvm_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct xen_hvm_param a;
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    ret = xsm_nested_hvm_op(XSM_PRIV, current->domain, cmd);
+    if ( ret )
+        return ret;
+
+    switch ( cmd )
+    {
+    case HVMOP_set_param:
+    {
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_hvm_op(cmd, &a);
+    }
+
+    case HVMOP_get_param:
+    {
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        ret = xen_hypercall_hvm_op(cmd, &a);
+
+        if ( !ret && __copy_to_guest(arg, &a, 1) )
+            return -EFAULT;
+
+        return ret;
+    }
+
+    default:
+        gprintk(XENLOG_ERR, "Nested hvm op %d not implemented.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+}
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 2aa8dc5ac6..268cc9450a 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -76,6 +76,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
 #ifdef CONFIG_XEN_NESTED
     ARGS(nested_xen_version, 2),
     COMP(nested_memory_op, 2, 2),
+    ARGS(nested_hvm_op, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index 96198d3313..e88ecce222 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -87,6 +87,7 @@ const hypercall_table_t pv_hypercall_table[] = {
 #ifdef CONFIG_XEN_NESTED
     HYPERCALL(nested_xen_version),
     COMPAT_CALL(nested_memory_op),
+    HYPERCALL(nested_hvm_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index e081f52fc4..1731409eb8 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -123,6 +123,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_dm_op 41
 #define __HYPERVISOR_nested_xen_version 42
 #define __HYPERVISOR_nested_memory_op 43
+#define __HYPERVISOR_nested_hvm_op 44
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0 48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index d373bd1763..b09070539e 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -158,6 +158,10 @@ extern long do_nested_xen_version(
 extern long do_nested_memory_op(
     int cmd,
     XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern long do_nested_hvm_op(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 #endif
 
 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 17375f6b9f..238b425c49 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -754,6 +754,13 @@ static XSM_INLINE int xsm_nested_add_to_physmap(XSM_DEFAULT_ARG
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_hvm_op(XSM_DEFAULT_ARG const struct domain *d,
+                                        unsigned int cmd)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 920d2d9088..cc02bf18c7 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -190,6 +190,7 @@ struct xsm_operations {
 #ifdef CONFIG_XEN_NESTED
     int (*nested_xen_version) (const struct domain *d, unsigned int cmd);
     int (*nested_add_to_physmap) (const struct domain *d);
+    int (*nested_hvm_op) (const struct domain *d, unsigned int cmd);
 #endif
 };
 
@@ -741,6 +742,12 @@ static inline int xsm_nested_add_to_physmap(xsm_default_t def,
     return xsm_ops->nested_add_to_physmap(d);
 }
 
+static inline int xsm_nested_hvm_op(xsm_default_t def, const struct domain *d,
+                                    unsigned int cmd)
+{
+    return xsm_ops->nested_hvm_op(d, cmd);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 5ce29bcfe5..909d41a81b 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -160,5 +160,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_XEN_NESTED
     set_to_dummy_if_null(ops, nested_xen_version);
     set_to_dummy_if_null(ops, nested_add_to_physmap);
+    set_to_dummy_if_null(ops, nested_hvm_op);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 17a81b85f9..f8d247e28f 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1768,6 +1768,27 @@ static int flask_nested_xen_version(const struct domain *d, unsigned int op)
     return domain_has_xen_version(d, SECINITSID_NESTEDXEN, op);
 }
 
+static int flask_nested_hvm_op(const struct domain *d, unsigned int op)
+{
+    u32 perm;
+
+    switch ( op )
+    {
+    case HVMOP_set_param:
+        perm = HVM__SETPARAM;
+        break;
+
+    case HVMOP_get_param:
+        perm = HVM__GETPARAM;
+        break;
+
+    default:
+        perm = HVM__HVMCTL;
+    }
+
+    return domain_has_nested_perm(d, SECCLASS_HVM, perm);
+}
+
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1912,6 +1933,7 @@ static struct xsm_operations flask_ops = {
 #ifdef CONFIG_XEN_NESTED
     .nested_xen_version = flask_nested_xen_version,
     .nested_add_to_physmap = flask_nested_add_to_physmap,
+    .nested_hvm_op = flask_nested_hvm_op,
 #endif
 };
 
-- 
2.17.1
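The essential behaviour of this patch's do_nested_hvm_op — forward a set unchanged, forward a get and copy the host's answer back to the caller, reject everything else — can be modelled in a few lines. The types and constants below are simplified stand-ins, not the real Xen definitions, and a small array stands in for the host hypervisor's parameter store:

```c
#include <assert.h>
#include <errno.h>

/* Stand-ins for Xen's public HVMOP definitions; illustration only. */
enum { HVMOP_set_param = 0, HVMOP_get_param = 1, HVMOP_track_dirty_vram = 6 };

struct xen_hvm_param {
    unsigned int domid;     /* target domain */
    unsigned int index;     /* which HVM parameter */
    unsigned long value;    /* in for set, out for get */
};

static unsigned long host_params[8];   /* models the host's parameter store */

/* Mirrors the proxy: set forwards the value, get forwards the request and
 * copies the result back to the caller; other ops are rejected. */
static long nested_hvm_op(int cmd, struct xen_hvm_param *a)
{
    switch ( cmd )
    {
    case HVMOP_set_param:
        host_params[a->index] = a->value;     /* forward to "host" */
        return 0;

    case HVMOP_get_param:
        a->value = host_params[a->index];     /* copy-back only on get */
        return 0;

    default:
        return -EOPNOTSUPP;                   /* as in do_nested_hvm_op */
    }
}
```

Note the asymmetry the real patch encodes with `__copy_to_guest`: only the get path writes anything back to the caller's buffer.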
From nobody Fri Apr 26 11:29:03 2024

From: Christopher Clark
To: xen-devel@lists.xenproject.org
Date: Wed, 19 Jun 2019 17:30:51 -0700
Message-Id: <20190620003053.21993-8-christopher.w.clark@gmail.com>
In-Reply-To: <20190620003053.21993-1-christopher.w.clark@gmail.com>
References: <20190620003053.21993-1-christopher.w.clark@gmail.com>
Subject: [Xen-devel] [RFC 7/9] x86/nested, xsm: add nested_grant_table_op hypercall

Provides proxying to the host hypervisor for the GNTTABOP_query_size op.
Signed-off-by: Christopher Clark
---
 tools/flask/policy/modules/dom0.te  |  1 +
 xen/arch/x86/guest/hypercall_page.S |  1 +
 xen/arch/x86/guest/xen-nested.c     | 37 +++++++++++++++++++++++++++++
 xen/arch/x86/hypercall.c            |  1 +
 xen/arch/x86/pv/hypercall.c         |  1 +
 xen/include/public/xen.h            |  1 +
 xen/include/xen/hypercall.h         |  5 ++++
 xen/include/xsm/dummy.h             |  7 ++++++
 xen/include/xsm/xsm.h               |  7 ++++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               |  6 +++++
 11 files changed, 68 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 7d0f29f082..03c93a3093 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -47,6 +47,7 @@ allow dom0_t dom0_t:resource { add remove };
 allow dom0_t nestedxen_t:version { xen_version xen_get_features };
 allow dom0_t nestedxen_t:mmu physmap;
 allow dom0_t nestedxen_t:hvm { setparam getparam };
+allow dom0_t nestedxen_t:grant query;
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index adbb82f4ec..33403714ce 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -63,6 +63,7 @@ DECLARE_HYPERCALL(xenpmu_op)
 DECLARE_HYPERCALL(nested_xen_version)
 DECLARE_HYPERCALL(nested_memory_op)
 DECLARE_HYPERCALL(nested_hvm_op)
+DECLARE_HYPERCALL(nested_grant_table_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index 82bd6885e6..a4049e366f 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -22,6 +22,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -202,3 +203,39 @@ long do_nested_hvm_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return -EOPNOTSUPP;
     }
 }
+
+long do_nested_grant_table_op(unsigned int cmd,
+                              XEN_GUEST_HANDLE_PARAM(void) uop,
+                              unsigned int count)
+{
+    struct gnttab_query_size op;
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    if ( cmd != GNTTABOP_query_size )
+    {
+        gprintk(XENLOG_ERR, "Nested grant table op %u not supported.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+
+    if ( count != 1 )
+        return -EINVAL;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    if ( op.dom != DOMID_SELF )
+        return -EPERM;
+
+    ret = xsm_nested_grant_query_size(XSM_PRIV, current->domain);
+    if ( ret )
+        return ret;
+
+    ret = xen_hypercall_grant_table_op(cmd, &op, 1);
+    if ( !ret && __copy_to_guest(uop, &op, 1) )
+        return -EFAULT;
+
+    return ret;
+}
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 268cc9450a..1b9f4c6050 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -77,6 +77,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     ARGS(nested_xen_version, 2),
     COMP(nested_memory_op, 2, 2),
     ARGS(nested_hvm_op, 2),
+    ARGS(nested_grant_table_op, 3),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index e88ecce222..efa1bd0830 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -88,6 +88,7 @@ const hypercall_table_t pv_hypercall_table[] = {
     HYPERCALL(nested_xen_version),
     COMPAT_CALL(nested_memory_op),
     HYPERCALL(nested_hvm_op),
+    HYPERCALL(nested_grant_table_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 1731409eb8..000b7fc9d0 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -124,6 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_nested_xen_version 42
 #define __HYPERVISOR_nested_memory_op 43
 #define __HYPERVISOR_nested_hvm_op 44
+#define __HYPERVISOR_nested_grant_table_op 45
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0 48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index b09070539e..102b20fd5f 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -162,6 +162,11 @@ extern long do_nested_memory_op(
 extern long do_nested_hvm_op(
     int cmd,
     XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern long do_nested_grant_table_op(
+    unsigned int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
+    unsigned int count);
 #endif
 
 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 238b425c49..f5871ef05a 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -761,6 +761,13 @@ static XSM_INLINE int xsm_nested_hvm_op(XSM_DEFAULT_ARG const struct domain *d,
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_grant_query_size(XSM_DEFAULT_ARG
+                                                  const struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index cc02bf18c7..e12001c401 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -191,6 +191,7 @@ struct xsm_operations {
     int (*nested_xen_version) (const struct domain *d, unsigned int cmd);
     int (*nested_add_to_physmap) (const struct domain *d);
     int (*nested_hvm_op) (const struct domain *d, unsigned int cmd);
+    int (*nested_grant_query_size) (const struct domain *d);
 #endif
 };
 
@@ -748,6 +749,12 @@ static inline int xsm_nested_hvm_op(xsm_default_t def, const struct domain *d,
     return xsm_ops->nested_hvm_op(d, cmd);
 }
 
+static inline int xsm_nested_grant_query_size(xsm_default_t def,
+                                              const struct domain *d)
+{
+    return xsm_ops->nested_grant_query_size(d);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 909d41a81b..8c213c258f 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -161,5 +161,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, nested_xen_version);
     set_to_dummy_if_null(ops, nested_add_to_physmap);
     set_to_dummy_if_null(ops, nested_hvm_op);
+    set_to_dummy_if_null(ops, nested_grant_query_size);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f8d247e28f..2988df2cd1 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1789,6 +1789,11 @@ static int flask_nested_hvm_op(const struct domain *d, unsigned int op)
     return domain_has_nested_perm(d, SECCLASS_HVM, perm);
 }
 
+static int flask_nested_grant_query_size(const struct domain *d)
+{
+    return domain_has_nested_perm(d, SECCLASS_GRANT, GRANT__QUERY);
+}
+
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1934,6 +1939,7 @@ static struct xsm_operations flask_ops = {
     .nested_xen_version = flask_nested_xen_version,
     .nested_add_to_physmap = flask_nested_add_to_physmap,
     .nested_hvm_op = flask_nested_hvm_op,
+    .nested_grant_query_size = flask_nested_grant_query_size,
 #endif
 };
 
-- 
2.17.1
From nobody Fri Apr 26 11:29:03 2024
From: Christopher Clark <christopher.w.clark@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 19 Jun 2019 17:30:52 -0700
Message-Id: <20190620003053.21993-9-christopher.w.clark@gmail.com>
In-Reply-To: <20190620003053.21993-1-christopher.w.clark@gmail.com>
Subject: [Xen-devel] [RFC 8/9] x86/nested, xsm: add nested_event_channel_op hypercall
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Rich Persaud, Tim Deegan,
    Julien Grall, Jan Beulich, Daniel De Graaf, Roger Pau Monné

Provides proxying to the host hypervisor for these event channel ops:

 * EVTCHNOP_alloc_unbound
 * EVTCHNOP_bind_vcpu
 * EVTCHNOP_close
 * EVTCHNOP_send
 * EVTCHNOP_unmask

Introduces a new XSM access vector class, nested_event, for policy
control applied to this operation. A new class is required because the
existing 'event' access vector is unsuitable for the nested case: it
operates on per-channel security identifiers generated from the
identifiers of the two communicating endpoints, and in the nested case
no such data is available for the remote endpoint, which is managed by
the host hypervisor.
Signed-off-by: Christopher Clark <christopher.w.clark@gmail.com>
---
 tools/flask/policy/modules/dom0.te    |  3 +
 xen/arch/x86/guest/hypercall_page.S   |  1 +
 xen/arch/x86/guest/xen-nested.c       | 84 +++++++++++++++++++++++++++
 xen/arch/x86/hypercall.c              |  1 +
 xen/arch/x86/pv/hypercall.c           |  1 +
 xen/include/public/xen.h              |  1 +
 xen/include/xen/hypercall.h           |  4 ++
 xen/include/xsm/dummy.h               |  8 +++
 xen/include/xsm/xsm.h                 |  8 +++
 xen/xsm/dummy.c                       |  1 +
 xen/xsm/flask/hooks.c                 | 35 +++++++++++
 xen/xsm/flask/policy/access_vectors   | 20 +++++++
 xen/xsm/flask/policy/security_classes |  1 +
 13 files changed, 168 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 03c93a3093..ba3c5ad63d 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -48,6 +48,9 @@ allow dom0_t nestedxen_t:version { xen_version xen_get_features };
 allow dom0_t nestedxen_t:mmu physmap;
 allow dom0_t nestedxen_t:hvm { setparam getparam };
 allow dom0_t nestedxen_t:grant query;
+allow dom0_t nestedxen_t:nested_event {
+    alloc_unbound bind_vcpu close send unmask
+};
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 33403714ce..64f1885629 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -64,6 +64,7 @@ DECLARE_HYPERCALL(nested_xen_version)
 DECLARE_HYPERCALL(nested_memory_op)
 DECLARE_HYPERCALL(nested_hvm_op)
 DECLARE_HYPERCALL(nested_grant_table_op)
+DECLARE_HYPERCALL(nested_event_channel_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index a4049e366f..babf4bf783 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -22,6 +22,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -239,3 +240,86 @@ long do_nested_grant_table_op(unsigned int cmd,
 
     return ret;
 }
+
+long do_nested_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    ret = xsm_nested_event_channel_op(XSM_PRIV, current->domain, cmd);
+    if ( ret )
+        return ret;
+
+    switch ( cmd )
+    {
+    case EVTCHNOP_alloc_unbound:
+    {
+        struct evtchn_alloc_unbound alloc_unbound;
+
+        if ( copy_from_guest(&alloc_unbound, arg, 1) )
+            return -EFAULT;
+
+        ret = xen_hypercall_event_channel_op(cmd, &alloc_unbound);
+        if ( !ret && __copy_to_guest(arg, &alloc_unbound, 1) )
+        {
+            struct evtchn_close close;
+
+            ret = -EFAULT;
+            close.port = alloc_unbound.port;
+
+            if ( xen_hypercall_event_channel_op(EVTCHNOP_close, &close) )
+                gprintk(XENLOG_ERR, "Nested event alloc_unbound failed to close"
+                        " port %u on EFAULT\n", alloc_unbound.port);
+        }
+        break;
+    }
+
+    case EVTCHNOP_bind_vcpu:
+    {
+        struct evtchn_bind_vcpu bind_vcpu;
+
+        if ( copy_from_guest(&bind_vcpu, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_event_channel_op(cmd, &bind_vcpu);
+    }
+
+    case EVTCHNOP_close:
+    {
+        struct evtchn_close close;
+
+        if ( copy_from_guest(&close, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_event_channel_op(cmd, &close);
+    }
+
+    case EVTCHNOP_send:
+    {
+        struct evtchn_send send;
+
+        if ( copy_from_guest(&send, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_event_channel_op(cmd, &send);
+    }
+
+    case EVTCHNOP_unmask:
+    {
+        struct evtchn_unmask unmask;
+
+        if ( copy_from_guest(&unmask, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_event_channel_op(cmd, &unmask);
+    }
+
+    default:
+        gprintk(XENLOG_ERR, "Nested: event hypercall %d not supported.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 1b9f4c6050..752955ac81 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -78,6 +78,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     COMP(nested_memory_op, 2, 2),
     ARGS(nested_hvm_op, 2),
     ARGS(nested_grant_table_op, 3),
+    ARGS(nested_event_channel_op, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index efa1bd0830..6b1ae74d64 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -89,6 +89,7 @@ const hypercall_table_t pv_hypercall_table[] = {
     COMPAT_CALL(nested_memory_op),
     HYPERCALL(nested_hvm_op),
     HYPERCALL(nested_grant_table_op),
+    HYPERCALL(nested_event_channel_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 000b7fc9d0..5fb322e882 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -125,6 +125,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_nested_memory_op        43
 #define __HYPERVISOR_nested_hvm_op           44
 #define __HYPERVISOR_nested_grant_table_op   45
+#define __HYPERVISOR_nested_event_channel_op 46
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0                  48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 102b20fd5f..bd739c2dc7 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -167,6 +167,10 @@ extern long do_nested_grant_table_op(
     unsigned int cmd,
     XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
+
+extern long do_nested_event_channel_op(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 #endif
 
 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index f5871ef05a..f8162f3308 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -768,6 +768,14 @@ static XSM_INLINE int xsm_nested_grant_query_size(XSM_DEFAULT_ARG
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_event_channel_op(XSM_DEFAULT_ARG
+                                                  const struct domain *d,
+                                                  unsigned int cmd)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index e12001c401..81cb67b89b 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -192,6 +192,7 @@ struct xsm_operations {
     int (*nested_add_to_physmap) (const struct domain *d);
     int (*nested_hvm_op) (const struct domain *d, unsigned int cmd);
     int (*nested_grant_query_size) (const struct domain *d);
+    int (*nested_event_channel_op) (const struct domain *d, unsigned int cmd);
 #endif
 };
 
@@ -755,6 +756,13 @@ static inline int xsm_nested_grant_query_size(xsm_default_t def,
     return xsm_ops->nested_grant_query_size(d);
 }
 
+static inline int xsm_nested_event_channel_op(xsm_default_t def,
+                                              const struct domain *d,
+                                              unsigned int cmd)
+{
+    return xsm_ops->nested_event_channel_op(d, cmd);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 8c213c258f..91db264ddc 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -162,5 +162,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, nested_add_to_physmap);
     set_to_dummy_if_null(ops, nested_hvm_op);
     set_to_dummy_if_null(ops, nested_grant_query_size);
+    set_to_dummy_if_null(ops, nested_event_channel_op);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 2988df2cd1..27bfa01559 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1794,6 +1794,40 @@ static int flask_nested_grant_query_size(const struct domain *d)
     return domain_has_nested_perm(d, SECCLASS_GRANT, GRANT__QUERY);
 }
 
+static int flask_nested_event_channel_op(const struct domain *d,
+                                         unsigned int op)
+{
+    u32 perm;
+
+    switch ( op )
+    {
+    case EVTCHNOP_alloc_unbound:
+        perm = NESTED_EVENT__ALLOC_UNBOUND;
+        break;
+
+    case EVTCHNOP_bind_vcpu:
+        perm = NESTED_EVENT__BIND_VCPU;
+        break;
+
+    case EVTCHNOP_close:
+        perm = NESTED_EVENT__CLOSE;
+        break;
+
+    case EVTCHNOP_send:
+        perm = NESTED_EVENT__SEND;
+        break;
+
+    case EVTCHNOP_unmask:
+        perm = NESTED_EVENT__UNMASK;
+        break;
+
+    default:
+        return avc_unknown_permission("nested event channel op", op);
+    }
+
+    return domain_has_nested_perm(d, SECCLASS_NESTED_EVENT, perm);
+}
+
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1940,6 +1974,7 @@ static struct xsm_operations flask_ops = {
     .nested_add_to_physmap = flask_nested_add_to_physmap,
     .nested_hvm_op = flask_nested_hvm_op,
     .nested_grant_query_size = flask_nested_grant_query_size,
+    .nested_event_channel_op = flask_nested_event_channel_op,
 #endif
 };
 
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 7e0d5aa7bf..87caa36391 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -316,6 +316,26 @@ class event
     reset
 }
 
+# Class nested_event describes event channels to the host hypervisor
+# in a nested Xen-on-Xen system. Policy controls for these differ
+# from the interdomain event channels between guest VMs:
+# the guest hypervisor does not maintain security identifier information about
+# the remote event endpoint managed by the host hypervisor, so nested_event
+# channels do not have their own security label derived from a type transition.
+class nested_event
+{
+    # nested_event_channel_op: EVTCHNOP_alloc_unbound
+    alloc_unbound
+    # nested_event_channel_op: EVTCHNOP_bind_vcpu
+    bind_vcpu
+    # nested_event_channel_op: EVTCHNOP_close
+    close
+    # nested_event_channel_op: EVTCHNOP_send
+    send
+    # nested_event_channel_op: EVTCHNOP_unmask
+    unmask
+}
+
 # Class grant describes pages shared by grant mappings.  Pages use the security
 # label of their owning domain.
 class grant
diff --git a/xen/xsm/flask/policy/security_classes b/xen/xsm/flask/policy/security_classes
index 50ecbabc5c..ce5d00df23 100644
--- a/xen/xsm/flask/policy/security_classes
+++ b/xen/xsm/flask/policy/security_classes
@@ -20,5 +20,6 @@ class grant
 class security
 class version
 class argo
+class nested_event
 
 # FLASK
-- 
2.17.1
From nobody Fri Apr 26 11:29:03 2024
From: Christopher Clark <christopher.w.clark@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 19 Jun 2019 17:30:53 -0700
Message-Id: <20190620003053.21993-10-christopher.w.clark@gmail.com>
In-Reply-To: <20190620003053.21993-1-christopher.w.clark@gmail.com>
References: <20190620003053.21993-1-christopher.w.clark@gmail.com>
Subject: [Xen-devel] [RFC 9/9] x86/nested, xsm: add nested_schedop_shutdown hypercall
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Rich Persaud, Tim Deegan,
    Julien Grall, Jan Beulich, Daniel De Graaf, Roger Pau Monné

Provides proxying to the host hypervisor for the SCHEDOP_shutdown op.

Signed-off-by: Christopher Clark <christopher.w.clark@gmail.com>
---
 tools/flask/policy/modules/dom0.te  |  1 +
 xen/arch/x86/guest/hypercall_page.S |  1 +
 xen/arch/x86/guest/xen-nested.c     | 25 +++++++++++++++++++++++++
 xen/arch/x86/hypercall.c            |  1 +
 xen/arch/x86/pv/hypercall.c         |  1 +
 xen/include/public/xen.h            |  1 +
 xen/include/xen/hypercall.h         |  4 ++++
 xen/include/xsm/dummy.h             |  7 +++++++
 xen/include/xsm/xsm.h               |  7 +++++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               |  6 ++++++
 11 files changed, 55 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index ba3c5ad63d..23911aef4d 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -51,6 +51,7 @@ allow dom0_t nestedxen_t:grant query;
 allow dom0_t nestedxen_t:nested_event {
     alloc_unbound bind_vcpu close send unmask
 };
+allow dom0_t nestedxen_t:domain { shutdown };
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 64f1885629..28a631e850 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -65,6 +65,7 @@ DECLARE_HYPERCALL(nested_memory_op)
 DECLARE_HYPERCALL(nested_hvm_op)
 DECLARE_HYPERCALL(nested_grant_table_op)
 DECLARE_HYPERCALL(nested_event_channel_op)
+DECLARE_HYPERCALL(nested_sched_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index babf4bf783..4f33d5d9be 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -323,3 +324,27 @@ long do_nested_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     return ret;
 }
+
+long do_nested_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct sched_shutdown sched_shutdown;
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    if ( cmd != SCHEDOP_shutdown )
+    {
+        gprintk(XENLOG_ERR, "Nested: sched op %d not supported.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+
+    ret = xsm_nested_schedop_shutdown(XSM_PRIV, current->domain);
+    if ( ret )
+        return ret;
+
+    if ( copy_from_guest(&sched_shutdown, arg, 1) )
+        return -EFAULT;
+
+    return xen_hypercall_sched_op(cmd, &sched_shutdown);
+}
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 752955ac81..8bf1d74f14 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -79,6 +79,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     ARGS(nested_hvm_op, 2),
     ARGS(nested_grant_table_op, 3),
     ARGS(nested_event_channel_op, 2),
+    ARGS(nested_sched_op, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index 6b1ae74d64..4874e701e0 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -90,6 +90,7 @@ const hypercall_table_t pv_hypercall_table[] = {
     HYPERCALL(nested_hvm_op),
     HYPERCALL(nested_grant_table_op),
     HYPERCALL(nested_event_channel_op),
+    HYPERCALL(nested_sched_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 5fb322e882..62a23310e7 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -126,6 +126,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_nested_hvm_op           44
 #define __HYPERVISOR_nested_grant_table_op   45
 #define __HYPERVISOR_nested_event_channel_op 46
+#define __HYPERVISOR_nested_sched_op         47
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0                  48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index bd739c2dc7..96d6ba2cd2 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -171,6 +171,10 @@ extern long do_nested_grant_table_op(
 extern long do_nested_event_channel_op(
     int cmd,
     XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern long do_nested_sched_op(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 #endif
 
 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index f8162f3308..200f097d50 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -776,6 +776,13 @@ static XSM_INLINE int xsm_nested_event_channel_op(XSM_DEFAULT_ARG
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_schedop_shutdown(XSM_DEFAULT_ARG
+                                                  const struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 81cb67b89b..1cb70d427b 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -193,6 +193,7 @@ struct xsm_operations {
     int (*nested_hvm_op) (const struct domain *d, unsigned int cmd);
     int (*nested_grant_query_size) (const struct domain *d);
     int (*nested_event_channel_op) (const struct domain *d, unsigned int cmd);
+    int (*nested_schedop_shutdown) (const struct domain *d);
 #endif
 };
 
@@ -763,6 +764,12 @@ static inline int xsm_nested_event_channel_op(xsm_default_t def,
     return xsm_ops->nested_event_channel_op(d, cmd);
 }
 
+static inline int xsm_nested_schedop_shutdown(xsm_default_t def,
+                                              const struct domain *d)
+{
+    return xsm_ops->nested_schedop_shutdown(d);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 91db264ddc..ac6e5fdd49 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -163,5 +163,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, nested_hvm_op);
     set_to_dummy_if_null(ops, nested_grant_query_size);
     set_to_dummy_if_null(ops, nested_event_channel_op);
+    set_to_dummy_if_null(ops, nested_schedop_shutdown);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 27bfa01559..385ae1458c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1828,6 +1828,11 @@ static int flask_nested_event_channel_op(const struct domain *d,
     return domain_has_nested_perm(d, SECCLASS_NESTED_EVENT, perm);
 }
 
+static int flask_nested_schedop_shutdown(const struct domain *d)
+{
+    return domain_has_nested_perm(d, SECCLASS_DOMAIN, DOMAIN__SHUTDOWN);
+}
+
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1975,6 +1980,7 @@ static struct xsm_operations flask_ops = {
     .nested_hvm_op = flask_nested_hvm_op,
     .nested_grant_query_size = flask_nested_grant_query_size,
     .nested_event_channel_op = flask_nested_event_channel_op,
+    .nested_schedop_shutdown = flask_nested_schedop_shutdown,
 #endif
 };
 
-- 
2.17.1