From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Daniel De Graaf, "Daniel P. Smith", Ian Jackson,
    Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall,
    Stefano Stabellini, Volodymyr Babchuk, Roger Pau Monné,
    Bertrand Marquis, Wei Chen
Subject: [RFC PATCH] xen/memory: Introduce a hypercall to provide unallocated space
Date: Wed, 28 Jul 2021 19:18:30 +0300
Message-Id: <1627489110-25633-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

Add a XENMEM_get_unallocated_space hypercall whose purpose is to query the
hypervisor for regions of guest physical address space which are unused and
can therefore be used to create grant/foreign mappings, instead of wasting
real pages from the domain's memory to establish these mappings.

The problem with the current Linux-on-Xen-on-Arm behaviour is that if we want
to map some guest memory regions in advance, or to cache mappings in the
backend, we might run out of memory in the host (see XSA-300). This, of
course, depends on both the host and guest memory sizes.

The "unallocated space" cannot be determined precisely by the domain on Arm
without hypervisor involvement:
- not all device I/O regions are known by the time the domain starts creating
  grant/foreign mappings;
- Dom0 is not aware of the memory regions used for the identity mappings
  needed for the PV drivers to work.

In both cases we might end up re-using these regions by mistake. So the
hypervisor, which maintains the P2M for the domain, is in the best position
to provide the "unallocated space". The arch code is in charge of finding
these regions and filling in the corresponding array in the new helper
arch_get_unallocated_space().

This patch implements the common and Arm parts; the x86-specific bits are
left unimplemented for now.
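
For illustration only (this is not part of the patch, and it assumes the
corresponding Linux-side import of the new public interface definitions,
which has not been posted yet), a guest kernel could query its own
unallocated space roughly as follows; query_unallocated_space() and the
printed strings are invented for this sketch:

/*
 * Illustrative sketch: invoke the new subop from a Linux guest, assuming
 * xen_unallocated_region/xen_get_unallocated_space and the subop number
 * introduced below have been imported into the guest's Xen interface headers.
 */
static int query_unallocated_space(void)
{
    struct xen_unallocated_region regions[XEN_MAX_UNALLOCATED_REGIONS];
    struct xen_get_unallocated_space xgus = {
        .domid = DOMID_SELF,
        .nr_regions = ARRAY_SIZE(regions),   /* IN: capacity of 'regions' */
    };
    unsigned int i;
    int rc;

    set_xen_guest_handle(xgus.buffer, regions);

    rc = HYPERVISOR_memory_op(XENMEM_get_unallocated_space, &xgus);
    if ( rc )
        return rc;

    /* OUT: xgus.nr_regions now holds the number of entries actually written. */
    for ( i = 0; i < xgus.nr_regions; i++ )
        pr_info("unallocated: start gpfn 0x%lx, nr_gpfns %lu\n",
                (unsigned long)regions[i].start_gpfn,
                (unsigned long)regions[i].nr_gpfns);

    return 0;
}
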
Signed-off-by: Oleksandr Tyshchenko
Tested-by: Henry Wang
---
Current patch is based on the latest staging branch:
73c932d tools/libxc: use uint32_t for pirq in xc_domain_irq_permission
and also available at:
https://github.com/otyshchenko1/xen/commits/map_opt_ml1

The corresponding Linux changes you can find at:
https://github.com/otyshchenko1/linux/commits/map_opt_ml1
I am going to publish them soon.
---
 tools/flask/policy/modules/dom0.te  |  2 +-
 xen/arch/arm/mm.c                   | 18 ++++++++++++
 xen/common/memory.c                 | 56 +++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/mm.h            |  4 +++
 xen/include/asm-x86/mm.h            |  8 ++++++
 xen/include/public/memory.h         | 37 +++++++++++++++++++++++-
 xen/include/xsm/dummy.h             |  6 ++++
 xen/include/xsm/xsm.h               |  6 ++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               |  6 ++++
 xen/xsm/flask/policy/access_vectors |  2 ++
 11 files changed, 144 insertions(+), 2 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 0a63ce1..b40091f 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -39,7 +39,7 @@ allow dom0_t dom0_t:domain {
 };
 allow dom0_t dom0_t:domain2 {
     set_cpu_policy gettsc settsc setscheduler set_vnumainfo
-    get_vnumainfo psr_cmt_op psr_alloc get_cpu_policy
+    get_vnumainfo psr_cmt_op psr_alloc get_cpu_policy get_unallocated_space
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0e07335..8a70fe0 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1635,6 +1635,24 @@ unsigned long get_upper_mfn_bound(void)
     return max_page - 1;
 }
 
+#define GPFN_ALIGN_TO_GB(x) (((x) + (1UL << 18) - 1) & (~((1UL << 18) - 1)))
+
+int arch_get_unallocated_space(struct domain *d,
+                               struct xen_unallocated_region *regions,
+                               unsigned int *nr_regions)
+{
+    /*
+     * Everything not mapped into the P2M could be treated as unused. Taking
+     * into account that we have a lot of unallocated space above
+     * max_mapped_gfn, and in order to simplify things, just provide a single
+     * 8GB memory region aligned to a 1GB boundary for now.
+     */
+    regions->start_gpfn = GPFN_ALIGN_TO_GB(domain_get_maximum_gpfn(d) + 1);
+    regions->nr_gpfns = (1UL << 18) * 8;
+    *nr_regions = 1;
+
+    return 0;
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/memory.c b/xen/common/memory.c
index e07bd9a..3f9b816 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1811,6 +1811,62 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
                                  start_extent);
         break;
 
+    case XENMEM_get_unallocated_space:
+    {
+        struct xen_get_unallocated_space xgus;
+        struct xen_unallocated_region *regions;
+
+        if ( unlikely(start_extent) )
+            return -EINVAL;
+
+        if ( copy_from_guest(&xgus, arg, 1) ||
+             !guest_handle_okay(xgus.buffer, xgus.nr_regions) )
+            return -EFAULT;
+
+        d = rcu_lock_domain_by_any_id(xgus.domid);
+        if ( d == NULL )
+            return -ESRCH;
+
+        rc = xsm_get_unallocated_space(XSM_HOOK, d);
+        if ( rc )
+        {
+            rcu_unlock_domain(d);
+            return rc;
+        }
+
+        if ( !xgus.nr_regions || xgus.nr_regions > XEN_MAX_UNALLOCATED_REGIONS )
+        {
+            rcu_unlock_domain(d);
+            return -EINVAL;
+        }
+
+        regions = xzalloc_array(struct xen_unallocated_region, xgus.nr_regions);
+        if ( !regions )
+        {
+            rcu_unlock_domain(d);
+            return -ENOMEM;
+        }
+
+        rc = arch_get_unallocated_space(d, regions, &xgus.nr_regions);
+        if ( rc )
+            goto unallocated_out;
+
+        if ( __copy_to_guest(xgus.buffer, regions, xgus.nr_regions) )
+        {
+            rc = -EFAULT;
+            goto unallocated_out;
+        }
+
+        if ( __copy_to_guest(arg, &xgus, 1) )
+            rc = -EFAULT;
+
+unallocated_out:
+        rcu_unlock_domain(d);
+        xfree(regions);
+
+        break;
+    }
+
     default:
         rc = arch_memory_op(cmd, arg);
         break;
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index ded74d2..99a2824 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -359,6 +359,10 @@ void clear_and_clean_page(struct page_info *page);
 
 unsigned int arch_get_dma_bitsize(void);
 
+int arch_get_unallocated_space(struct domain *d,
+                               struct xen_unallocated_region *regions,
+                               unsigned int *nr_regions);
+
 #endif /* __ARCH_ARM_MM__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index cb90527..6244bc4 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -652,4 +652,12 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }
 
+static inline
+int arch_get_unallocated_space(struct domain *d,
+                               struct xen_unallocated_region *regions,
+                               unsigned int *nr_regions)
+{
+    return -EOPNOTSUPP;
+}
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 383a946..040201b 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -739,7 +739,42 @@ struct xen_vnuma_topology_info {
 typedef struct xen_vnuma_topology_info xen_vnuma_topology_info_t;
 DEFINE_XEN_GUEST_HANDLE(xen_vnuma_topology_info_t);
 
-/* Next available subop number is 29 */
+/*
+ * Get the unallocated space (regions of guest physical address space which
+ * are unused) that can be used to create grant/foreign mappings.
+ */
+#define XENMEM_get_unallocated_space 29
+struct xen_unallocated_region {
+    xen_pfn_t start_gpfn;
+    xen_ulong_t nr_gpfns;
+};
+typedef struct xen_unallocated_region xen_unallocated_region_t;
+DEFINE_XEN_GUEST_HANDLE(xen_unallocated_region_t);
+
+#define XEN_MAX_UNALLOCATED_REGIONS 32
+
+struct xen_get_unallocated_space {
+    /* IN - Which domain to provide unallocated space for */
+    domid_t domid;
+
+    /*
+     * IN/OUT - As an IN parameter number of memory regions which
+     * can be written to the buffer (maximum size of the array)
+     * As OUT parameter number of memory regions which
+     * have been written to the buffer
+     */
+    unsigned int nr_regions;
+
+    /*
+     * OUT - An array of memory regions, the regions must be placed in
+     * ascending order, there must be no overlap between them.
+     */
+    XEN_GUEST_HANDLE(xen_unallocated_region_t) buffer;
+};
+typedef struct xen_get_unallocated_space xen_get_unallocated_space_t;
+DEFINE_XEN_GUEST_HANDLE(xen_get_unallocated_space_t);
+
+/* Next available subop number is 30 */
 
 #endif /* __XEN_PUBLIC_MEMORY_H__ */
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 363c6d7..2761fbd 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -772,3 +772,9 @@ static XSM_INLINE int xsm_domain_resource_map(XSM_DEFAULT_ARG struct domain *d)
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
+
+static XSM_INLINE int xsm_get_unallocated_space(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_HOOK);
+    return xsm_default_action(action, current->domain, d);
+}
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index ad3cddb..b01619c 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -180,6 +180,7 @@ struct xsm_operations {
     int (*dm_op) (struct domain *d);
     int (*xen_version) (uint32_t cmd);
     int (*domain_resource_map) (struct domain *d);
+    int (*get_unallocated_space) (struct domain *d);
 #ifdef CONFIG_ARGO
     int (*argo_enable) (const struct domain *d);
     int (*argo_register_single_source) (const struct domain *d,
@@ -701,6 +702,11 @@ static inline int xsm_domain_resource_map(xsm_default_t def, struct domain *d)
     return xsm_ops->domain_resource_map(d);
 }
 
+static inline int xsm_get_unallocated_space(xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->get_unallocated_space(d);
+}
+
 #ifdef CONFIG_ARGO
 static inline int xsm_argo_enable(const struct domain *d)
 {
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index de44b10..45efadb 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -151,6 +151,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, dm_op);
     set_to_dummy_if_null(ops, xen_version);
     set_to_dummy_if_null(ops, domain_resource_map);
+    set_to_dummy_if_null(ops, get_unallocated_space);
 #ifdef CONFIG_ARGO
     set_to_dummy_if_null(ops, argo_enable);
     set_to_dummy_if_null(ops, argo_register_single_source);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f1a1217..38a9b20 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1715,6 +1715,11 @@ static int flask_domain_resource_map(struct domain *d)
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__RESOURCE_MAP);
 }
 
+static int flask_get_unallocated_space(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_UNALLOCATED_SPACE);
+}
+
 #ifdef CONFIG_ARGO
 static int flask_argo_enable(const struct domain *d)
 {
@@ -1875,6 +1880,7 @@ static struct xsm_operations flask_ops = {
     .dm_op = flask_dm_op,
     .xen_version = flask_xen_version,
     .domain_resource_map = flask_domain_resource_map,
+    .get_unallocated_space = flask_get_unallocated_space,
 #ifdef CONFIG_ARGO
     .argo_enable = flask_argo_enable,
     .argo_register_single_source = flask_argo_register_single_source,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 6359c7f..3cbdc19 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -245,6 +245,8 @@ class domain2
     resource_map
 # XEN_DOMCTL_get_cpu_policy
     get_cpu_policy
+# XENMEM_get_unallocated_space
+    get_unallocated_space
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.7.4
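
As a purely hypothetical illustration (not part of the patch above, nor of
the pending Linux series), a backend could carve grant/foreign-mapping space
out of the regions returned by XENMEM_get_unallocated_space along the
following lines; the gpfn_pool helpers are invented for this sketch and
assume a kernel/hypervisor environment where xen_pfn_t, xen_ulong_t and
-ENOMEM are available:

/*
 * Hypothetical consumer sketch: seed a trivial GPFN range allocator from a
 * region reported by XENMEM_get_unallocated_space and hand out chunks for
 * grant/foreign mappings instead of consuming real pages of the domain.
 */
struct gpfn_pool {
    xen_pfn_t next;   /* next free GPFN within the region */
    xen_pfn_t end;    /* first GPFN past the region */
};

static void gpfn_pool_init(struct gpfn_pool *pool,
                           const struct xen_unallocated_region *r)
{
    pool->next = r->start_gpfn;
    pool->end = r->start_gpfn + r->nr_gpfns;
}

/* Reserve 'count' contiguous GPFNs for a mapping; returns 0 on success. */
static int gpfn_pool_alloc(struct gpfn_pool *pool, xen_ulong_t count,
                           xen_pfn_t *first_gpfn)
{
    if ( count > pool->end - pool->next )
        return -ENOMEM;

    *first_gpfn = pool->next;
    pool->next += count;

    return 0;
}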