From nobody Sat May 18 23:55:22 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1713860769602431.1717567892931; Tue, 23 Apr 2024 01:26:09 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.710382.1109571 (Exim 4.92) (envelope-from ) id 1rzBTJ-0002oP-LW; Tue, 23 Apr 2024 08:25:49 +0000 Received: by outflank-mailman (output) from mailman id 710382.1109571; Tue, 23 Apr 2024 08:25:49 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTJ-0002nn-IW; Tue, 23 Apr 2024 08:25:49 +0000 Received: by outflank-mailman (input) for mailman id 710382; Tue, 23 Apr 2024 08:25:48 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTI-0002TX-6O for xen-devel@lists.xenproject.org; Tue, 23 Apr 2024 08:25:48 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 14226acf-014b-11ef-b4bb-af5377834399; Tue, 23 Apr 2024 10:25:45 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 182841063; Tue, 23 Apr 2024 01:26:12 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com 
(Postfix) with ESMTPSA id 348733F64C; Tue, 23 Apr 2024 01:25:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 1/7] xen/arm: Lookup bootinfo shm bank during the mapping
Date: Tue, 23 Apr 2024 09:25:26 +0100
Message-Id: <20240423082532.776623-2-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>
References: <20240423082532.776623-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The current static shared memory code uses the bootinfo banks when it needs to find the number of borrowers, so every time assign_shared_memory() is called, the bank is searched in the bootinfo.shmem structure. There is nothing wrong with that, but the bank can also be used to retrieve the start address and size, and to pass fewer arguments to assign_shared_memory(). When the information is retrieved from the bootinfo bank, it is also possible to move the alignment checks to process_shm_node(), in the early stages.

So create a new function, find_shm(), which takes a 'struct membanks' structure and the shared memory ID and looks for a bank with a matching ID; take the host physical address and size from the bank; pass the bank to assign_shared_memory(), removing the now unnecessary arguments; and finally remove the acquire_nr_borrower_domain() function, since the information can now be extracted from the passed bank.
Move the "xen,shm-id" parsing earlier in process_shm() so that it bails out quickly in case of (unlikely) errors and, as said above, move the alignment checks to process_shm_node(). A drawback of this change is that the bootinfo banks are now used also when the bank doesn't need to be allocated; however, it will be convenient later to use the bank as an argument for assign_shared_memory() when dealing with the use case where the host physical address is not supplied by the user.

Signed-off-by: Luca Fancellu
---
 xen/arch/arm/static-shmem.c | 105 ++++++++++++++++++++----------------
 1 file changed, 58 insertions(+), 47 deletions(-)

diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index 09f474ec6050..f6cf74e58a83 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -19,29 +19,24 @@ static void __init __maybe_unused build_assertions(void)
                   offsetof(struct shared_meminfo, bank)));
 }
 
-static int __init acquire_nr_borrower_domain(struct domain *d,
-                                             paddr_t pbase, paddr_t psize,
-                                             unsigned long *nr_borrowers)
+static const struct membank __init *find_shm(const struct membanks *shmem,
+                                             const char *shm_id)
 {
-    const struct membanks *shmem = bootinfo_get_shmem();
     unsigned int bank;
 
-    /* Iterate reserved memory to find requested shm bank.
      */
+    BUG_ON(!shmem || !shm_id);
+
     for ( bank = 0 ; bank < shmem->nr_banks; bank++ )
     {
-        paddr_t bank_start = shmem->bank[bank].start;
-        paddr_t bank_size = shmem->bank[bank].size;
-
-        if ( (pbase == bank_start) && (psize == bank_size) )
+        if ( strncmp(shm_id, shmem->bank[bank].shmem_extra->shm_id,
+                     MAX_SHM_ID_LENGTH) == 0 )
             break;
     }
 
     if ( bank == shmem->nr_banks )
-        return -ENOENT;
-
-    *nr_borrowers = shmem->bank[bank].shmem_extra->nr_shm_borrowers;
+        return NULL;
 
-    return 0;
+    return &shmem->bank[bank];
 }
 
 /*
@@ -103,14 +98,20 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
     return smfn;
 }
 
-static int __init assign_shared_memory(struct domain *d,
-                                       paddr_t pbase, paddr_t psize,
-                                       paddr_t gbase)
+static int __init assign_shared_memory(struct domain *d, paddr_t gbase,
+                                       const struct membank *shm_bank)
 {
     mfn_t smfn;
     int ret = 0;
     unsigned long nr_pages, nr_borrowers, i;
     struct page_info *page;
+    paddr_t pbase, psize;
+
+    BUG_ON(!shm_bank || !shm_bank->shmem_extra);
+
+    pbase = shm_bank->start;
+    psize = shm_bank->size;
+    nr_borrowers = shm_bank->shmem_extra->nr_shm_borrowers;
 
     printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
            d, pbase, pbase + psize);
@@ -135,14 +136,6 @@ static int __init assign_shared_memory(struct domain *d,
         }
     }
 
-    /*
-     * Get the right amount of references per page, which is the number of
-     * borrower domains.
-     */
-    ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers);
-    if ( ret )
-        return ret;
-
     /*
      * Instead of letting borrower domain get a page ref, we add as many
      * additional reference as the number of borrowers when the owner
@@ -199,6 +192,7 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
 
     dt_for_each_child_node(node, shm_node)
     {
+        const struct membank *boot_shm_bank;
         const struct dt_property *prop;
         const __be32 *cells;
         uint32_t addr_cells, size_cells;
@@ -212,6 +206,23 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
             continue;
 
+        if ( dt_property_read_string(shm_node, "xen,shm-id", &shm_id) )
+        {
+            printk("%pd: invalid \"xen,shm-id\" property", d);
+            return -EINVAL;
+        }
+        BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH));
+
+        boot_shm_bank = find_shm(bootinfo_get_shmem(), shm_id);
+        if ( !boot_shm_bank )
+        {
+            printk("%pd: static shared memory bank not found: '%s'", d, shm_id);
+            return -ENOENT;
+        }
+
+        pbase = boot_shm_bank->start;
+        psize = boot_shm_bank->size;
+
         /*
          * xen,shared-mem = ;
          * TODO: pbase is optional.
@@ -221,20 +232,7 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
         BUG_ON(!prop);
         cells = (const __be32 *)prop->value;
-        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_paddr(cells, size_cells);
-        if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
-        {
-            printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
-                   d, pbase, gbase);
-            return -EINVAL;
-        }
-        if ( !IS_ALIGNED(psize, PAGE_SIZE) )
-        {
-            printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n",
-                   d, psize);
-            return -EINVAL;
-        }
+        gbase = dt_read_paddr(cells + addr_cells, addr_cells);
 
         for ( i = 0; i < PFN_DOWN(psize); i++ )
             if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
@@ -251,13 +249,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
             owner_dom_io = false;
 
-        if ( dt_property_read_string(shm_node, "xen,shm-id", &shm_id) )
-        {
-            printk("%pd: invalid \"xen,shm-id\" property", d);
-            return -EINVAL;
-        }
-        BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH));
-
         /*
          * DOMID_IO is a fake domain and is not described in the Device-Tree.
          * Therefore when the owner of the shared region is DOMID_IO, we will
@@ -270,8 +261,8 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
              * We found the first borrower of the region, the owner was not
              * specified, so they should be assigned to dom_io.
              */
-            ret = assign_shared_memory(owner_dom_io ? dom_io : d,
-                                       pbase, psize, gbase);
+            ret = assign_shared_memory(owner_dom_io ?
dom_io : d, gbase,
+                                       boot_shm_bank);
             if ( ret )
                 return ret;
         }
@@ -440,6 +431,26 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
     device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr);
     size = dt_next_cell(size_cells, &cell);
 
+    if ( !IS_ALIGNED(paddr, PAGE_SIZE) )
+    {
+        printk("fdt: physical address 0x%"PRIpaddr" is not suitably aligned.\n",
+               paddr);
+        return -EINVAL;
+    }
+
+    if ( !IS_ALIGNED(gaddr, PAGE_SIZE) )
+    {
+        printk("fdt: guest address 0x%"PRIpaddr" is not suitably aligned.\n",
+               gaddr);
+        return -EINVAL;
+    }
+
+    if ( !IS_ALIGNED(size, PAGE_SIZE) )
+    {
+        printk("fdt: size 0x%"PRIpaddr" is not suitably aligned\n", size);
+        return -EINVAL;
+    }
+
     if ( !size )
     {
         printk("fdt: the size for static shared memory region can not be zero\n");
-- 
2.34.1

From nobody Sat May 18 23:55:22 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 2/7] xen/arm: Wrap shared memory mapping code in one function
Date: Tue, 23 Apr 2024 09:25:27 +0100
Message-Id: <20240423082532.776623-3-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>
References: <20240423082532.776623-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Wrap the code and logic that calls assign_shared_memory() and map_regions_p2mt() into a new function
'handle_shared_mem_bank'; it will become useful later, when the code will allow the user not to pass the host physical address.

Signed-off-by: Luca Fancellu
---
 xen/arch/arm/static-shmem.c | 71 +++++++++++++++++++++++--------------
 1 file changed, 45 insertions(+), 26 deletions(-)

diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index f6cf74e58a83..24e40495a481 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -185,6 +185,47 @@ append_shm_bank_to_domain(struct shared_meminfo *kinfo_shm_mem, paddr_t start,
     return 0;
 }
 
+static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase,
+                                         bool owner_dom_io,
+                                         const char *role_str,
+                                         const struct membank *shm_bank)
+{
+    paddr_t pbase, psize;
+    int ret;
+
+    BUG_ON(!shm_bank);
+
+    pbase = shm_bank->start;
+    psize = shm_bank->size;
+    /*
+     * DOMID_IO is a fake domain and is not described in the Device-Tree.
+     * Therefore when the owner of the shared region is DOMID_IO, we will
+     * only find the borrowers.
+     */
+    if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
+         (!owner_dom_io && strcmp(role_str, "owner") == 0) )
+    {
+        /*
+         * We found the first borrower of the region, the owner was not
+         * specified, so they should be assigned to dom_io.
+         */
+        ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase, shm_bank);
+        if ( ret )
+            return ret;
+    }
+
+    if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) )
+    {
+        /* Set up P2M foreign mapping for borrower domain.
         */
+        ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize),
+                               _mfn(PFN_UP(pbase)), p2m_map_foreign_rw);
+        if ( ret )
+            return ret;
+    }
+
+    return 0;
+}
+
 int __init process_shm(struct domain *d, struct kernel_info *kinfo,
                        const struct dt_device_node *node)
 {
@@ -249,32 +290,10 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
             owner_dom_io = false;
 
-        /*
-         * DOMID_IO is a fake domain and is not described in the Device-Tree.
-         * Therefore when the owner of the shared region is DOMID_IO, we will
-         * only find the borrowers.
-         */
-        if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
-             (!owner_dom_io && strcmp(role_str, "owner") == 0) )
-        {
-            /*
-             * We found the first borrower of the region, the owner was not
-             * specified, so they should be assigned to dom_io.
-             */
-            ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase,
-                                       boot_shm_bank);
-            if ( ret )
-                return ret;
-        }
-
-        if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) )
-        {
-            /* Set up P2M foreign mapping for borrower domain.
             */
-            ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize),
-                                   _mfn(PFN_UP(pbase)), p2m_map_foreign_rw);
-            if ( ret )
-                return ret;
-        }
+        ret = handle_shared_mem_bank(d, gbase, owner_dom_io, role_str,
+                                     boot_shm_bank);
+        if ( ret )
+            return ret;
 
         /*
          * Record static shared memory region info for later setting
-- 
2.34.1

From nobody Sat May 18 23:55:22 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 3/7] xen/p2m: put reference for superpage
Date: Tue, 23 Apr 2024 09:25:28 +0100
Message-Id: <20240423082532.776623-4-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>
References: <20240423082532.776623-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Penny Zheng

We are doing foreign memory mapping for static shared memory, and there is a great possibility that it could be super mapped. But today, p2m_put_l3_page() cannot handle superpages.

This commit implements a new function, p2m_put_superpage(), to handle superpages, specifically to help put the extra references taken on foreign superpages.
Signed-off-by: Penny Zheng
Signed-off-by: Luca Fancellu
---
v1:
 - patch from https://patchwork.kernel.org/project/xen-devel/patch/20231206090623.1932275-9-Penny.Zheng@arm.com/
---
 xen/arch/arm/mmu/p2m.c | 58 +++++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c
index 41fcca011cf4..479a80fbd4cf 100644
--- a/xen/arch/arm/mmu/p2m.c
+++ b/xen/arch/arm/mmu/p2m.c
@@ -753,17 +753,9 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
     return rc;
 }
 
-/*
- * Put any references on the single 4K page referenced by pte.
- * TODO: Handle superpages, for now we only take special references for leaf
- * pages (specifically foreign ones, which can't be super mapped today).
- */
-static void p2m_put_l3_page(const lpae_t pte)
+/* Put any references on the single 4K page referenced by mfn. */
+static void p2m_put_l3_page(mfn_t mfn, unsigned type)
 {
-    mfn_t mfn = lpae_get_mfn(pte);
-
-    ASSERT(p2m_is_valid(pte));
-
     /*
      * TODO: Handle other p2m types
      *
@@ -771,16 +763,53 @@ static void p2m_put_l3_page(const lpae_t pte)
      * flush the TLBs if the page is reallocated before the end of
      * this loop.
      */
-    if ( p2m_is_foreign(pte.p2m.type) )
+    if ( p2m_is_foreign(type) )
     {
         ASSERT(mfn_valid(mfn));
         put_page(mfn_to_page(mfn));
     }
     /* Detect the xenheap page and mark the stored GFN as invalid. */
-    else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) )
+    else if ( p2m_is_ram(type) && is_xen_heap_mfn(mfn) )
         page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN);
 }
 
+/* Put any references on the superpage referenced by mfn.
 */
+static void p2m_put_superpage(mfn_t mfn, unsigned int next_level, unsigned type)
+{
+    unsigned int i;
+    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
+
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+    {
+        if ( next_level == 3 )
+            p2m_put_l3_page(mfn, type);
+        else
+            p2m_put_superpage(mfn, next_level + 1, type);
+
+        mfn = mfn_add(mfn, 1 << level_order);
+    }
+}
+
+/* Put any references on the page referenced by pte. */
+static void p2m_put_page(const lpae_t pte, unsigned int level)
+{
+    mfn_t mfn = lpae_get_mfn(pte);
+
+    ASSERT(p2m_is_valid(pte));
+
+    /*
+     * We are either having a first level 1G superpage or a
+     * second level 2M superpage.
+     */
+    if ( p2m_is_superpage(pte, level) )
+        return p2m_put_superpage(mfn, level + 1, pte.p2m.type);
+    else
+    {
+        ASSERT(level == 3);
+        return p2m_put_l3_page(mfn, pte.p2m.type);
+    }
+}
+
 /* Free lpae sub-tree behind an entry */
 static void p2m_free_entry(struct p2m_domain *p2m,
                            lpae_t entry, unsigned int level)
@@ -809,9 +838,8 @@ static void p2m_free_entry(struct p2m_domain *p2m,
 #endif
 
         p2m->stats.mappings[level]--;
-        /* Nothing to do if the entry is a super-page.
         */
-        if ( level == 3 )
-            p2m_put_l3_page(entry);
+        p2m_put_page(entry, level);
+
         return;
     }
 
-- 
2.34.1

From nobody Sat May 18 23:55:22 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 4/7] xen/arm: Parse xen,shared-mem when host phys address is not provided
Date: Tue, 23 Apr 2024 09:25:29 +0100
Message-Id: <20240423082532.776623-5-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>
References: <20240423082532.776623-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Handle the parsing of the 'xen,shared-mem' property when the host physical address is not provided. This commit introduces the parsing logic, but the functionality is not implemented yet and will be part of future commits.

Rework the logic inside process_shm_node() to check the shm_id before doing the other checks, because it simplifies the logic itself; also add more comments explaining it.

Now, when the host physical address is not provided, the value INVALID_PADDR is chosen to signal this condition and is stored as the start of the bank. Because of that change, early_print_info_shmem() and init_sharedmem_pages() are also modified so that they do not handle banks whose start equals INVALID_PADDR.
Another change is made inside meminfo_overlap_check() to skip banks whose start address is INVALID_PADDR. That function is used to check banks coming from reserved memory and ACPI, and it is unlikely for those banks to have INVALID_PADDR as their start address, so the change is safe.

Signed-off-by: Luca Fancellu
---
 xen/arch/arm/setup.c        |   3 +-
 xen/arch/arm/static-shmem.c | 129 +++++++++++++++++++++++++-----------
 2 files changed, 93 insertions(+), 39 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index d242674381d4..f15d40a85a5f 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -297,7 +297,8 @@ static bool __init meminfo_overlap_check(const struct membanks *mem,
         bank_start = mem->bank[i].start;
         bank_end = bank_start + mem->bank[i].size;
 
-        if ( region_end <= bank_start || region_start >= bank_end )
+        if ( INVALID_PADDR == bank_start || region_end <= bank_start ||
+             region_start >= bank_end )
             continue;
         else
         {
diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index 24e40495a481..1c03bb7f1882 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -264,6 +264,12 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         pbase = boot_shm_bank->start;
         psize = boot_shm_bank->size;
 
+        if ( INVALID_PADDR == pbase )
+        {
+            printk("%pd: host physical address must be chosen by users at the moment.", d);
+            return -EINVAL;
+        }
+
         /*
          * xen,shared-mem = ;
          * TODO: pbase is optional.
@@ -382,7 +388,8 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
 {
     const struct fdt_property *prop, *prop_id, *prop_role;
     const __be32 *cell;
-    paddr_t paddr, gaddr, size, end;
+    paddr_t paddr = INVALID_PADDR;
+    paddr_t gaddr, size, end;
     struct membanks *mem = bootinfo_get_shmem();
     struct shmem_membank_extra *shmem_extra = bootinfo_get_shmem_extra();
     unsigned int i;
@@ -437,24 +444,37 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
     if ( !prop )
         return -ENOENT;
 
+    cell = (const __be32 *)prop->data;
     if ( len != dt_cells_to_size(address_cells + size_cells + address_cells) )
     {
-        if ( len == dt_cells_to_size(size_cells + address_cells) )
-            printk("fdt: host physical address must be chosen by users at the moment.\n");
-
-        printk("fdt: invalid `xen,shared-mem` property.\n");
-        return -EINVAL;
+        if ( len == dt_cells_to_size(address_cells + size_cells) )
+            device_tree_get_reg(&cell, address_cells, size_cells, &gaddr,
+                                &size);
+        else
+        {
+            printk("fdt: invalid `xen,shared-mem` property.\n");
+            return -EINVAL;
+        }
     }
+    else
+    {
+        device_tree_get_reg(&cell, address_cells, address_cells, &paddr,
+                            &gaddr);
+        size = dt_next_cell(size_cells, &cell);
 
-    cell = (const __be32 *)prop->data;
-    device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr);
-    size = dt_next_cell(size_cells, &cell);
+        if ( !IS_ALIGNED(paddr, PAGE_SIZE) )
+        {
+            printk("fdt: physical address 0x%"PRIpaddr" is not suitably aligned.\n",
+                   paddr);
+            return -EINVAL;
+        }
 
-    if ( !IS_ALIGNED(paddr, PAGE_SIZE) )
-    {
-        printk("fdt: physical address 0x%"PRIpaddr" is not suitably aligned.\n",
-               paddr);
-        return -EINVAL;
+        end = paddr + size;
+        if ( end <= paddr )
+        {
+            printk("fdt: static shared memory region %s overflow\n", shm_id);
+            return -EINVAL;
+        }
     }
 
     if ( !IS_ALIGNED(gaddr, PAGE_SIZE) )
@@ -476,41 +496,69 @@ int __init process_shm_node(const void *fdt, int node, uint32_t
address_cells,
         return -EINVAL;
     }
 
-    end = paddr + size;
-    if ( end <= paddr )
-    {
-        printk("fdt: static shared memory region %s overflow\n", shm_id);
-        return -EINVAL;
-    }
-
     for ( i = 0; i < mem->nr_banks; i++ )
     {
         /*
          * Meet the following check:
+         * when host address is provided:
          * 1) The shm ID matches and the region exactly match
          * 2) The shm ID doesn't match and the region doesn't overlap
          *    with an existing one
+         * when host address is not provided:
+         * 1) The shm ID matches and the region size exactly match
          */
-        if ( paddr == mem->bank[i].start && size == mem->bank[i].size )
+        bool paddr_assigned = INVALID_PADDR == paddr;
+        bool shm_id_match = strncmp(shm_id, shmem_extra[i].shm_id,
+                                    MAX_SHM_ID_LENGTH) == 0;
+        if ( shm_id_match )
         {
-            if ( strncmp(shm_id, shmem_extra[i].shm_id,
-                         MAX_SHM_ID_LENGTH) == 0 )
+            /*
+             * Regions have same shm_id (cases):
+             * 1) physical host address is supplied:
+             *    - OK: paddr is equal and size is equal (same region)
+             *    - Fail: paddr doesn't match or size doesn't match (there
+             *      cannot exists two shmem regions with same shm_id)
+             * 2) physical host address is NOT supplied:
+             *    - OK: size is equal (same region)
+             *    - Fail: size is not equal (same shm_id must identify only one
+             *      region, there can't be two different regions with same
+             *      shm_id)
+             */
+            bool start_match = paddr_assigned ?
(paddr == mem->bank[i].start) :
+                               true;
+
+            if ( start_match && size == mem->bank[i].size )
                 break;
             else
             {
-                printk("fdt: xen,shm-id %s does not match for all the nodes using the same region.\n",
+                printk("fdt: different shared memory region could not share the same shm ID %s\n",
                        shm_id);
                 return -EINVAL;
             }
         }
-        else if ( strncmp(shm_id, shmem_extra[i].shm_id,
-                          MAX_SHM_ID_LENGTH) != 0 )
-            continue;
         else
         {
-            printk("fdt: different shared memory region could not share the same shm ID %s\n",
-                   shm_id);
-            return -EINVAL;
+            /*
+             * Regions have different shm_id (cases):
+             * 1) physical host address is supplied:
+             *    - OK: paddr different, or size different (case where paddr
+             *      is equal but psize is different are wrong, but they
+             *      are handled later when checking for overlapping)
+             *    - Fail: paddr equal and size equal (the same region can't be
+             *      identified with different shm_id)
+             * 2) physical host address is NOT supplied:
+             *    - OK: Both have different shm_id so even with same size they
+             *      can exists
+             */
+            if ( !paddr_assigned || paddr != mem->bank[i].start ||
+                 size != mem->bank[i].size )
+                continue;
+            else
+            {
+                printk("fdt: xen,shm-id %s does not match for all the nodes using the same region.\n",
+                       shm_id);
+                return -EINVAL;
+            }
         }
     }
 
@@ -518,7 +566,8 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
     {
         if (i < mem->max_banks)
         {
-            if ( check_reserved_regions_overlap(paddr, size) )
+            if ( (paddr != INVALID_PADDR) &&
+                 check_reserved_regions_overlap(paddr, size) )
                 return -EINVAL;
 
             /* Static shared memory shall be reserved from any other use.
= */ @@ -588,13 +637,16 @@ void __init early_print_info_shmem(void) { const struct membanks *shmem =3D bootinfo_get_shmem(); unsigned int bank; + unsigned int printed =3D 0; =20 for ( bank =3D 0; bank < shmem->nr_banks; bank++ ) - { - printk(" SHMEM[%u]: %"PRIpaddr" - %"PRIpaddr"\n", bank, - shmem->bank[bank].start, - shmem->bank[bank].start + shmem->bank[bank].size - 1); - } + if ( shmem->bank[bank].start !=3D INVALID_PADDR ) + { + printk(" SHMEM[%u]: %"PRIpaddr" - %"PRIpaddr"\n", printed, + shmem->bank[bank].start, + shmem->bank[bank].start + shmem->bank[bank].size - 1); + printed++; + } } =20 void __init init_sharedmem_pages(void) @@ -603,7 +655,8 @@ void __init init_sharedmem_pages(void) unsigned int bank; =20 for ( bank =3D 0 ; bank < shmem->nr_banks; bank++ ) - init_staticmem_bank(&shmem->bank[bank]); + if ( shmem->bank[bank].start !=3D INVALID_PADDR ) + init_staticmem_bank(&shmem->bank[bank]); } =20 int __init remove_shm_from_rangeset(const struct kernel_info *kinfo, --=20 2.34.1 From nobody Sat May 18 23:55:22 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 17138607720191.8910266269710974; Tue, 23 Apr 2024 01:26:12 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.710385.1109608 (Exim 4.92) (envelope-from ) id 1rzBTM-0003lZ-Mi; Tue, 23 Apr 2024 08:25:52 +0000 Received: by outflank-mailman (output) from mailman id 710385.1109608; Tue, 23 Apr 2024 08:25:52 
+0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTM-0003lO-IL; Tue, 23 Apr 2024 08:25:52 +0000 Received: by outflank-mailman (input) for mailman id 710385; Tue, 23 Apr 2024 08:25:50 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTK-0002TX-RL for xen-devel@lists.xenproject.org; Tue, 23 Apr 2024 08:25:50 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 16dca818-014b-11ef-b4bb-af5377834399; Tue, 23 Apr 2024 10:25:49 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D09371063; Tue, 23 Apr 2024 01:26:16 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id ED13C3F64C; Tue, 23 Apr 2024 01:25:47 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 16dca818-014b-11ef-b4bb-af5377834399 From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel , Volodymyr Babchuk Subject: [PATCH 5/7] xen/arm: Rework heap page allocation outside allocate_bank_memory Date: Tue, 23 Apr 2024 09:25:30 +0100 Message-Id: <20240423082532.776623-6-luca.fancellu@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com> References: <20240423082532.776623-1-luca.fancellu@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 
1713860774005100008 Content-Type: text/plain; charset="utf-8" The function allocate_bank_memory allocates pages from the heap and map them to the guest using guest_physmap_add_page. As a preparation work to support static shared memory bank when the host physical address is not provided, Xen needs to allocate memory from the heap, so rework allocate_bank_memory moving out the page allocation in a new function called allocate_domheap_memory. The function allocate_domheap_memory takes a callback function and a pointer to some extra information passed to the callback and this function will be called for every page allocated, until a defined size is reached. In order to keep allocate_bank_memory functionality, the callback passed to allocate_domheap_memory is a wrapper for guest_physmap_add_page. Let allocate_domheap_memory be externally visible, in order to use it in the future from the static shared memory module. Take the opportunity to change the signature of allocate_bank_memory and remove the 'struct domain' parameter, which can be retrieved from 'struct kernel_info'. No functional changes is intended. 
Signed-off-by: Luca Fancellu
---
 xen/arch/arm/dom0less-build.c           |  4 +-
 xen/arch/arm/domain_build.c             | 77 +++++++++++++++++--------
 xen/arch/arm/include/asm/domain_build.h |  9 ++-
 3 files changed, 62 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
index 74f053c242f4..20ddf6f8f250 100644
--- a/xen/arch/arm/dom0less-build.c
+++ b/xen/arch/arm/dom0less-build.c
@@ -60,12 +60,12 @@ static void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
 
     mem->nr_banks = 0;
     bank_size = MIN(GUEST_RAM0_SIZE, kinfo->unassigned_mem);
-    if ( !allocate_bank_memory(d, kinfo, gaddr_to_gfn(GUEST_RAM0_BASE),
+    if ( !allocate_bank_memory(kinfo, gaddr_to_gfn(GUEST_RAM0_BASE),
                                bank_size) )
         goto fail;
 
     bank_size = MIN(GUEST_RAM1_SIZE, kinfo->unassigned_mem);
-    if ( !allocate_bank_memory(d, kinfo, gaddr_to_gfn(GUEST_RAM1_BASE),
+    if ( !allocate_bank_memory(kinfo, gaddr_to_gfn(GUEST_RAM1_BASE),
                                bank_size) )
         goto fail;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 0784e4c5e315..148db06b8ca3 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -417,26 +417,13 @@ static void __init allocate_memory_11(struct domain *d,
 }
 
 #ifdef CONFIG_DOM0LESS_BOOT
-bool __init allocate_bank_memory(struct domain *d, struct kernel_info *kinfo,
-                                 gfn_t sgfn, paddr_t tot_size)
+bool __init allocate_domheap_memory(struct domain *d, paddr_t tot_size,
+                                    alloc_domheap_mem_cb cb, void *extra)
 {
-    struct membanks *mem = kernel_info_get_mem(kinfo);
-    int res;
+    unsigned int max_order = UINT_MAX;
     struct page_info *pg;
-    struct membank *bank;
-    unsigned int max_order = ~0;
 
-    /*
-     * allocate_bank_memory can be called with a tot_size of zero for
-     * the second memory bank. It is not an error and we can safely
-     * avoid creating a zero-size memory bank.
-     */
-    if ( tot_size == 0 )
-        return true;
-
-    bank = &mem->bank[mem->nr_banks];
-    bank->start = gfn_to_gaddr(sgfn);
-    bank->size = tot_size;
+    BUG_ON(!cb);
 
     while ( tot_size > 0 )
     {
@@ -463,17 +450,61 @@ bool __init allocate_bank_memory(struct domain *d, struct kernel_info *kinfo,
             continue;
         }
 
-        res = guest_physmap_add_page(d, sgfn, page_to_mfn(pg), order);
-        if ( res )
-        {
-            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
+        if ( cb(d, pg, order, extra) )
             return false;
-        }
 
-        sgfn = gfn_add(sgfn, 1UL << order);
         tot_size -= (1ULL << (PAGE_SHIFT + order));
     }
 
+    return true;
+}
+
+static int __init guest_map_pages(struct domain *d, struct page_info *pg,
+                                  unsigned int order, void *extra)
+{
+    gfn_t *sgfn = (gfn_t *)extra;
+    int res;
+
+    BUG_ON(!sgfn);
+    res = guest_physmap_add_page(d, *sgfn, page_to_mfn(pg), order);
+    if ( res )
+    {
+        dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
+        return res;
+    }
+
+    *sgfn = gfn_add(*sgfn, 1UL << order);
+
+    return 0;
+}
+
+bool __init allocate_bank_memory(struct kernel_info *kinfo, gfn_t sgfn,
+                                 paddr_t tot_size)
+{
+    struct membanks *mem = kernel_info_get_mem(kinfo);
+    struct domain *d = kinfo->d;
+    struct membank *bank;
+
+    /*
+     * allocate_bank_memory can be called with a tot_size of zero for
+     * the second memory bank. It is not an error and we can safely
+     * avoid creating a zero-size memory bank.
+     */
+    if ( tot_size == 0 )
+        return true;
+
+    bank = &mem->bank[mem->nr_banks];
+    bank->start = gfn_to_gaddr(sgfn);
+    bank->size = tot_size;
+
+    /*
+     * Allocate pages from the heap until tot_size and map them to the guest
+     * using guest_map_pages, passing the starting gfn as extra parameter for
+     * the map operation.
+     */
+    if ( !allocate_domheap_memory(d, tot_size, guest_map_pages, &sgfn) )
+        return false;
+
     mem->nr_banks++;
     kinfo->unassigned_mem -= bank->size;
 
diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h
index 45936212ca21..9eeb5839f6ed 100644
--- a/xen/arch/arm/include/asm/domain_build.h
+++ b/xen/arch/arm/include/asm/domain_build.h
@@ -5,9 +5,12 @@
 #include
 
 typedef __be32 gic_interrupt_t[3];
-
-bool allocate_bank_memory(struct domain *d, struct kernel_info *kinfo,
-                          gfn_t sgfn, paddr_t tot_size);
+typedef int (*alloc_domheap_mem_cb)(struct domain *d, struct page_info *pg,
+                                    unsigned int order, void *extra);
+bool allocate_domheap_memory(struct domain *d, paddr_t tot_size,
+                             alloc_domheap_mem_cb cb, void *extra);
+bool allocate_bank_memory(struct kernel_info *kinfo, gfn_t sgfn,
+                          paddr_t tot_size);
 int construct_domain(struct domain *d, struct kernel_info *kinfo);
 int domain_fdt_begin_node(void *fdt, const char *name, uint64_t unit);
 int make_chosen_node(const struct kernel_info *kinfo);
-- 
2.34.1

From nobody Sat May 18 23:55:22 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 6/7] xen/arm: Implement the logic for static shared memory from Xen heap
Date: Tue, 23 Apr 2024 09:25:31 +0100
Message-Id: <20240423082532.776623-7-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>

This commit implements the logic to take the static shared memory banks
from the Xen heap instead of having the host physical address passed by
the user.

When the host physical address is not supplied, the physical memory is
taken from the Xen heap using allocate_domheap_memory. The allocation
needs to occur when the first DT node referring to the bank is handled,
and the allocated banks need to be saved somewhere, so introduce the
'shm_heap_banks' static global variable of type 'struct meminfo' that
holds the banks allocated from the heap. Its .shmem_extra fields point
to the corresponding bootinfo shared memory banks' .shmem_extra space,
so that there is no further allocation of memory and every bank in
shm_heap_banks can be safely identified by its shm_id.

A search in 'shm_heap_banks' reveals whether the banks were already
allocated when the host address is not passed. If they were not, the
callback given to allocate_domheap_memory stores the banks in the
structure and maps them to the current domain; to do that, some changes
to acquire_shared_memory_bank are made to let it differentiate whether
the bank is from the heap and, if it is, call assign_pages for every
bank. When the banks are already allocated, handle_shared_mem_bank is
called for every bank allocated with the corresponding shm_id and the
mappings are done.
Signed-off-by: Luca Fancellu
---
 xen/arch/arm/static-shmem.c | 193 +++++++++++++++++++++++++++++-------
 1 file changed, 157 insertions(+), 36 deletions(-)

diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index 1c03bb7f1882..10396ed52ff1 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -9,6 +9,18 @@
 #include
 #include
 
+typedef struct {
+    struct domain *d;
+    paddr_t gbase;
+    bool owner_dom_io;
+    const char *role_str;
+    struct shmem_membank_extra *bank_extra_info;
+} alloc_heap_pages_cb_extra;
+
+static struct meminfo __initdata shm_heap_banks = {
+    .common.max_banks = NR_MEM_BANKS
+};
+
 static void __init __maybe_unused build_assertions(void)
 {
     /*
@@ -66,7 +78,8 @@ static bool __init is_shm_allocated_to_domio(paddr_t pbase)
 }
 
 static mfn_t __init acquire_shared_memory_bank(struct domain *d,
-                                               paddr_t pbase, paddr_t psize)
+                                               paddr_t pbase, paddr_t psize,
+                                               bool bank_from_heap)
 {
     mfn_t smfn;
     unsigned long nr_pfns;
@@ -86,19 +99,31 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
     d->max_pages += nr_pfns;
 
     smfn = maddr_to_mfn(pbase);
-    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
+    if ( bank_from_heap )
+        /*
+         * When host address is not provided, static shared memory is
+         * allocated from heap and shall be assigned to owner domain.
+         */
+        res = assign_pages(maddr_to_page(pbase), nr_pfns, d, 0);
+    else
+        res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
+
     if ( res )
     {
-        printk(XENLOG_ERR
-               "%pd: failed to acquire static memory: %d.\n", d, res);
-        d->max_pages -= nr_pfns;
-        return INVALID_MFN;
+        printk(XENLOG_ERR "%pd: failed to %s static memory: %d.\n", d,
+               bank_from_heap ? "assign" : "acquire", res);
+        goto fail;
     }
 
     return smfn;
+
+ fail:
+    d->max_pages -= nr_pfns;
+    return INVALID_MFN;
 }
 
 static int __init assign_shared_memory(struct domain *d, paddr_t gbase,
+                                       bool bank_from_heap,
                                        const struct membank *shm_bank)
 {
     mfn_t smfn;
@@ -113,10 +138,7 @@ static int __init assign_shared_memory(struct domain *d, paddr_t gbase,
     psize = shm_bank->size;
     nr_borrowers = shm_bank->shmem_extra->nr_shm_borrowers;
 
-    printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
-           d, pbase, pbase + psize);
-
-    smfn = acquire_shared_memory_bank(d, pbase, psize);
+    smfn = acquire_shared_memory_bank(d, pbase, psize, bank_from_heap);
     if ( mfn_eq(smfn, INVALID_MFN) )
         return -EINVAL;
 
@@ -188,6 +210,7 @@ append_shm_bank_to_domain(struct shared_meminfo *kinfo_shm_mem, paddr_t start,
 static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase,
                                          bool owner_dom_io,
                                          const char *role_str,
+                                         bool bank_from_heap,
                                          const struct membank *shm_bank)
 {
     paddr_t pbase, psize;
@@ -197,6 +220,10 @@ static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase,
 
     pbase = shm_bank->start;
     psize = shm_bank->size;
+
+    printk("%pd: SHMEM map from %s: mphys 0x%"PRIpaddr" -> gphys 0x%"PRIpaddr", size 0x%"PRIpaddr"\n",
+           d, bank_from_heap ? "Xen heap" : "Host", pbase, gbase, psize);
+
     /*
      * DOMID_IO is a fake domain and is not described in the Device-Tree.
      * Therefore when the owner of the shared region is DOMID_IO, we will
@@ -209,7 +236,8 @@ static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase,
          * We found the first borrower of the region, the owner was not
          * specified, so they should be assigned to dom_io.
          */
-        ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase, shm_bank);
+        ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase,
+                                   bank_from_heap, shm_bank);
         if ( ret )
             return ret;
     }
@@ -226,6 +254,40 @@ static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase,
     return 0;
 }
 
+static int __init save_map_heap_pages(struct domain *d, struct page_info *pg,
+                                      unsigned int order, void *extra)
+{
+    alloc_heap_pages_cb_extra *b_extra = (alloc_heap_pages_cb_extra *)extra;
+    int idx = shm_heap_banks.common.nr_banks;
+    int ret = -ENOSPC;
+
+    BUG_ON(!b_extra);
+
+    if ( idx < shm_heap_banks.common.max_banks )
+    {
+        shm_heap_banks.bank[idx].start = page_to_maddr(pg);
+        shm_heap_banks.bank[idx].size = (1ULL << (PAGE_SHIFT + order));
+        shm_heap_banks.bank[idx].shmem_extra = b_extra->bank_extra_info;
+        shm_heap_banks.common.nr_banks++;
+
+        ret = handle_shared_mem_bank(b_extra->d, b_extra->gbase,
+                                     b_extra->owner_dom_io, b_extra->role_str,
+                                     true, &shm_heap_banks.bank[idx]);
+        if ( !ret )
+        {
+            /* Increment guest physical address for next mapping */
+            b_extra->gbase += shm_heap_banks.bank[idx].size;
+            ret = 0;
+        }
+    }
+
+    if ( ret )
+        printk("Failed to allocate static shared memory from Xen heap: (%d)\n",
+               ret);
+
+    return ret;
+}
+
 int __init process_shm(struct domain *d, struct kernel_info *kinfo,
                        const struct dt_device_node *node)
 {
@@ -264,42 +326,101 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         pbase = boot_shm_bank->start;
         psize = boot_shm_bank->size;
 
-        if ( INVALID_PADDR == pbase )
-        {
-            printk("%pd: host physical address must be chosen by users at the moment.", d);
-            return -EINVAL;
-        }
+        /*
+         * "role" property is optional and if it is defined explicitly,
+         * then the owner domain is not the default "dom_io" domain.
+         */
+        if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
+            owner_dom_io = false;
 
         /*
-         * xen,shared-mem = ;
-         * TODO: pbase is optional.
+         * xen,shared-mem = <[pbase,] gbase, size>;
+         * pbase is optional.
          */
         addr_cells = dt_n_addr_cells(shm_node);
         size_cells = dt_n_size_cells(shm_node);
         prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
         BUG_ON(!prop);
         cells = (const __be32 *)prop->value;
-        gbase = dt_read_paddr(cells + addr_cells, addr_cells);
 
-        for ( i = 0; i < PFN_DOWN(psize); i++ )
-            if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
-            {
-                printk("%pd: invalid physical address 0x%"PRI_mfn"\n",
-                       d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
-                return -EINVAL;
-            }
+        if ( pbase != INVALID_PADDR )
+        {
+            /* guest phys address is after host phys address */
+            gbase = dt_read_paddr(cells + addr_cells, addr_cells);
+
+            for ( i = 0; i < PFN_DOWN(psize); i++ )
+                if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
+                {
+                    printk("%pd: invalid physical address 0x%"PRI_mfn"\n",
+                           d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
+                    return -EINVAL;
+                }
+
+            /* The host physical address is supplied by the user */
+            ret = handle_shared_mem_bank(d, gbase, owner_dom_io, role_str,
+                                         false, boot_shm_bank);
+            if ( ret )
+                return ret;
+        }
+        else
+        {
+            /*
+             * The host physical address is not supplied by the user, which
+             * means that the banks need to be allocated from the Xen heap;
+             * look into the banks already allocated from the heap.
+             */
+            const struct membank *alloc_bank = find_shm(&shm_heap_banks.common,
+                                                        shm_id);
 
-        /*
-         * "role" property is optional and if it is defined explicitly,
-         * then the owner domain is not the default "dom_io" domain.
-         */
-        if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
-            owner_dom_io = false;
+            /* guest phys address is right at the beginning */
+            gbase = dt_read_paddr(cells, addr_cells);
 
-        ret = handle_shared_mem_bank(d, gbase, owner_dom_io, role_str,
-                                     boot_shm_bank);
-        if ( ret )
-            return ret;
+            if ( !alloc_bank )
+            {
+                alloc_heap_pages_cb_extra cb_arg = { d, gbase, owner_dom_io,
+                    role_str, boot_shm_bank->shmem_extra };
+
+                /* shm_id identified bank is not yet allocated */
+                if ( !allocate_domheap_memory(NULL, psize, save_map_heap_pages,
+                                              &cb_arg) )
+                {
+                    printk(XENLOG_ERR
+                           "Failed to allocate (%"PRIpaddr"MB) pages as static shared memory from heap\n",
+                           psize >> 20);
+                    return -EINVAL;
+                }
+            }
+            else
+            {
+                /* shm_id identified bank is already allocated */
+                const struct membank *end_bank =
+                    &shm_heap_banks.bank[shm_heap_banks.common.nr_banks];
+                paddr_t gbase_bank = gbase;
+
+                /*
+                 * Static shared memory banks that are taken from the Xen heap
+                 * are allocated sequentially in shm_heap_banks, so starting
+                 * from the first bank found identified by shm_id, the code can
+                 * just advance by one bank at a time until it reaches the end
+                 * of the array or it finds another bank NOT identified by
+                 * shm_id.
+                 */
+                for ( ; alloc_bank < end_bank; alloc_bank++ )
+                {
+                    if ( strncmp(shm_id, alloc_bank->shmem_extra->shm_id,
+                                 MAX_SHM_ID_LENGTH) != 0 )
+                        break;
+
+                    ret = handle_shared_mem_bank(d, gbase_bank, owner_dom_io,
                                                 role_str, true, alloc_bank);
+                    if ( ret )
+                        return ret;
+
+                    /* Increment guest physical address for next mapping */
+                    gbase_bank += alloc_bank->size;
+                }
+            }
+        }
 
         /*
          * Record static shared memory region info for later setting
-- 
2.34.1

From nobody Sat May 18 23:55:22 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 7/7] xen/docs: Describe static shared memory when host address is not provided
Date: Tue, 23 Apr 2024 09:25:32 +0100
Message-Id: <20240423082532.776623-8-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>

From: Penny Zheng

This commit describes the new scenario where the host address is not
provided in the "xen,shared-mem" property, and a new example is added to
the page to explain it in detail.

Take the occasion to fix some typos in the page.

Signed-off-by: Penny Zheng
Signed-off-by: Luca Fancellu
Reviewed-by: Michal Orzel
---
v1:
 - patch from https://patchwork.kernel.org/project/xen-devel/patch/20231206090623.1932275-10-Penny.Zheng@arm.com/
   with some changes in the commit message.
---
 docs/misc/arm/device-tree/booting.txt | 52 ++++++++++++++++++++-------
 1 file changed, 39 insertions(+), 13 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index bbd955e9c2f6..ac4bad6fe5e0 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -590,7 +590,7 @@ communication.
     An array takes a physical address, which is the base address of the
     shared memory region in host physical address space, a size, and a guest
     physical address, as the target address of the mapping.
-    e.g. xen,shared-mem = < [host physical address] [guest address] [size] >
+    e.g. xen,shared-mem = < [host physical address] [guest address] [size] >;
 
     It shall also meet the following criteria:
     1) If the SHM ID matches with an existing region, the address range of the
@@ -601,8 +601,8 @@ communication.
     The number of cells for the host address (and size) is the same as the
     guest pseudo-physical address and they are inherited from the parent node.
 
-    Host physical address is optional, when missing Xen decides the location
-    (currently unimplemented).
+    Host physical address is optional, when missing Xen decides the location.
+    e.g. xen,shared-mem = < [guest address] [size] >;
 
 - role (Optional)
 
@@ -629,7 +629,7 @@ chosen {
         role = "owner";
         xen,shm-id = "my-shared-mem-0";
         xen,shared-mem = <0x10000000 0x10000000 0x10000000>;
-    }
+    };
 
     domU1 {
         compatible = "xen,domain";
@@ -640,25 +640,36 @@ chosen {
         vpl011;
 
         /*
-         * shared memory region identified as 0x0(xen,shm-id = <0x0>)
-         * is shared between Dom0 and DomU1.
+         * shared memory region "my-shared-mem-0" is shared
+         * between Dom0 and DomU1.
          */
         domU1-shared-mem@10000000 {
             compatible = "xen,domain-shared-memory-v1";
             role = "borrower";
             xen,shm-id = "my-shared-mem-0";
             xen,shared-mem = <0x10000000 0x50000000 0x10000000>;
-        }
+        };
 
         /*
-         * shared memory region identified as 0x1(xen,shm-id = <0x1>)
-         * is shared between DomU1 and DomU2.
+         * shared memory region "my-shared-mem-1" is shared between
+         * DomU1 and DomU2.
          */
         domU1-shared-mem@50000000 {
             compatible = "xen,domain-shared-memory-v1";
             xen,shm-id = "my-shared-mem-1";
             xen,shared-mem = <0x50000000 0x60000000 0x20000000>;
-        }
+        };
+
+        /*
+         * shared memory region "my-shared-mem-2" is shared between
+         * DomU1 and DomU2.
+         */
+        domU1-shared-mem-2 {
+            compatible = "xen,domain-shared-memory-v1";
+            xen,shm-id = "my-shared-mem-2";
+            role = "owner";
+            xen,shared-mem = <0x80000000 0x20000000>;
+        };
 
         ......
 
@@ -672,14 +683,21 @@ chosen {
         cpus = <1>;
 
         /*
-         * shared memory region identified as 0x1(xen,shm-id = <0x1>)
-         * is shared between domU1 and domU2.
+         * shared memory region "my-shared-mem-1" is shared between
+         * domU1 and domU2.
          */
         domU2-shared-mem@50000000 {
             compatible = "xen,domain-shared-memory-v1";
             xen,shm-id = "my-shared-mem-1";
             xen,shared-mem = <0x50000000 0x70000000 0x20000000>;
-        }
+        };
+
+        domU2-shared-mem-2 {
+            compatible = "xen,domain-shared-memory-v1";
+            xen,shm-id = "my-shared-mem-2";
+            role = "borrower";
+            xen,shared-mem = <0x90000000 0x20000000>;
+        };
 
         ......
     };
@@ -699,3 +717,11 @@ shared between DomU1 and DomU2. It will get mapped at 0x60000000 in DomU1 guest
 physical address space, and at 0x70000000 in DomU2 guest physical address space.
 DomU1 and DomU2 are both the borrower domain, the owner domain is the default
 owner domain DOMID_IO.
+
+For the static shared memory region "my-shared-mem-2", since the host physical
+address is not provided by the user, Xen will automatically allocate 512MB
+from the heap as static shared memory to be shared between DomU1 and DomU2.
+The automatically allocated static shared memory will get mapped at
+0x80000000 in DomU1 guest physical address space, and at 0x90000000 in DomU2
+guest physical address space. DomU1 is explicitly defined as the owner domain,
+and DomU2 is the borrower domain.
-- 
2.34.1