From nobody Fri Nov 22 13:22:33 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel,
 Volodymyr Babchuk
Subject: [PATCH v3 1/7] xen/arm: Lookup bootinfo shm bank during the mapping
Date: Wed, 22 May 2024 08:51:45 +0100
Message-Id: <20240522075151.3373899-2-luca.fancellu@arm.com>
In-Reply-To: <20240522075151.3373899-1-luca.fancellu@arm.com>
References: <20240522075151.3373899-1-luca.fancellu@arm.com>

The current static shared memory code uses the bootinfo banks when it
needs to find the number of borrowers, so every time assign_shared_memory
is called the bank is searched in the bootinfo.shmem structure.

There is nothing wrong with that; however, the bank can also be used to
retrieve the start address and size, and to pass fewer arguments to
assign_shared_memory.
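For context, a static shared memory bank is described to Xen by a device
tree node compatible with "xen,domain-shared-memory-v1". A minimal, purely
illustrative fragment is sketched below; the addresses, size and shm ID are
made-up values and the exact cell counts depend on the parent node's
#address-cells/#size-cells:

    domU1-shared-mem@50000000 {
        compatible = "xen,domain-shared-memory-v1";
        role = "owner";            /* optional: "owner" or "borrower" */
        xen,shm-id = "my-shmem0";  /* ID used to look up the bootinfo bank */
        /* host physical address, guest physical address, size */
        xen,shared-mem = <0x50000000 0x20000000 0x10000>;
    };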
While retrieving the information from the bootinfo bank, it's also
possible to move the alignment checks to process_shm_node, in the early
stages.

So create a new function, find_shm_bank_by_id(), which takes a
'struct shared_meminfo' structure and the shared memory ID and looks for
a bank with a matching ID; take the host physical address and size from
the bank, pass the bank to assign_shared_memory() removing the now
unnecessary arguments, and finally remove the acquire_nr_borrower_domain()
function, since the information can now be extracted from the passed bank.

Move the "xen,shm-id" parsing early in process_shm to bail out quickly in
the (unlikely) case of errors and, as said above, move the alignment
checks to process_shm_node.

The drawback of this change is that the bootinfo bank is now used also
when the bank doesn't need to be allocated; however, it will be convenient
later to use it as an argument for assign_shared_memory when dealing with
the use case where the host physical address is not supplied by the user.

Signed-off-by: Luca Fancellu
Reviewed-by: Michal Orzel
---
v3 changes:
 - switch strncmp with strcmp in find_shm_bank_by_id, fix commit msg typo,
   add R-by Michal.
v2 changes:
 - fix typo commit msg, renamed find_shm() to find_shm_bank_by_id(), swap
   region size check different from zero and size alignment, remove not
   necessary BUG_ON(). (Michal)
---
 xen/arch/arm/static-shmem.c | 100 +++++++++++++++++++-----------------
 1 file changed, 53 insertions(+), 47 deletions(-)

diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index 78881dd1d3f7..0a1c327e90ea 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -19,29 +19,21 @@ static void __init __maybe_unused build_assertions(void)
                  offsetof(struct shared_meminfo, bank)));
 }
 
-static int __init acquire_nr_borrower_domain(struct domain *d,
-                                             paddr_t pbase, paddr_t psize,
-                                             unsigned long *nr_borrowers)
+static const struct membank __init *
+find_shm_bank_by_id(const struct membanks *shmem, const char *shm_id)
 {
-    const struct membanks *shmem = bootinfo_get_shmem();
     unsigned int bank;
 
-    /* Iterate reserved memory to find requested shm bank. */
     for ( bank = 0 ; bank < shmem->nr_banks; bank++ )
     {
-        paddr_t bank_start = shmem->bank[bank].start;
-        paddr_t bank_size = shmem->bank[bank].size;
-
-        if ( (pbase == bank_start) && (psize == bank_size) )
+        if ( strcmp(shm_id, shmem->bank[bank].shmem_extra->shm_id) == 0 )
            break;
    }
 
     if ( bank == shmem->nr_banks )
-        return -ENOENT;
+        return NULL;
 
-    *nr_borrowers = shmem->bank[bank].shmem_extra->nr_shm_borrowers;
-
-    return 0;
+    return &shmem->bank[bank];
 }
 
 /*
@@ -103,14 +95,18 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
     return smfn;
 }
 
-static int __init assign_shared_memory(struct domain *d,
-                                       paddr_t pbase, paddr_t psize,
-                                       paddr_t gbase)
+static int __init assign_shared_memory(struct domain *d, paddr_t gbase,
+                                       const struct membank *shm_bank)
 {
     mfn_t smfn;
     int ret = 0;
     unsigned long nr_pages, nr_borrowers, i;
     struct page_info *page;
+    paddr_t pbase, psize;
+
+    pbase = shm_bank->start;
+    psize = shm_bank->size;
+    nr_borrowers = shm_bank->shmem_extra->nr_shm_borrowers;
 
     printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
            d, pbase, pbase + psize);
@@ -135,14 +131,6 @@ static int __init assign_shared_memory(struct domain *d,
         }
     }
 
-    /*
-     * Get the right amount of references per page, which is the number of
-     * borrower domains.
-     */
-    ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers);
-    if ( ret )
-        return ret;
-
     /*
      * Instead of letting borrower domain get a page ref, we add as many
      * additional reference as the number of borrowers when the owner
@@ -199,6 +187,7 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
 
     dt_for_each_child_node(node, shm_node)
     {
+        const struct membank *boot_shm_bank;
         const struct dt_property *prop;
         const __be32 *cells;
         uint32_t addr_cells, size_cells;
@@ -212,6 +201,23 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
             continue;
 
+        if ( dt_property_read_string(shm_node, "xen,shm-id", &shm_id) )
+        {
+            printk("%pd: invalid \"xen,shm-id\" property", d);
+            return -EINVAL;
+        }
+        BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH));
+
+        boot_shm_bank = find_shm_bank_by_id(bootinfo_get_shmem(), shm_id);
+        if ( !boot_shm_bank )
+        {
+            printk("%pd: static shared memory bank not found: '%s'", d, shm_id);
+            return -ENOENT;
+        }
+
+        pbase = boot_shm_bank->start;
+        psize = boot_shm_bank->size;
+
         /*
          * xen,shared-mem = ;
          * TODO: pbase is optional.
@@ -221,20 +227,7 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
         BUG_ON(!prop);
         cells = (const __be32 *)prop->value;
-        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_paddr(cells, size_cells);
-        if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
-        {
-            printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
-                   d, pbase, gbase);
-            return -EINVAL;
-        }
-        if ( !IS_ALIGNED(psize, PAGE_SIZE) )
-        {
-            printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n",
-                   d, psize);
-            return -EINVAL;
-        }
+        gbase = dt_read_paddr(cells + addr_cells, addr_cells);
 
         for ( i = 0; i < PFN_DOWN(psize); i++ )
             if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
@@ -251,13 +244,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
             owner_dom_io = false;
 
-        if ( dt_property_read_string(shm_node, "xen,shm-id", &shm_id) )
-        {
-            printk("%pd: invalid \"xen,shm-id\" property", d);
-            return -EINVAL;
-        }
-        BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH));
-
         /*
          * DOMID_IO is a fake domain and is not described in the Device-Tree.
         * Therefore when the owner of the shared region is DOMID_IO, we will
@@ -270,8 +256,8 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
             * We found the first borrower of the region, the owner was not
             * specified, so they should be assigned to dom_io.
             */
-            ret = assign_shared_memory(owner_dom_io ? dom_io : d,
-                                       pbase, psize, gbase);
+            ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase,
+                                       boot_shm_bank);
             if ( ret )
                 return ret;
         }
@@ -439,12 +425,32 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
     device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr);
     size = dt_next_cell(size_cells, &cell);
 
+    if ( !IS_ALIGNED(paddr, PAGE_SIZE) )
+    {
+        printk("fdt: physical address 0x%"PRIpaddr" is not suitably aligned.\n",
+               paddr);
+        return -EINVAL;
+    }
+
+    if ( !IS_ALIGNED(gaddr, PAGE_SIZE) )
+    {
+        printk("fdt: guest address 0x%"PRIpaddr" is not suitably aligned.\n",
+               gaddr);
+        return -EINVAL;
+    }
+
     if ( !size )
     {
         printk("fdt: the size for static shared memory region can not be zero\n");
         return -EINVAL;
     }
 
+    if ( !IS_ALIGNED(size, PAGE_SIZE) )
+    {
+        printk("fdt: size 0x%"PRIpaddr" is not suitably aligned\n", size);
+        return -EINVAL;
+    }
+
     end = paddr + size;
     if ( end <= paddr )
     {
-- 
2.34.1

From nobody Fri Nov 22 13:22:33 2024
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel,
 Volodymyr Babchuk
Subject: [PATCH v3 2/7] xen/arm: Wrap shared memory mapping code in one
 function
Date:
Wed, 22 May 2024 08:51:46 +0100 Message-Id: <20240522075151.3373899-3-luca.fancellu@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240522075151.3373899-1-luca.fancellu@arm.com> References: <20240522075151.3373899-1-luca.fancellu@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1716364346919100007 Content-Type: text/plain; charset="utf-8" Wrap the code and logic that is calling assign_shared_memory and map_regions_p2mt into a new function 'handle_shared_mem_bank', it will become useful later when the code will allow the user to don't pass the host physical address. Signed-off-by: Luca Fancellu Reviewed-by: Michal Orzel --- v3 changes: - check return value of dt_property_read_string, add R-by Michal v2 changes: - add blank line, move owner_dom_io computation inside handle_shared_mem_bank in order to reduce args count, remove not needed BUGON(). (Michal) --- xen/arch/arm/static-shmem.c | 86 +++++++++++++++++++++++-------------- 1 file changed, 53 insertions(+), 33 deletions(-) diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index 0a1c327e90ea..c15a65130659 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -180,6 +180,53 @@ append_shm_bank_to_domain(struct kernel_info *kinfo, p= addr_t start, return 0; } =20 +static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase, + const char *role_str, + const struct membank *shm_bank) +{ + bool owner_dom_io =3D true; + paddr_t pbase, psize; + int ret; + + pbase =3D shm_bank->start; + psize =3D shm_bank->size; + + /* + * "role" property is optional and if it is defined explicitly, + * then the owner domain is not the default "dom_io" domain. + */ + if ( role_str !=3D NULL ) + owner_dom_io =3D false; + + /* + * DOMID_IO is a fake domain and is not described in the Device-Tree. + * Therefore when the owner of the shared region is DOMID_IO, we will + * only find the borrowers. + */ + if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) || + (!owner_dom_io && strcmp(role_str, "owner") =3D=3D 0) ) + { + /* + * We found the first borrower of the region, the owner was not + * specified, so they should be assigned to dom_io. + */ + ret =3D assign_shared_memory(owner_dom_io ? dom_io : d, gbase, shm= _bank); + if ( ret ) + return ret; + } + + if ( owner_dom_io || (strcmp(role_str, "borrower") =3D=3D 0) ) + { + /* Set up P2M foreign mapping for borrower domain. */ + ret =3D map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize), + _mfn(PFN_UP(pbase)), p2m_map_foreign_rw); + if ( ret ) + return ret; + } + + return 0; +} + int __init process_shm(struct domain *d, struct kernel_info *kinfo, const struct dt_device_node *node) { @@ -196,7 +243,6 @@ int __init process_shm(struct domain *d, struct kernel_= info *kinfo, unsigned int i; const char *role_str; const char *shm_id; - bool owner_dom_io =3D true; =20 if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-= v1") ) continue; @@ -237,39 +283,13 @@ int __init process_shm(struct domain *d, struct kerne= l_info *kinfo, return -EINVAL; } =20 - /* - * "role" property is optional and if it is defined explicitly, - * then the owner domain is not the default "dom_io" domain. - */ - if ( dt_property_read_string(shm_node, "role", &role_str) =3D=3D 0= ) - owner_dom_io =3D false; + /* "role" property is optional */ + if ( dt_property_read_string(shm_node, "role", &role_str) !=3D 0 ) + role_str =3D NULL; =20 - /* - * DOMID_IO is a fake domain and is not described in the Device-Tr= ee. 
- * Therefore when the owner of the shared region is DOMID_IO, we w= ill - * only find the borrowers. - */ - if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) || - (!owner_dom_io && strcmp(role_str, "owner") =3D=3D 0) ) - { - /* - * We found the first borrower of the region, the owner was not - * specified, so they should be assigned to dom_io. - */ - ret =3D assign_shared_memory(owner_dom_io ? dom_io : d, gbase, - boot_shm_bank); - if ( ret ) - return ret; - } - - if ( owner_dom_io || (strcmp(role_str, "borrower") =3D=3D 0) ) - { - /* Set up P2M foreign mapping for borrower domain. */ - ret =3D map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psiz= e), - _mfn(PFN_UP(pbase)), p2m_map_foreign_rw= ); - if ( ret ) - return ret; - } + ret =3D handle_shared_mem_bank(d, gbase, role_str, boot_shm_bank); + if ( ret ) + return ret; =20 /* * Record static shared memory region info for later setting --=20 2.34.1 From nobody Fri Nov 22 13:22:33 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1716364340951613.8157829997684; Wed, 22 May 2024 00:52:20 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.727265.1131715 (Exim 4.92) (envelope-from ) id 1s9gla-0006Eh-KJ; Wed, 22 May 2024 07:52:06 +0000 Received: by outflank-mailman (output) from mailman id 727265.1131715; Wed, 22 May 2024 07:52:06 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s9gla-0006Cm-D3; Wed, 22 May 2024 07:52:06 +0000 Received: by outflank-mailman (input) for mailman id 727265; Wed, 22 May 2024 07:52:05 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s9glZ-0005YY-MQ for xen-devel@lists.xenproject.org; Wed, 22 May 2024 07:52:05 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 2dfeae50-1810-11ef-90a0-e314d9c70b13; Wed, 22 May 2024 09:52:04 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 35DC31474; Wed, 22 May 2024 00:52:28 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1BA2B3F766; Wed, 22 May 2024 00:52:03 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 2dfeae50-1810-11ef-90a0-e314d9c70b13 From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 3/7] xen/p2m: 
put reference for level 2 superpage Date: Wed, 22 May 2024 08:51:47 +0100 Message-Id: <20240522075151.3373899-4-luca.fancellu@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240522075151.3373899-1-luca.fancellu@arm.com> References: <20240522075151.3373899-1-luca.fancellu@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1716364343193100001 Content-Type: text/plain; charset="utf-8" From: Penny Zheng We are doing foreign memory mapping for static shared memory, and there is a great possibility that it could be super mapped. But today, p2m_put_l3_page could not handle superpages. This commits implements a new function p2m_put_l2_superpage to handle 2MB superpages, specifically for helping put extra references for foreign superpages. Modify relinquish_p2m_mapping as well to take into account preemption when type is foreign memory and order is above 9 (2MB). Currently 1GB superpages are not handled because Xen is not preemptible and therefore some work is needed to handle such superpages, for which at some point Xen might end up freeing memory and therefore for such a big mapping it could end up in a very long operation. Signed-off-by: Penny Zheng Signed-off-by: Luca Fancellu Reviewed-by: Michal Orzel --- v3: - Add reasoning why we don't support now 1GB superpage, remove level_order variable from p2m_put_l2_superpage, update TODO comment inside p2m_free_entry, use XEN_PT_LEVEL_ORDER(2) instead of value 9 inside relinquish_p2m_mapping. (Michal) v2: - Do not handle 1GB super page as there might be some issue where a lot of calls to put_page(...) might be issued which could lead to free memory that is a long operation. v1: - patch from https://patchwork.kernel.org/project/xen-devel/patch/20231206= 090623.1932275-9-Penny.Zheng@arm.com/ --- xen/arch/arm/mmu/p2m.c | 63 ++++++++++++++++++++++++++++++------------ 1 file changed, 46 insertions(+), 17 deletions(-) diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c index 41fcca011cf4..b496266deef6 100644 --- a/xen/arch/arm/mmu/p2m.c +++ b/xen/arch/arm/mmu/p2m.c @@ -753,17 +753,9 @@ static int p2m_mem_access_radix_set(struct p2m_domain = *p2m, gfn_t gfn, return rc; } =20 -/* - * Put any references on the single 4K page referenced by pte. - * TODO: Handle superpages, for now we only take special references for le= af - * pages (specifically foreign ones, which can't be super mapped today). - */ -static void p2m_put_l3_page(const lpae_t pte) +/* Put any references on the single 4K page referenced by mfn. */ +static void p2m_put_l3_page(mfn_t mfn, p2m_type_t type) { - mfn_t mfn =3D lpae_get_mfn(pte); - - ASSERT(p2m_is_valid(pte)); - /* * TODO: Handle other p2m types * @@ -771,16 +763,43 @@ static void p2m_put_l3_page(const lpae_t pte) * flush the TLBs if the page is reallocated before the end of * this loop. */ - if ( p2m_is_foreign(pte.p2m.type) ) + if ( p2m_is_foreign(type) ) { ASSERT(mfn_valid(mfn)); put_page(mfn_to_page(mfn)); } /* Detect the xenheap page and mark the stored GFN as invalid. */ - else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) ) + else if ( p2m_is_ram(type) && is_xen_heap_mfn(mfn) ) page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN); } =20 +/* Put any references on the superpage referenced by mfn. */ +static void p2m_put_l2_superpage(mfn_t mfn, p2m_type_t type) +{ + unsigned int i; + + for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) + { + p2m_put_l3_page(mfn, type); + + mfn =3D mfn_add(mfn, 1); + } +} + +/* Put any references on the page referenced by pte. 
*/ +static void p2m_put_page(const lpae_t pte, unsigned int level) +{ + mfn_t mfn =3D lpae_get_mfn(pte); + + ASSERT(p2m_is_valid(pte)); + + /* We have a second level 2M superpage */ + if ( p2m_is_superpage(pte, level) && (level =3D=3D 2) ) + return p2m_put_l2_superpage(mfn, pte.p2m.type); + else if ( level =3D=3D 3 ) + return p2m_put_l3_page(mfn, pte.p2m.type); +} + /* Free lpae sub-tree behind an entry */ static void p2m_free_entry(struct p2m_domain *p2m, lpae_t entry, unsigned int level) @@ -809,9 +828,16 @@ static void p2m_free_entry(struct p2m_domain *p2m, #endif =20 p2m->stats.mappings[level]--; - /* Nothing to do if the entry is a super-page. */ - if ( level =3D=3D 3 ) - p2m_put_l3_page(entry); + /* + * TODO: Currently we don't handle 1GB super-page, Xen is not + * preemptible and therefore some work is needed to handle such + * superpages, for which at some point Xen might end up freeing me= mory + * and therefore for such a big mapping it could end up in a very = long + * operation. + */ + if ( level >=3D 2 ) + p2m_put_page(entry, level); + return; } =20 @@ -1558,9 +1584,12 @@ int relinquish_p2m_mapping(struct domain *d) =20 count++; /* - * Arbitrarily preempt every 512 iterations. + * Arbitrarily preempt every 512 iterations or when type is foreign + * mapping and the order is above 9 (2MB). */ - if ( !(count % 512) && hypercall_preempt_check() ) + if ( (!(count % 512) || + (p2m_is_foreign(t) && (order > XEN_PT_LEVEL_ORDER(2)))) && + hypercall_preempt_check() ) { rc =3D -ERESTART; break; --=20 2.34.1 From nobody Fri Nov 22 13:22:33 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1716364350412619.6483544740015; Wed, 22 May 2024 00:52:30 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.727267.1131730 (Exim 4.92) (envelope-from ) id 1s9gld-0006g4-B8; Wed, 22 May 2024 07:52:09 +0000 Received: by outflank-mailman (output) from mailman id 727267.1131730; Wed, 22 May 2024 07:52:09 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s9gld-0006fS-6g; Wed, 22 May 2024 07:52:09 +0000 Received: by outflank-mailman (input) for mailman id 727267; Wed, 22 May 2024 07:52:08 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s9glc-0006Z1-9a for xen-devel@lists.xenproject.org; Wed, 22 May 2024 07:52:08 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 2eade77b-1810-11ef-b4bb-af5377834399; Wed, 22 May 2024 09:52:05 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 637E11480; Wed, 22 May 2024 00:52:29 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 
635903F766; Wed, 22 May 2024 00:52:04 -0700 (PDT)
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel,
 Volodymyr Babchuk
Subject: [PATCH v3 4/7] xen/arm: Parse xen,shared-mem when host phys
 address is not provided
Date: Wed, 22 May 2024 08:51:48 +0100
Message-Id: <20240522075151.3373899-5-luca.fancellu@arm.com>
In-Reply-To: <20240522075151.3373899-1-luca.fancellu@arm.com>
References: <20240522075151.3373899-1-luca.fancellu@arm.com>

Handle the parsing of the 'xen,shared-mem' property when the host physical
address is not provided: this commit introduces the logic to parse it, but
the functionality is still not implemented and will be part of future
commits.

Rework the logic inside process_shm_node to check the shm_id before doing
the other checks, because it eases the logic itself, and add more comments
on the logic.

Now, when the host physical address is not provided, the value
INVALID_PADDR is chosen to signal this condition and it is stored as the
start of the bank; because of that change, early_print_info_shmem and
init_sharedmem_pages are also changed so that they do not handle banks
whose start is equal to INVALID_PADDR.

Another change is done inside meminfo_overlap_check, to skip banks whose
start address is INVALID_PADDR: that function is used to check banks from
reserved memory, shared memory and ACPI, and since the comment above the
function states that wrapping around is not handled, it's unlikely for
these banks to have INVALID_PADDR as start address.

The same change is done inside the consider_modules, find_unallocated_memory
and dt_unreserved_regions functions, in order to skip banks that start with
INVALID_PADDR from any computation. The changes above hold because of this
consideration.

Signed-off-by: Luca Fancellu
Reviewed-by: Michal Orzel
---
v3 changes:
 - fix typo in commit msg, add R-by Michal
v2 changes:
 - fix comments, add parenthesis to some conditions, remove unneeded
   variables, remove else branch, increment counter in the for loop, skip
   INVALID_PADDR start banks from also consider_modules,
   find_unallocated_memory and dt_unreserved_regions.
(Michal) --- xen/arch/arm/arm32/mmu/mm.c | 11 +++- xen/arch/arm/domain_build.c | 5 ++ xen/arch/arm/setup.c | 14 +++- xen/arch/arm/static-shmem.c | 125 +++++++++++++++++++++++++----------- 4 files changed, 111 insertions(+), 44 deletions(-) diff --git a/xen/arch/arm/arm32/mmu/mm.c b/xen/arch/arm/arm32/mmu/mm.c index be480c31ea05..30a7aa1e8e51 100644 --- a/xen/arch/arm/arm32/mmu/mm.c +++ b/xen/arch/arm/arm32/mmu/mm.c @@ -101,8 +101,15 @@ static paddr_t __init consider_modules(paddr_t s, padd= r_t e, nr +=3D reserved_mem->nr_banks; for ( ; i - nr < shmem->nr_banks; i++ ) { - paddr_t r_s =3D shmem->bank[i - nr].start; - paddr_t r_e =3D r_s + shmem->bank[i - nr].size; + paddr_t r_s, r_e; + + r_s =3D shmem->bank[i - nr].start; + + /* Shared memory banks can contain INVALID_PADDR as start */ + if ( INVALID_PADDR =3D=3D r_s ) + continue; + + r_e =3D r_s + shmem->bank[i - nr].size; =20 if ( s < r_e && r_s < e ) { diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 968c497efc78..02e741685102 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -927,6 +927,11 @@ static int __init find_unallocated_memory(const struct= kernel_info *kinfo, for ( j =3D 0; j < mem_banks[i]->nr_banks; j++ ) { start =3D mem_banks[i]->bank[j].start; + + /* Shared memory banks can contain INVALID_PADDR as start */ + if ( INVALID_PADDR =3D=3D start ) + continue; + end =3D mem_banks[i]->bank[j].start + mem_banks[i]->bank[j].si= ze; res =3D rangeset_remove_range(unalloc_mem, PFN_DOWN(start), PFN_DOWN(end - 1)); diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index c4e5c19b11d6..0c2fdaceaf21 100644 --- a/xen/arch/arm/setup.c +++ b/xen/arch/arm/setup.c @@ -240,8 +240,15 @@ static void __init dt_unreserved_regions(paddr_t s, pa= ddr_t e, offset =3D reserved_mem->nr_banks; for ( ; i - offset < shmem->nr_banks; i++ ) { - paddr_t r_s =3D shmem->bank[i - offset].start; - paddr_t r_e =3D r_s + shmem->bank[i - offset].size; + paddr_t r_s, r_e; + + r_s =3D shmem->bank[i - offset].start; + + /* Shared memory banks can contain INVALID_PADDR as start */ + if ( INVALID_PADDR =3D=3D r_s ) + continue; + + r_e =3D r_s + shmem->bank[i - offset].size; =20 if ( s < r_e && r_s < e ) { @@ -272,7 +279,8 @@ static bool __init meminfo_overlap_check(const struct m= embanks *mem, bank_start =3D mem->bank[i].start; bank_end =3D bank_start + mem->bank[i].size; =20 - if ( region_end <=3D bank_start || region_start >=3D bank_end ) + if ( INVALID_PADDR =3D=3D bank_start || region_end <=3D bank_start= || + region_start >=3D bank_end ) continue; else { diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index c15a65130659..74c81904b8a4 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -264,6 +264,12 @@ int __init process_shm(struct domain *d, struct kernel= _info *kinfo, pbase =3D boot_shm_bank->start; psize =3D boot_shm_bank->size; =20 + if ( INVALID_PADDR =3D=3D pbase ) + { + printk("%pd: host physical address must be chosen by users at = the moment", d); + return -EINVAL; + } + /* * xen,shared-mem =3D ; * TODO: pbase is optional. 
@@ -377,7 +383,8 @@ int __init process_shm_node(const void *fdt, int node, = uint32_t address_cells, { const struct fdt_property *prop, *prop_id, *prop_role; const __be32 *cell; - paddr_t paddr, gaddr, size, end; + paddr_t paddr =3D INVALID_PADDR; + paddr_t gaddr, size, end; struct membanks *mem =3D bootinfo_get_shmem(); struct shmem_membank_extra *shmem_extra =3D bootinfo_get_shmem_extra(); unsigned int i; @@ -432,24 +439,37 @@ int __init process_shm_node(const void *fdt, int node= , uint32_t address_cells, if ( !prop ) return -ENOENT; =20 + cell =3D (const __be32 *)prop->data; if ( len !=3D dt_cells_to_size(address_cells + size_cells + address_ce= lls) ) { - if ( len =3D=3D dt_cells_to_size(size_cells + address_cells) ) - printk("fdt: host physical address must be chosen by users at = the moment.\n"); - - printk("fdt: invalid `xen,shared-mem` property.\n"); - return -EINVAL; + if ( len =3D=3D dt_cells_to_size(address_cells + size_cells) ) + device_tree_get_reg(&cell, address_cells, size_cells, &gaddr, + &size); + else + { + printk("fdt: invalid `xen,shared-mem` property.\n"); + return -EINVAL; + } } + else + { + device_tree_get_reg(&cell, address_cells, address_cells, &paddr, + &gaddr); + size =3D dt_next_cell(size_cells, &cell); =20 - cell =3D (const __be32 *)prop->data; - device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gadd= r); - size =3D dt_next_cell(size_cells, &cell); + if ( !IS_ALIGNED(paddr, PAGE_SIZE) ) + { + printk("fdt: physical address 0x%"PRIpaddr" is not suitably al= igned.\n", + paddr); + return -EINVAL; + } =20 - if ( !IS_ALIGNED(paddr, PAGE_SIZE) ) - { - printk("fdt: physical address 0x%"PRIpaddr" is not suitably aligne= d.\n", - paddr); - return -EINVAL; + end =3D paddr + size; + if ( end <=3D paddr ) + { + printk("fdt: static shared memory region %s overflow\n", shm_i= d); + return -EINVAL; + } } =20 if ( !IS_ALIGNED(gaddr, PAGE_SIZE) ) @@ -471,39 +491,64 @@ int __init process_shm_node(const void *fdt, int node= , uint32_t address_cells, return -EINVAL; } =20 - end =3D paddr + size; - if ( end <=3D paddr ) - { - printk("fdt: static shared memory region %s overflow\n", shm_id); - return -EINVAL; - } - for ( i =3D 0; i < mem->nr_banks; i++ ) { /* * Meet the following check: - * 1) The shm ID matches and the region exactly match - * 2) The shm ID doesn't match and the region doesn't overlap - * with an existing one + * - when host address is provided: + * 1) The shm ID matches and the region exactly match + * 2) The shm ID doesn't match and the region doesn't overlap + * with an existing one + * - when host address is not provided: + * 1) The shm ID matches and the region size exactly match */ - if ( paddr =3D=3D mem->bank[i].start && size =3D=3D mem->bank[i].s= ize ) + bool paddr_assigned =3D (INVALID_PADDR =3D=3D paddr); + + if ( strncmp(shm_id, shmem_extra[i].shm_id, MAX_SHM_ID_LENGTH) =3D= =3D 0 ) { - if ( strncmp(shm_id, shmem_extra[i].shm_id, - MAX_SHM_ID_LENGTH) =3D=3D 0 ) + /* + * Regions have same shm_id (cases): + * 1) physical host address is supplied: + * - OK: paddr is equal and size is equal (same region) + * - Fail: paddr doesn't match or size doesn't match (there + * cannot exists two shmem regions with same shm_id) + * 2) physical host address is NOT supplied: + * - OK: size is equal (same region) + * - Fail: size is not equal (same shm_id must identify onl= y one + * region, there can't be two different regions wit= h same + * shm_id) + */ + bool start_match =3D paddr_assigned ? 
(paddr =3D=3D mem->bank[= i].start) : + true; + + if ( start_match && (size =3D=3D mem->bank[i].size) ) break; else { - printk("fdt: xen,shm-id %s does not match for all the node= s using the same region.\n", + printk("fdt: different shared memory region could not shar= e the same shm ID %s\n", shm_id); return -EINVAL; } } - else if ( strncmp(shm_id, shmem_extra[i].shm_id, - MAX_SHM_ID_LENGTH) !=3D 0 ) + + /* + * Regions have different shm_id (cases): + * 1) physical host address is supplied: + * - OK: paddr different, or size different (case where paddr + * is equal but psize is different are wrong, but they + * are handled later when checking for overlapping) + * - Fail: paddr equal and size equal (the same region can't be + * identified with different shm_id) + * 2) physical host address is NOT supplied: + * - OK: Both have different shm_id so even with same size th= ey + * can exists + */ + if ( !paddr_assigned || (paddr !=3D mem->bank[i].start) || + (size !=3D mem->bank[i].size) ) continue; else { - printk("fdt: different shared memory region could not share th= e same shm ID %s\n", + printk("fdt: xen,shm-id %s does not match for all the nodes us= ing the same region\n", shm_id); return -EINVAL; } @@ -513,7 +558,8 @@ int __init process_shm_node(const void *fdt, int node, = uint32_t address_cells, { if (i < mem->max_banks) { - if ( check_reserved_regions_overlap(paddr, size) ) + if ( (paddr !=3D INVALID_PADDR) && + check_reserved_regions_overlap(paddr, size) ) return -EINVAL; =20 /* Static shared memory shall be reserved from any other use. = */ @@ -583,13 +629,13 @@ void __init early_print_info_shmem(void) { const struct membanks *shmem =3D bootinfo_get_shmem(); unsigned int bank; + unsigned int printed =3D 0; =20 - for ( bank =3D 0; bank < shmem->nr_banks; bank++ ) - { - printk(" SHMEM[%u]: %"PRIpaddr" - %"PRIpaddr"\n", bank, - shmem->bank[bank].start, - shmem->bank[bank].start + shmem->bank[bank].size - 1); - } + for ( bank =3D 0; bank < shmem->nr_banks; bank++, printed++ ) + if ( shmem->bank[bank].start !=3D INVALID_PADDR ) + printk(" SHMEM[%u]: %"PRIpaddr" - %"PRIpaddr"\n", printed, + shmem->bank[bank].start, + shmem->bank[bank].start + shmem->bank[bank].size - 1); } =20 void __init init_sharedmem_pages(void) @@ -598,7 +644,8 @@ void __init init_sharedmem_pages(void) unsigned int bank; =20 for ( bank =3D 0 ; bank < shmem->nr_banks; bank++ ) - init_staticmem_bank(&shmem->bank[bank]); + if ( shmem->bank[bank].start !=3D INVALID_PADDR ) + init_staticmem_bank(&shmem->bank[bank]); } =20 int __init remove_shm_from_rangeset(const struct kernel_info *kinfo, --=20 2.34.1 From nobody Fri Nov 22 13:22:33 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1716364346428622.1753799555173; Wed, 22 May 2024 00:52:26 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.727266.1131724 (Exim 4.92) (envelope-from ) id 1s9glc-0006aT-Qa; Wed, 22 May 2024 07:52:08 +0000 Received: by outflank-mailman (output) 
from mailman id 727266.1131724; Wed, 22 May 2024 07:52:08 +0000
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel,
 Volodymyr Babchuk
Subject: [PATCH v3 5/7] xen/arm: Rework heap page allocation outside
 allocate_bank_memory
Date: Wed, 22 May 2024 08:51:49 +0100
Message-Id: <20240522075151.3373899-6-luca.fancellu@arm.com>
In-Reply-To: <20240522075151.3373899-1-luca.fancellu@arm.com>
References: <20240522075151.3373899-1-luca.fancellu@arm.com>

The function allocate_bank_memory allocates pages from the heap and maps
them to the guest using guest_physmap_add_page.

As preparatory work to support static shared memory banks when the host
physical address is not provided, Xen needs to allocate memory from the
heap, so rework allocate_bank_memory, moving the page allocation out into
a new function called allocate_domheap_memory.

The function allocate_domheap_memory takes a callback function and a
pointer to some extra information passed to the callback, and this
callback is invoked for every allocated region until the requested size
is reached.

In order to keep the allocate_bank_memory functionality, the callback
passed to allocate_domheap_memory is a wrapper for guest_physmap_add_page.

Let allocate_domheap_memory be externally visible, in order to use it in
the future from the static shared memory module.

Take the opportunity to change the signature of allocate_bank_memory and
remove the 'struct domain' parameter, which can be retrieved from
'struct kernel_info'.

No functional change is intended.

Signed-off-by: Luca Fancellu
Reviewed-by: Michal Orzel
---
v3 changes:
 - Add R-by Michal
v2:
 - Reduced scope of pg var in allocate_domheap_memory, removed not
   necessary BUG_ON(), changed callback to return bool and fix comment.
(Michal) --- xen/arch/arm/dom0less-build.c | 4 +- xen/arch/arm/domain_build.c | 79 +++++++++++++++++-------- xen/arch/arm/include/asm/domain_build.h | 9 ++- 3 files changed, 62 insertions(+), 30 deletions(-) diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c index 74f053c242f4..20ddf6f8f250 100644 --- a/xen/arch/arm/dom0less-build.c +++ b/xen/arch/arm/dom0less-build.c @@ -60,12 +60,12 @@ static void __init allocate_memory(struct domain *d, st= ruct kernel_info *kinfo) =20 mem->nr_banks =3D 0; bank_size =3D MIN(GUEST_RAM0_SIZE, kinfo->unassigned_mem); - if ( !allocate_bank_memory(d, kinfo, gaddr_to_gfn(GUEST_RAM0_BASE), + if ( !allocate_bank_memory(kinfo, gaddr_to_gfn(GUEST_RAM0_BASE), bank_size) ) goto fail; =20 bank_size =3D MIN(GUEST_RAM1_SIZE, kinfo->unassigned_mem); - if ( !allocate_bank_memory(d, kinfo, gaddr_to_gfn(GUEST_RAM1_BASE), + if ( !allocate_bank_memory(kinfo, gaddr_to_gfn(GUEST_RAM1_BASE), bank_size) ) goto fail; =20 diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 02e741685102..669970c86fd5 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -417,30 +417,15 @@ static void __init allocate_memory_11(struct domain *= d, } =20 #ifdef CONFIG_DOM0LESS_BOOT -bool __init allocate_bank_memory(struct domain *d, struct kernel_info *kin= fo, - gfn_t sgfn, paddr_t tot_size) +bool __init allocate_domheap_memory(struct domain *d, paddr_t tot_size, + alloc_domheap_mem_cb cb, void *extra) { - struct membanks *mem =3D kernel_info_get_mem(kinfo); - int res; - struct page_info *pg; - struct membank *bank; - unsigned int max_order =3D ~0; - - /* - * allocate_bank_memory can be called with a tot_size of zero for - * the second memory bank. It is not an error and we can safely - * avoid creating a zero-size memory bank. - */ - if ( tot_size =3D=3D 0 ) - return true; - - bank =3D &mem->bank[mem->nr_banks]; - bank->start =3D gfn_to_gaddr(sgfn); - bank->size =3D tot_size; + unsigned int max_order =3D UINT_MAX; =20 while ( tot_size > 0 ) { unsigned int order =3D get_allocation_size(tot_size); + struct page_info *pg; =20 order =3D min(max_order, order); =20 @@ -463,17 +448,61 @@ bool __init allocate_bank_memory(struct domain *d, st= ruct kernel_info *kinfo, continue; } =20 - res =3D guest_physmap_add_page(d, sgfn, page_to_mfn(pg), order); - if ( res ) - { - dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res); + if ( !cb(d, pg, order, extra) ) return false; - } =20 - sgfn =3D gfn_add(sgfn, 1UL << order); tot_size -=3D (1ULL << (PAGE_SHIFT + order)); } =20 + return true; +} + +static bool __init guest_map_pages(struct domain *d, struct page_info *pg, + unsigned int order, void *extra) +{ + gfn_t *sgfn =3D (gfn_t *)extra; + int res; + + BUG_ON(!sgfn); + res =3D guest_physmap_add_page(d, *sgfn, page_to_mfn(pg), order); + if ( res ) + { + dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res); + return false; + } + + *sgfn =3D gfn_add(*sgfn, 1UL << order); + + return true; +} + +bool __init allocate_bank_memory(struct kernel_info *kinfo, gfn_t sgfn, + paddr_t tot_size) +{ + struct membanks *mem =3D kernel_info_get_mem(kinfo); + struct domain *d =3D kinfo->d; + struct membank *bank; + + /* + * allocate_bank_memory can be called with a tot_size of zero for + * the second memory bank. It is not an error and we can safely + * avoid creating a zero-size memory bank. 
+ */ + if ( tot_size =3D=3D 0 ) + return true; + + bank =3D &mem->bank[mem->nr_banks]; + bank->start =3D gfn_to_gaddr(sgfn); + bank->size =3D tot_size; + + /* + * Allocate pages from the heap until tot_size is zero and map them to= the + * guest using guest_map_pages, passing the starting gfn as extra para= meter + * for the map operation. + */ + if ( !allocate_domheap_memory(d, tot_size, guest_map_pages, &sgfn) ) + return false; + mem->nr_banks++; kinfo->unassigned_mem -=3D bank->size; =20 diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include= /asm/domain_build.h index 45936212ca21..e712afbc7fbf 100644 --- a/xen/arch/arm/include/asm/domain_build.h +++ b/xen/arch/arm/include/asm/domain_build.h @@ -5,9 +5,12 @@ #include =20 typedef __be32 gic_interrupt_t[3]; - -bool allocate_bank_memory(struct domain *d, struct kernel_info *kinfo, - gfn_t sgfn, paddr_t tot_size); +typedef bool (*alloc_domheap_mem_cb)(struct domain *d, struct page_info *p= g, + unsigned int order, void *extra); +bool allocate_domheap_memory(struct domain *d, paddr_t tot_size, + alloc_domheap_mem_cb cb, void *extra); +bool allocate_bank_memory(struct kernel_info *kinfo, gfn_t sgfn, + paddr_t tot_size); int construct_domain(struct domain *d, struct kernel_info *kinfo); int domain_fdt_begin_node(void *fdt, const char *name, uint64_t unit); int make_chosen_node(const struct kernel_info *kinfo); --=20 2.34.1 From nobody Fri Nov 22 13:22:33 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1716364348596519.1530558029608; Wed, 22 May 2024 00:52:28 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.727268.1131736 (Exim 4.92) (envelope-from ) id 1s9gld-0006mb-RT; Wed, 22 May 2024 07:52:09 +0000 Received: by outflank-mailman (output) from mailman id 727268.1131736; Wed, 22 May 2024 07:52:09 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s9gld-0006lk-JE; Wed, 22 May 2024 07:52:09 +0000 Received: by outflank-mailman (input) for mailman id 727268; Wed, 22 May 2024 07:52:08 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s9glc-0005YY-No for xen-devel@lists.xenproject.org; Wed, 22 May 2024 07:52:08 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 2fe6d436-1810-11ef-90a0-e314d9c70b13; Wed, 22 May 2024 09:52:07 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8A861150C; Wed, 22 May 2024 00:52:31 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A431B3F766; Wed, 22 May 2024 00:52:06 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming 
version
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel,
 Volodymyr Babchuk
Subject: [PATCH v3 6/7] xen/arm: Implement the logic for static shared
 memory from Xen heap
Date: Wed, 22 May 2024 08:51:50 +0100
Message-Id: <20240522075151.3373899-7-luca.fancellu@arm.com>
In-Reply-To: <20240522075151.3373899-1-luca.fancellu@arm.com>
References: <20240522075151.3373899-1-luca.fancellu@arm.com>

This commit implements the logic to have the static shared memory banks
allocated from the Xen heap instead of having the host physical address
passed by the user.

When the host physical address is not supplied, the physical memory is
taken from the Xen heap using allocate_domheap_memory; the allocation
needs to occur at the first handled DT node and the allocated banks need
to be saved somewhere.

Introduce 'shm_heap_banks' for that reason: a structure that holds the
banks allocated from the heap. Its field bank[].shmem_extra points to the
bootinfo shared memory banks' .shmem_extra space, so that no further
memory is allocated and every bank in shm_heap_banks can be safely
identified by its shm_id, to reconstruct its traceability and whether it
was allocated or not.

A search into 'shm_heap_banks' reveals whether the banks were already
allocated, in case the host address is not passed, and the callback given
to allocate_domheap_memory stores the banks in the structure and maps them
to the current domain. To do that, some changes to
acquire_shared_memory_bank are made to let it differentiate whether the
bank is from the heap; if it is, assign_pages is called for every bank.

When the bank is already allocated, handle_shared_mem_bank is called for
every bank allocated with the corresponding shm_id and the mappings are
done.

Signed-off-by: Luca Fancellu
Reviewed-by: Michal Orzel
---
v3 changes:
 - reworded commit msg section, swap role_str and gbase in
   alloc_heap_pages_cb_extra to avoid padding hole in arm32, remove not
   needed printk, modify printk to print KB instead of MB, swap strncmp
   for strcmp, reduced memory footprint for shm_heap_banks. (Michal)
v2 changes:
 - add static inline get_shmem_heap_banks(), given the changes to the
   struct membanks interface. Rebase changes due to removal of
   owner_dom_io arg from handle_shared_mem_bank. Change
   save_map_heap_pages return type given the changes to the
   allocate_domheap_memory callback type.
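To illustrate the case enabled by this patch, a bank whose backing memory
is left for Xen to allocate from its heap omits the host physical address
in "xen,shared-mem". A purely illustrative sketch (made-up guest address,
size and ID; cell counts depend on #address-cells/#size-cells):

    domU1-shared-mem {
        compatible = "xen,domain-shared-memory-v1";
        role = "borrower";         /* optional */
        xen,shm-id = "my-shmem1";
        /* only guest physical address and size: the host address is
           chosen by Xen and the pages come from the Xen heap */
        xen,shared-mem = <0x20000000 0x10000>;
    };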
--- xen/arch/arm/static-shmem.c | 187 ++++++++++++++++++++++++++++++------ 1 file changed, 155 insertions(+), 32 deletions(-) diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index 74c81904b8a4..53e8d3ecf030 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -9,6 +9,25 @@ #include #include =20 +typedef struct { + struct domain *d; + const char *role_str; + paddr_t gbase; + struct shmem_membank_extra *bank_extra_info; +} alloc_heap_pages_cb_extra; + +static struct { + struct membanks_hdr common; + struct membank bank[NR_SHMEM_BANKS]; +} shm_heap_banks __initdata =3D { + .common.max_banks =3D NR_SHMEM_BANKS +}; + +static inline struct membanks *get_shmem_heap_banks(void) +{ + return container_of(&shm_heap_banks.common, struct membanks, common); +} + static void __init __maybe_unused build_assertions(void) { /* @@ -63,7 +82,8 @@ static bool __init is_shm_allocated_to_domio(paddr_t pbas= e) } =20 static mfn_t __init acquire_shared_memory_bank(struct domain *d, - paddr_t pbase, paddr_t psiz= e) + paddr_t pbase, paddr_t psiz= e, + bool bank_from_heap) { mfn_t smfn; unsigned long nr_pfns; @@ -83,19 +103,31 @@ static mfn_t __init acquire_shared_memory_bank(struct = domain *d, d->max_pages +=3D nr_pfns; =20 smfn =3D maddr_to_mfn(pbase); - res =3D acquire_domstatic_pages(d, smfn, nr_pfns, 0); + if ( bank_from_heap ) + /* + * When host address is not provided, static shared memory is + * allocated from heap and shall be assigned to owner domain. + */ + res =3D assign_pages(maddr_to_page(pbase), nr_pfns, d, 0); + else + res =3D acquire_domstatic_pages(d, smfn, nr_pfns, 0); + if ( res ) { - printk(XENLOG_ERR - "%pd: failed to acquire static memory: %d.\n", d, res); - d->max_pages -=3D nr_pfns; - return INVALID_MFN; + printk(XENLOG_ERR "%pd: failed to %s static memory: %d.\n", d, + bank_from_heap ? "assign" : "acquire", res); + goto fail; } =20 return smfn; + + fail: + d->max_pages -=3D nr_pfns; + return INVALID_MFN; } =20 static int __init assign_shared_memory(struct domain *d, paddr_t gbase, + bool bank_from_heap, const struct membank *shm_bank) { mfn_t smfn; @@ -108,10 +140,7 @@ static int __init assign_shared_memory(struct domain *= d, paddr_t gbase, psize =3D shm_bank->size; nr_borrowers =3D shm_bank->shmem_extra->nr_shm_borrowers; =20 - printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpad= dr".\n", - d, pbase, pbase + psize); - - smfn =3D acquire_shared_memory_bank(d, pbase, psize); + smfn =3D acquire_shared_memory_bank(d, pbase, psize, bank_from_heap); if ( mfn_eq(smfn, INVALID_MFN) ) return -EINVAL; =20 @@ -182,6 +211,7 @@ append_shm_bank_to_domain(struct kernel_info *kinfo, pa= ddr_t start, =20 static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase, const char *role_str, + bool bank_from_heap, const struct membank *shm_bank) { bool owner_dom_io =3D true; @@ -210,7 +240,8 @@ static int __init handle_shared_mem_bank(struct domain = *d, paddr_t gbase, * We found the first borrower of the region, the owner was not * specified, so they should be assigned to dom_io. */ - ret =3D assign_shared_memory(owner_dom_io ? dom_io : d, gbase, shm= _bank); + ret =3D assign_shared_memory(owner_dom_io ? 
+                                   bank_from_heap, shm_bank);
         if ( ret )
             return ret;
     }
@@ -227,6 +258,39 @@ static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase,
     return 0;
 }

+static bool __init save_map_heap_pages(struct domain *d, struct page_info *pg,
+                                       unsigned int order, void *extra)
+{
+    alloc_heap_pages_cb_extra *b_extra = (alloc_heap_pages_cb_extra *)extra;
+    int idx = shm_heap_banks.common.nr_banks;
+    int ret = -ENOSPC;
+
+    BUG_ON(!b_extra);
+
+    if ( idx < shm_heap_banks.common.max_banks )
+    {
+        shm_heap_banks.bank[idx].start = page_to_maddr(pg);
+        shm_heap_banks.bank[idx].size = (1ULL << (PAGE_SHIFT + order));
+        shm_heap_banks.bank[idx].shmem_extra = b_extra->bank_extra_info;
+        shm_heap_banks.common.nr_banks++;
+
+        ret = handle_shared_mem_bank(b_extra->d, b_extra->gbase,
+                                     b_extra->role_str, true,
+                                     &shm_heap_banks.bank[idx]);
+        if ( !ret )
+        {
+            /* Increment guest physical address for next mapping */
+            b_extra->gbase += shm_heap_banks.bank[idx].size;
+            return true;
+        }
+    }
+
+    printk("Failed to allocate static shared memory from Xen heap: (%d)\n",
+           ret);
+
+    return false;
+}
+
 int __init process_shm(struct domain *d, struct kernel_info *kinfo,
                        const struct dt_device_node *node)
 {
@@ -264,38 +328,97 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         pbase = boot_shm_bank->start;
         psize = boot_shm_bank->size;

-        if ( INVALID_PADDR == pbase )
-        {
-            printk("%pd: host physical address must be chosen by users at the moment", d);
-            return -EINVAL;
-        }
+        /* "role" property is optional */
+        if ( dt_property_read_string(shm_node, "role", &role_str) != 0 )
+            role_str = NULL;

         /*
-         * xen,shared-mem = ;
-         * TODO: pbase is optional.
+         * xen,shared-mem = <[pbase,] gbase, size>;
+         * pbase is optional.
          */
         addr_cells = dt_n_addr_cells(shm_node);
         size_cells = dt_n_size_cells(shm_node);
         prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
         BUG_ON(!prop);
         cells = (const __be32 *)prop->value;
-        gbase = dt_read_paddr(cells + addr_cells, addr_cells);

-        for ( i = 0; i < PFN_DOWN(psize); i++ )
-            if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
-            {
-                printk("%pd: invalid physical address 0x%"PRI_mfn"\n",
-                       d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
-                return -EINVAL;
-            }
+        if ( pbase != INVALID_PADDR )
+        {
+            /* guest phys address is after host phys address */
+            gbase = dt_read_paddr(cells + addr_cells, addr_cells);
+
+            for ( i = 0; i < PFN_DOWN(psize); i++ )
+                if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
+                {
+                    printk("%pd: invalid physical address 0x%"PRI_mfn"\n",
+                           d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
+                    return -EINVAL;
+                }
+
+            /* The host physical address is supplied by the user */
+            ret = handle_shared_mem_bank(d, gbase, role_str, false,
+                                         boot_shm_bank);
+            if ( ret )
+                return ret;
+        }
+        else
+        {
+            /*
+             * The host physical address is not supplied by the user, so it
+             * means that the banks needs to be allocated from the Xen heap,
+             * look into the already allocated banks from the heap.
+             */
+            const struct membank *alloc_bank =
+                find_shm_bank_by_id(get_shmem_heap_banks(), shm_id);

-        /* "role" property is optional */
-        if ( dt_property_read_string(shm_node, "role", &role_str) != 0 )
-            role_str = NULL;
+            /* guest phys address is right at the beginning */
+            gbase = dt_read_paddr(cells, addr_cells);

-        ret = handle_shared_mem_bank(d, gbase, role_str, boot_shm_bank);
-        if ( ret )
-            return ret;
+            if ( !alloc_bank )
+            {
+                alloc_heap_pages_cb_extra cb_arg = { d, role_str, gbase,
+                    boot_shm_bank->shmem_extra };
+
+                /* shm_id identified bank is not yet allocated */
+                if ( !allocate_domheap_memory(NULL, psize, save_map_heap_pages,
+                                              &cb_arg) )
+                {
+                    printk(XENLOG_ERR
+                           "Failed to allocate (%"PRIpaddr"KB) pages as static shared memory from heap\n",
+                           psize >> 10);
+                    return -EINVAL;
+                }
+            }
+            else
+            {
+                /* shm_id identified bank is already allocated */
+                const struct membank *end_bank =
+                    &shm_heap_banks.bank[shm_heap_banks.common.nr_banks];
+                paddr_t gbase_bank = gbase;
+
+                /*
+                 * Static shared memory banks that are taken from the Xen heap
+                 * are allocated sequentially in shm_heap_banks, so starting
+                 * from the first bank found identified by shm_id, the code can
+                 * just advance by one bank at the time until it reaches the end
+                 * of the array or it finds another bank NOT identified by
+                 * shm_id
+                 */
+                for ( ; alloc_bank < end_bank; alloc_bank++ )
+                {
+                    if ( strcmp(shm_id, alloc_bank->shmem_extra->shm_id) != 0 )
+                        break;
+
+                    ret = handle_shared_mem_bank(d, gbase_bank, role_str, true,
+                                                 alloc_bank);
+                    if ( ret )
+                        return ret;
+
+                    /* Increment guest physical address for next mapping */
+                    gbase_bank += alloc_bank->size;
+                }
+            }
+        }

         /*
          * Record static shared memory region info for later setting
-- 
2.34.1

From nobody Fri Nov 22 13:22:33 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1716364355042357.90018506712113; Wed, 22 May 2024 00:52:35 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.727269.1131754 (Exim 4.92) (envelope-from ) id 1s9glg-0007Pm-0l; Wed, 22 May 2024 07:52:12 +0000 Received: by outflank-mailman (output) from mailman id 727269.1131754; Wed, 22 May 2024 07:52:11 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s9glf-0007Pb-Tw; Wed, 22 May 2024 07:52:11 +0000 Received: by outflank-mailman (input) for mailman id 727269; Wed, 22 May 2024 07:52:10 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1s9gle-0005YY-5U for xen-devel@lists.xenproject.org; Wed, 22 May 2024 07:52:10 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 30c1425d-1810-11ef-90a0-e314d9c70b13; Wed, 22 May 2024 09:52:09 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com
(unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D252E1515; Wed, 22 May 2024 00:52:32 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B7E3B3F766; Wed, 22 May 2024 00:52:07 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 30c1425d-1810-11ef-90a0-e314d9c70b13 From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 7/7] xen/docs: Describe static shared memory when host address is not provided Date: Wed, 22 May 2024 08:51:51 +0100 Message-Id: <20240522075151.3373899-8-luca.fancellu@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240522075151.3373899-1-luca.fancellu@arm.com> References: <20240522075151.3373899-1-luca.fancellu@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1716364357297100001 Content-Type: text/plain; charset="utf-8"

From: Penny Zheng

This commit describes the new scenario where the host address is not
provided in the "xen,shared-mem" property, and a new example is added
to the page to explain it in detail. Take the occasion to fix some
typos in the page.

Signed-off-by: Penny Zheng
Signed-off-by: Luca Fancellu
Reviewed-by: Michal Orzel
---
v2:
 - Add Michal R-by
v1:
 - patch from https://patchwork.kernel.org/project/xen-devel/patch/20231206090623.1932275-10-Penny.Zheng@arm.com/
   with some changes in the commit message.
---
 docs/misc/arm/device-tree/booting.txt | 52 ++++++++++++++++++++-------
 1 file changed, 39 insertions(+), 13 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index bbd955e9c2f6..ac4bad6fe5e0 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -590,7 +590,7 @@ communication.
   An array takes a physical address, which is the base address of the
   shared memory region in host physical address space, a size, and a guest
   physical address, as the target address of the mapping.
-  e.g. xen,shared-mem = < [host physical address] [guest address] [size] >
+  e.g. xen,shared-mem = < [host physical address] [guest address] [size] >;

   It shall also meet the following criteria:
   1) If the SHM ID matches with an existing region, the address range of the
@@ -601,8 +601,8 @@ communication.
   The number of cells for the host address (and size) is the same as the
   guest pseudo-physical address and they are inherited from the parent node.

-  Host physical address is optional, when missing Xen decides the location
-  (currently unimplemented).
+  Host physical address is optional, when missing Xen decides the location.
+  e.g. xen,shared-mem = < [guest address] [size] >;

 - role (Optional)

@@ -629,7 +629,7 @@ chosen {
         role = "owner";
         xen,shm-id = "my-shared-mem-0";
         xen,shared-mem = <0x10000000 0x10000000 0x10000000>;
-    }
+    };

     domU1 {
         compatible = "xen,domain";
@@ -640,25 +640,36 @@ chosen {
         vpl011;

         /*
-         * shared memory region identified as 0x0(xen,shm-id = <0x0>)
-         * is shared between Dom0 and DomU1.
+         * shared memory region "my-shared-mem-0" is shared
+         * between Dom0 and DomU1.
          */
         domU1-shared-mem@10000000 {
             compatible = "xen,domain-shared-memory-v1";
             role = "borrower";
             xen,shm-id = "my-shared-mem-0";
             xen,shared-mem = <0x10000000 0x50000000 0x10000000>;
-        }
+        };

         /*
-         * shared memory region identified as 0x1(xen,shm-id = <0x1>)
-         * is shared between DomU1 and DomU2.
+         * shared memory region "my-shared-mem-1" is shared between
+         * DomU1 and DomU2.
          */
         domU1-shared-mem@50000000 {
             compatible = "xen,domain-shared-memory-v1";
             xen,shm-id = "my-shared-mem-1";
             xen,shared-mem = <0x50000000 0x60000000 0x20000000>;
-        }
+        };
+
+        /*
+         * shared memory region "my-shared-mem-2" is shared between
+         * DomU1 and DomU2.
+         */
+        domU1-shared-mem-2 {
+            compatible = "xen,domain-shared-memory-v1";
+            xen,shm-id = "my-shared-mem-2";
+            role = "owner";
+            xen,shared-mem = <0x80000000 0x20000000>;
+        };

         ......

@@ -672,14 +683,21 @@ chosen {
         cpus = <1>;

         /*
-         * shared memory region identified as 0x1(xen,shm-id = <0x1>)
-         * is shared between domU1 and domU2.
+         * shared memory region "my-shared-mem-1" is shared between
+         * domU1 and domU2.
          */
         domU2-shared-mem@50000000 {
             compatible = "xen,domain-shared-memory-v1";
             xen,shm-id = "my-shared-mem-1";
             xen,shared-mem = <0x50000000 0x70000000 0x20000000>;
-        }
+        };
+
+        domU2-shared-mem-2 {
+            compatible = "xen,domain-shared-memory-v1";
+            xen,shm-id = "my-shared-mem-2";
+            role = "borrower";
+            xen,shared-mem = <0x90000000 0x20000000>;
+        };

         ......
     };
@@ -699,3 +717,11 @@ shared between DomU1 and DomU2. It will get mapped at 0x60000000 in DomU1
 guest physical address space, and at 0x70000000 in DomU2 guest physical address
 space. DomU1 and DomU2 are both the borrower domain, the owner domain is the
 default owner domain DOMID_IO.
+
+For the static shared memory region "my-shared-mem-2", since host physical
+address is not provided by user, Xen will automatically allocate 512MB
+from heap as static shared memory to be shared between DomU1 and DomU2.
+The automatically allocated static shared memory will get mapped at
+0x80000000 in DomU1 guest physical address space, and at 0x90000000 in DomU2
+guest physical address space. DomU1 is explicitly defined as the owner domain,
+and DomU2 is the borrower domain.
-- 
2.34.1