From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 168775054708830.731062303432395; Sun, 25 Jun 2023 20:35:47 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554955.866426 (Exim 4.92) (envelope-from ) id 1qDd0T-0007iF-O8; Mon, 26 Jun 2023 03:35:13 +0000 Received: by outflank-mailman (output) from mailman id 554955.866426; Mon, 26 Jun 2023 03:35:13 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0T-0007hI-IP; Mon, 26 Jun 2023 03:35:13 +0000 Received: by outflank-mailman (input) for mailman id 554955; Mon, 26 Jun 2023 03:35:12 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0R-0007ej-VA for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:35:11 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 744454cd-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:35:11 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9511E2F4; Sun, 25 Jun 2023 20:35:54 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com 
(Postfix) with ESMTPA id E48BC3F64C; Sun, 25 Jun 2023 20:35:07 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 744454cd-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Wei Chen , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Julien Grall Subject: [PATCH v3 01/52] xen/arm: remove xen_phys_start and xenheap_phys_end from config.h Date: Mon, 26 Jun 2023 11:33:52 +0800 Message-Id: <20230626033443.2943270-2-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750548119100007 Content-Type: text/plain; charset="utf-8" From: Wei Chen These two variables are stale: they are only declared in config.h, have no definition, and are not used by any code. So in this patch, we remove them from config.h. Signed-off-by: Wei Chen Signed-off-by: Penny Zheng Acked-by: Julien Grall --- v1 -> v2: 1. Add Acked-by.
--- v3: - no changes --- xen/arch/arm/include/asm/config.h | 2 -- 1 file changed, 2 deletions(-) diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/c= onfig.h index c969e6da58..30f4665ba9 100644 --- a/xen/arch/arm/include/asm/config.h +++ b/xen/arch/arm/include/asm/config.h @@ -204,8 +204,6 @@ #define STACK_SIZE (PAGE_SIZE << STACK_ORDER) =20 #ifndef __ASSEMBLY__ -extern unsigned long xen_phys_start; -extern unsigned long xenheap_phys_end; extern unsigned long frametable_virt_end; #endif =20 --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750546527608.2489003782954; Sun, 25 Jun 2023 20:35:46 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554956.866441 (Exim 4.92) (envelope-from ) id 1qDd0W-000892-4R; Mon, 26 Jun 2023 03:35:16 +0000 Received: by outflank-mailman (output) from mailman id 554956.866441; Mon, 26 Jun 2023 03:35:16 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0W-00088t-1J; Mon, 26 Jun 2023 03:35:16 +0000 Received: by outflank-mailman (input) for mailman id 554956; Mon, 26 Jun 2023 03:35:14 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0U-0007ej-RN for 
xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:35:14 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 75ef70ef-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:35:14 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6EAF11FB; Sun, 25 Jun 2023 20:35:57 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 119273F64C; Sun, 25 Jun 2023 20:35:10 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 75ef70ef-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Wei Chen , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 02/52] xen/arm: make ARM_EFI selectable for Arm64 Date: Mon, 26 Jun 2023 11:33:53 +0800 Message-Id: <20230626033443.2943270-3-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750547841100005 Content-Type: text/plain; charset="utf-8" From: Wei Chen Currently, ARM_EFI is mandatorily selected by Arm64. Even if users know for sure that their images will not start in an EFI environment, they cannot disable EFI support for Arm64. This means there will be about 3K lines of unused code in their images.
So in this patch, we make ARM_EFI selectable for Arm64; based on that, we can use CONFIG_ARM_EFI to gate the EFI-specific code in head.S for those images that will not be booted in an EFI environment. Signed-off-by: Wei Chen Signed-off-by: Penny Zheng Reviewed-by: Julien Grall --- v1 -> v2: 1. New patch --- v3: 1. doc fix --- xen/arch/arm/Kconfig | 9 +++++++-- xen/arch/arm/arm64/head.S | 15 +++++++++++++-- 2 files changed, 20 insertions(+), 4 deletions(-) diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index 61e581b8c2..70fdc2ba63 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -7,7 +7,6 @@ config ARM_64 def_bool y depends on !ARM_32 select 64BIT - select ARM_EFI select HAS_FAST_MULTIPLY =20 config ARM @@ -70,7 +69,13 @@ config ACPI an alternative to device tree on ARM64. =20 config ARM_EFI - bool + bool "UEFI boot service support" + depends on ARM_64 + default y + help + This option provides support for boot services through + UEFI firmware. A UEFI stub is provided to allow Xen to + be booted as an EFI application. =20 config GICV3 bool "GICv3 driver" diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S index f37133cf7c..10a07db428 100644 --- a/xen/arch/arm/arm64/head.S +++ b/xen/arch/arm/arm64/head.S @@ -22,8 +22,11 @@ =20 #include #include + +#ifdef CONFIG_ARM_EFI #include #include +#endif =20 #define PT_PT 0xf7f /* nG=3D1 AF=3D1 SH=3D11 AP=3D01 NS=3D1 ATTR=3D111= T=3D1 P=3D1 */ #define PT_MEM 0xf7d /* nG=3D1 AF=3D1 SH=3D11 AP=3D01 NS=3D1 ATTR=3D111= T=3D0 P=3D1 */ @@ -172,8 +175,10 @@ efi_head: .byte 0x52 .byte 0x4d .byte 0x64 - .long pe_header - efi_head /* Offset to the PE header. */ - +#ifndef CONFIG_ARM_EFI + .long 0 /* 0 means no PE header. */ +#else + .long pe_header - efi_head /* Offset to the PE header. */ /* * Add the PE/COFF header to the file.
The address of this header * is at offset 0x3c in the file, and is part of Linux "Image" @@ -279,6 +284,8 @@ section_table: .short 0 /* NumberOfLineNumbers (0 for executable= s) */ .long 0xe0500020 /* Characteristics (section flags) */ .align 5 +#endif /* CONFIG_ARM_EFI */ + real_start: /* BSS should be zeroed when booting without EFI */ mov x26, #0 /* x26 :=3D skip_zero_bss */ @@ -917,6 +924,8 @@ putn: ret ENTRY(lookup_processor_type) mov x0, #0 ret + +#ifdef CONFIG_ARM_EFI /* * Function to transition from EFI loader in C, to Xen entry point. * void noreturn efi_xen_start(void *fdt_ptr, uint32_t fdt_size); @@ -975,6 +984,8 @@ ENTRY(efi_xen_start) b real_start_efi ENDPROC(efi_xen_start) =20 +#endif /* CONFIG_ARM_EFI */ + /* * Local variables: * mode: ASM --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750545115458.7078166045475; Sun, 25 Jun 2023 20:35:45 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554957.866451 (Exim 4.92) (envelope-from ) id 1qDd0Z-0008QZ-B8; Mon, 26 Jun 2023 03:35:19 +0000 Received: by outflank-mailman (output) from mailman id 554957.866451; Mon, 26 Jun 2023 03:35:19 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0Z-0008QS-83; Mon, 26 Jun 2023 03:35:19 +0000 Received: by outflank-mailman (input) for 
mailman id 554957; Mon, 26 Jun 2023 03:35:17 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0X-0007ej-N1 for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:35:17 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 779d4bbd-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:35:16 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4B93B1FB; Sun, 25 Jun 2023 20:36:00 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id DFA903F64C; Sun, 25 Jun 2023 20:35:13 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 779d4bbd-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Wei Chen , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 03/52] xen/arm: add an option to define Xen start address for Armv8-R Date: Mon, 26 Jun 2023 11:33:54 +0800 Message-Id: <20230626033443.2943270-4-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750546835100001 Content-Type: text/plain; charset="utf-8" From: Wei Chen On Armv8-A, Xen has a fixed virtual start address (link address too) for all Armv8-A platforms. 
In an MMU-based system, Xen can map its load address to this virtual start address. So, on Armv8-A platforms, the Xen start address does not need to be configurable. But on Armv8-R platforms, there is no MMU to map the load address to a fixed virtual address, and different platforms have very different address space layouts. So Xen cannot use a fixed physical address on MPU-based systems and needs it to be configurable. In this patch we introduce a Kconfig option for users to define the default Xen start address for Armv8-R. Users can enter the address at configuration time, or select the tailored platform config file from arch/arm/configs. And as we introduce Armv8-R to Xen, the existing Arm64 MMU-based platforms should not be listed in the Armv8-R platform list, so we add a !HAS_MPU dependency for these platforms. Signed-off-by: Wei Chen Signed-off-by: Penny Zheng --- v1 -> v2: 1. Remove the platform header fvp_baser.h. 2. Remove the default start address for fvp_baser64. 3. Remove the description of default address from commit log. 4. Change HAS_MPU to ARM_V8R for Xen start address dependency. Whether or not an Armv8-R board has an MPU, it always needs to specify the start address. --- v3: 1. Remove unrelated change of "CONFIG_FVP_BASER" 2. Change ARM_V8R to HAS_MPU for Xen start address dependency --- xen/arch/arm/Kconfig | 8 ++++++++ xen/arch/arm/platforms/Kconfig | 8 +++++--- 2 files changed, 13 insertions(+), 3 deletions(-) diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index 70fdc2ba63..ff17345cdb 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -181,6 +181,14 @@ config TEE This option enables generic TEE mediators support. It allows guests to access real TEE via one of TEE mediators implemented in XEN.
=20 +config XEN_START_ADDRESS + hex "Xen start address: keep default to use platform defined address" + default 0 + depends on HAS_MPU + help + This option allows to set the customized address at which Xen will be + linked on MPU systems. This address must be aligned to a page size. + source "arch/arm/tee/Kconfig" =20 config STATIC_SHM diff --git a/xen/arch/arm/platforms/Kconfig b/xen/arch/arm/platforms/Kconfig index c93a6b2756..75af48b5f9 100644 --- a/xen/arch/arm/platforms/Kconfig +++ b/xen/arch/arm/platforms/Kconfig @@ -1,6 +1,7 @@ choice prompt "Platform Support" default ALL_PLAT + default NO_PLAT if HAS_MPU ---help--- Choose which hardware platform to enable in Xen. =20 @@ -8,13 +9,14 @@ choice =20 config ALL_PLAT bool "All Platforms" + depends on !HAS_MPU ---help--- Enable support for all available hardware platforms. It doesn't automatically select any of the related drivers. =20 config QEMU bool "QEMU aarch virt machine support" - depends on ARM_64 + depends on ARM_64 && !HAS_MPU select GICV3 select HAS_PL011 ---help--- @@ -23,7 +25,7 @@ config QEMU =20 config RCAR3 bool "Renesas RCar3 support" - depends on ARM_64 + depends on ARM_64 && !HAS_MPU select HAS_SCIF select IPMMU_VMSA ---help--- @@ -31,7 +33,7 @@ config RCAR3 =20 config MPSOC bool "Xilinx Ultrascale+ MPSoC support" - depends on ARM_64 + depends on ARM_64 && !HAS_MPU select HAS_CADENCE_UART select ARM_SMMU ---help--- --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from 
lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750551385623.116780984926; Sun, 25 Jun 2023 20:35:51 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554961.866461 (Exim 4.92) (envelope-from ) id 1qDd0e-0000Lw-J0; Mon, 26 Jun 2023 03:35:24 +0000 Received: by outflank-mailman (output) from mailman id 554961.866461; Mon, 26 Jun 2023 03:35:24 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0e-0000Lp-G6; Mon, 26 Jun 2023 03:35:24 +0000 Received: by outflank-mailman (input) for mailman id 554961; Mon, 26 Jun 2023 03:35:23 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0d-0000HH-Fg for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:35:23 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 7963a96d-13d2-11ee-8611-37d641c3527e; Mon, 26 Jun 2023 05:35:19 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 388F41FB; Sun, 25 Jun 2023 20:36:03 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id C65593F64C; Sun, 25 Jun 2023 20:35:16 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 7963a96d-13d2-11ee-8611-37d641c3527e From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Wei Chen , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk 
, Penny Zheng Subject: [PATCH v3 04/52] xen/arm: add .text.idmap in ld script for Xen identity map sections Date: Mon, 26 Jun 2023 11:33:55 +0800 Message-Id: <20230626033443.2943270-5-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750553098100003 Content-Type: text/plain; charset="utf-8" From: Wei Chen Only the first 4KB of the Xen image will be identity-mapped (PA =3D=3D VA). At the moment, Xen guarantees this by placing everything that needs to be used in the identity mapping in the .text.header section of head.S, and _idmap_start and _idmap_end are checked at link time to ensure this fits in 4KB. In a later patch, we will move the MMU-specific code out of head.S. Although we could add .text.header to the new file to guarantee that all identity-map code stays in the first 4KB, the order of these two files within this 4KB depends on the build tools. Currently, we rely on the order of objects in the Makefile to ensure that head.S is at the top, but different build tools might not produce the same result. In this patch we introduce a new section named .text.idmap in the region between _idmap_start and _idmap_end, and in the Xen linker script we force the .text.idmap contents to be linked after .text.header. This ensures the code of head.S is always at the top of the Xen binary. Signed-off-by: Wei Chen Signed-off-by: Penny Zheng --- v1 -> v2: 1. New patch. --- v3: 1.
adapt to changes to "_end_boot" --- xen/arch/arm/xen.lds.S | 1 + 1 file changed, 1 insertion(+) diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S index be58c2c395..4f7daa7dca 100644 --- a/xen/arch/arm/xen.lds.S +++ b/xen/arch/arm/xen.lds.S @@ -34,6 +34,7 @@ SECTIONS _stext =3D .; /* Text section */ _idmap_start =3D .; *(.text.header) + *(.text.idmap) _idmap_end =3D .; =20 *(.text.cold) --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 168775054968315.630989909035975; Sun, 25 Jun 2023 20:35:49 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554962.866471 (Exim 4.92) (envelope-from ) id 1qDd0f-0000cV-Rk; Mon, 26 Jun 2023 03:35:25 +0000 Received: by outflank-mailman (output) from mailman id 554962.866471; Mon, 26 Jun 2023 03:35:25 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0f-0000cM-ON; Mon, 26 Jun 2023 03:35:25 +0000 Received: by outflank-mailman (input) for mailman id 554962; Mon, 26 Jun 2023 03:35:24 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0e-0007ej-74 for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:35:24 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by 
se1-gles-sth1.inumbo.com (Halon) with ESMTP id 7b18fd6d-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:35:22 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1B9401FB; Sun, 25 Jun 2023 20:36:06 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A98723F64C; Sun, 25 Jun 2023 20:35:19 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 7b18fd6d-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Wei Chen , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 05/52] xen/arm64: head: Introduce enable_boot_mmu and enable_runtime_mmu Date: Mon, 26 Jun 2023 11:33:56 +0800 Message-Id: <20230626033443.2943270-6-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750551128100001 Content-Type: text/plain; charset="utf-8" From: Wei Chen At the moment, on MMU systems, enable_mmu() returns to an address in the 1:1 mapping, and each path is then responsible for switching to the runtime virtual mapping, after which remove_identity_mapping() is called to remove the whole 1:1 mapping. Since remove_identity_mapping() is not necessary on non-MMU systems, and to avoid creating an empty function for non-MMU systems while keeping a single code flow in arm64/head.S, we move the mapping switch and the call to remove_identity_mapping() into enable_mmu() on MMU systems.
As remove_identity_mapping() should be called only for the boot CPU, we introduce enable_boot_mmu for the boot CPU and enable_runtime_mmu for secondary CPUs in this patch. Signed-off-by: Wei Chen Signed-off-by: Penny Zheng --- v3: - new patch --- xen/arch/arm/arm64/head.S | 87 +++++++++++++++++++++++++++++++-------- 1 file changed, 70 insertions(+), 17 deletions(-) diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S index 10a07db428..4dfbe0bc6f 100644 --- a/xen/arch/arm/arm64/head.S +++ b/xen/arch/arm/arm64/head.S @@ -314,21 +314,12 @@ real_start_efi: =20 bl check_cpu_mode bl cpu_init - bl create_page_tables - load_paddr x0, boot_pgtable - bl enable_mmu =20 /* We are still in the 1:1 mapping. Jump to the runtime Virtual Ad= dress. */ - ldr x0, =3Dprimary_switched - br x0 + ldr lr, =3Dprimary_switched + b enable_boot_mmu + primary_switched: - /* - * The 1:1 map may clash with other parts of the Xen virtual memory - * layout. As it is not used anymore, remove it completely to - * avoid having to worry about replacing existing mapping - * afterwards. - */ - bl remove_identity_mapping bl setup_fixmap #ifdef CONFIG_EARLY_PRINTK /* Use a virtual address to access the UART. */ @@ -373,13 +364,11 @@ GLOBAL(init_secondary) #endif bl check_cpu_mode bl cpu_init - load_paddr x0, init_ttbr - ldr x0, [x0] - bl enable_mmu =20 /* We are still in the 1:1 mapping. Jump to the runtime Virtual Ad= dress. */ - ldr x0, =3Dsecondary_switched - br x0 + ldr lr, =3Dsecondary_switched + b enable_runtime_mmu + secondary_switched: #ifdef CONFIG_EARLY_PRINTK /* Use a virtual address to access the UART. */ @@ -694,6 +683,70 @@ enable_mmu: ret ENDPROC(enable_mmu) =20 +/* + * Turn on the Data Cache and the MMU. The function will return + * to the virtual address provided in LR (e.g. the runtime mapping). + * + * Inputs: + * lr : Virtual address to return to.
+ * + * Clobbers x0 - x5 + */ +enable_runtime_mmu: + mov x5, lr + + load_paddr x0, init_ttbr + ldr x0, [x0] + + bl enable_mmu + mov lr, x5 + + /* return to secondary_switched */ + ret +ENDPROC(enable_runtime_mmu) + +/* + * Turn on the Data Cache and the MMU. The function will return + * to the virtual address provided in LR (e.g. the runtime mapping). + * + * Inputs: + * lr : Virtual address to return to. + * + * Clobbers x0 - x5 + */ +enable_boot_mmu: + mov x5, lr + + bl create_page_tables + load_paddr x0, boot_pgtable + + bl enable_mmu + mov lr, x5 + + /* + * The MMU is turned on and we are in the 1:1 mapping. Switch + * to the runtime mapping. + */ + ldr x0, =3D1f + br x0 +1: + /* + * The 1:1 map may clash with other parts of the Xen virtual memory + * layout. As it is not used anymore, remove it completely to + * avoid having to worry about replacing existing mapping + * afterwards. Function will return to primary_switched. + */ + b remove_identity_mapping + + /* + * Here might not be reached, as "ret" in remove_identity_mapping + * will use the return address in LR in advance. But keep ret here + * might be more safe if "ret" in remove_identity_mapping is remov= ed + * in future. + */ + ret +ENDPROC(enable_boot_mmu) + /* * Remove the 1:1 map from the page-tables. 
It is not easy to keep track * where the 1:1 map was mapped, so we will look for the top-level entry --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750836926938.4976702285325; Sun, 25 Jun 2023 20:40:36 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555042.866670 (Exim 4.92) (envelope-from ) id 1qDd57-0001UO-3D; Mon, 26 Jun 2023 03:40:01 +0000 Received: by outflank-mailman (output) from mailman id 555042.866670; Mon, 26 Jun 2023 03:40:00 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd56-0001P7-7W; Mon, 26 Jun 2023 03:40:00 +0000 Received: by outflank-mailman (input) for mailman id 555042; Mon, 26 Jun 2023 03:39:58 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0i-0007ej-Ad for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:35:28 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 7cebead3-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:35:25 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3DF042F4; Sun, 25 Jun 2023 20:36:09 -0700 (PDT) 
Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8C27C3F64C; Sun, 25 Jun 2023 20:35:22 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 7cebead3-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 06/52] xen/arm: introduce CONFIG_HAS_MMU Date: Mon, 26 Jun 2023 11:33:57 +0800 Message-Id: <20230626033443.2943270-7-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750838065100001 Content-Type: text/plain; charset="utf-8" This commit introduces a new Kconfig option, CONFIG_HAS_MMU, to guard MMU-related code and to distinguish between the two different memory management architectures: VMSA and PMSA. In a VMSA system, a Memory Management Unit (MMU) provides fine-grained control of a memory system through a set of virtual to physical address mappings and associated memory properties held in memory-mapped tables known as translation tables.
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new patch --- xen/arch/arm/Kconfig | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index ff17345cdb..fb77392b82 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -60,6 +60,14 @@ config PADDR_BITS =20 source "arch/Kconfig" =20 +config HAS_MMU + bool "Memory Management Unit support in a VMSA system" + default y + help + In a VMSA system, a Memory Management Unit (MMU) provides fine-grained = control of + a memory system through a set of virtual to physical address mappings a= nd associated memory + properties held in memory-mapped tables known as translation tables. + config ACPI bool "ACPI (Advanced Configuration and Power Interface) Support (UNSUPPOR= TED)" if UNSUPPORTED depends on ARM_64 --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750560271784.5714038835941; Sun, 25 Jun 2023 20:36:00 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554967.866481 (Exim 4.92) (envelope-from ) id 1qDd0n-0001GH-89; Mon, 26 Jun 2023 03:35:33 +0000 Received: by outflank-mailman (output) from mailman id 554967.866481; Mon, 26 Jun 2023 03:35:33 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd0n-0001Fx-1r; Mon, 26 Jun 
2023 03:35:33 +0000

From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Wei Chen, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v3 07/52] xen/arm64: prepare for moving MMU related code from head.S
Date: Mon, 26 Jun 2023 11:33:58 +0800
Message-Id: <20230626033443.2943270-8-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

From: Wei Chen

We want to reuse head.S for MPU systems, but some of the code is implemented for
MMU systems only. We will move such code to a separate MMU-specific file. But
before that, this patch fixes some indentation to make the code easier to
review:
1. Fix the indentation of code comments.
2. Fix the indentation of the .text.header section.
3. Rename puts() to asm_puts() for global export.

Signed-off-by: Wei Chen
Signed-off-by: Penny Zheng
---
v1 -> v2:
1. New patch.
---
v3:
1. Fix commit message.
2. Rename puts() to asm_puts() for global export.
---
 xen/arch/arm/arm64/head.S | 42 +++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 4dfbe0bc6f..66347eedcc 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -94,7 +94,7 @@
 #define PRINT(_s)       \
         mov   x3, lr ;  \
         adr   x0, 98f ; \
-        bl    puts    ; \
+        bl    asm_puts ; \
         mov   lr, x3 ;  \
         RODATA_STR(98, _s)

@@ -136,22 +136,22 @@
         add   \xb, \xb, x20
 .endm

-        .section .text.header, "ax", %progbits
-        /*.aarch64*/
+.section .text.header, "ax", %progbits
+/*.aarch64*/

-        /*
-         * Kernel startup entry point.
-         * ---------------------------
-         *
-         * The requirements are:
-         *   MMU = off, D-cache = off, I-cache = on or off,
-         *   x0 = physical address to the FDT blob.
-         *
-         * This must be the very first address in the loaded image.
-         * It should be linked at XEN_VIRT_START, and loaded at any
-         * 4K-aligned address. All of text+data+bss must fit in 2MB,
-         * or the initial pagetable code below will need adjustment.
-         */
+/*
+ * Kernel startup entry point.
+ * ---------------------------
+ *
+ * The requirements are:
+ *   MMU = off, D-cache = off, I-cache = on or off,
+ *   x0 = physical address to the FDT blob.
+ *
+ * This must be the very first address in the loaded image.
+ * It should be linked at XEN_VIRT_START, and loaded at any
+ * 4K-aligned address. All of text+data+bss must fit in 2MB,
+ * or the initial pagetable code below will need adjustment.
+ */

 GLOBAL(start)
         /*
@@ -520,7 +520,7 @@ ENDPROC(cpu_init)
  * Macro to create a mapping entry in \tbl to \phys. Only mapping in 3rd
  * level table (i.e page granularity) is supported.
  *
- * ptbl:  table symbol where the entry will be created
+ * ptbl:    table symbol where the entry will be created
  * virt:    virtual address
  * phys:    physical address (should be page aligned)
  * tmp1:    scratch register
@@ -928,15 +928,15 @@ ENDPROC(init_uart)
  * x0: Nul-terminated string to print.
  * x23: Early UART base address
  * Clobbers x0-x1
  */
-puts:
+ENTRY(asm_puts)
         early_uart_ready x23, 1
         ldrb  w1, [x0], #1           /* Load next char */
         cbz   w1, 1f                 /* Exit on nul */
         early_uart_transmit x23, w1
-        b     puts
+        b     asm_puts
1:      ret
-ENDPROC(puts)
+ENDPROC(asm_puts)

 /*
  * Print a 64-bit number in hex.
@@ -966,7 +966,7 @@ hex:    .ascii "0123456789abcdef"

 ENTRY(early_puts)
 init_uart:
-puts:
+asm_puts:
 putn:   ret

 #endif /* !CONFIG_EARLY_PRINTK */
--
2.25.1

From nobody Sat May 11 10:54:28 2024
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
by lists.xenproject.org with esmtp (Exim 4.92) id 1qDd0q-0001nU-JP; Mon, 26 Jun 2023 03:35:36 +0000

From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Wei Chen, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v3 08/52] xen/arm64: move MMU related code from head.S to mmu/head.S
Date: Mon, 26 Jun 2023 11:33:59 +0800
Message-Id: <20230626033443.2943270-9-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

From:
Wei Chen

There is a lot of MMU-specific code in head.S. This code will not be used
on MPU systems. If we used #ifdef to gate it, the code would become messy
and hard to maintain. So we move the MMU-related code to mmu/head.S and keep
the common code in head.S. We also add a .text.idmap section in mmu/head.S
so that all code in this new file stays in the identity-mapped page but is
linked after head.S.

As "fail" in head.S is very simple and its name is prone to conflicts,
duplicate it in mmu/head.S instead of exporting it. Assembly macros that
will later be shared by MMU and MPU code are moved to macros.h.

Rename enable_boot_mmu()/enable_runtime_mmu() to the more generic
enable_boot_mm()/enable_runtime_mm(), so they can serve as common
interfaces for both MMU and, later, MPU systems.

Signed-off-by: Wei Chen
Signed-off-by: Penny Zheng
---
v1 -> v2:
1. Move macros to macros.h
2. Remove the indentation modification
3. Duplicate "fail" instead of exporting it.
---
v3:
- Rename enable_boot_mmu()/enable_runtime_mmu() to the more generic
  enable_boot_mm()/enable_runtime_mm()
---
 xen/arch/arm/arm64/Makefile             |   3 +
 xen/arch/arm/arm64/head.S               | 469 +-----------------------
 xen/arch/arm/arm64/mmu/head.S           | 453 +++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/macros.h |  51 +++
 4 files changed, 509 insertions(+), 467 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mmu/head.S

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 54ad55c75c..0c4b177be9 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -8,6 +8,9 @@ obj-y += domctl.o
 obj-y += domain.o
 obj-y += entry.o
 obj-y += head.o
+ifeq ($(CONFIG_HAS_MMU),y)
+obj-y += mmu/head.o
+endif
 obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mm.o
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 66347eedcc..e63886b037 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -28,17 +28,6
@@
 #include
 #endif

-#define PT_PT     0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
-#define PT_MEM    0xf7d /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=0 P=1 */
-#define PT_MEM_L3 0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
-#define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
-#define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
-
-/* Convenience defines to get slot used by Xen mapping. */
-#define XEN_ZEROETH_SLOT zeroeth_table_offset(XEN_VIRT_START)
-#define XEN_FIRST_SLOT   first_table_offset(XEN_VIRT_START)
-#define XEN_SECOND_SLOT  second_table_offset(XEN_VIRT_START)
-
 #define __HEAD_FLAG_PAGE_SIZE ((PAGE_SHIFT - 10) / 2)

 #define __HEAD_FLAG_PHYS_BASE 1
@@ -85,57 +74,6 @@
  * x30 - lr
  */

-#ifdef CONFIG_EARLY_PRINTK
-/*
- * Macro to print a string to the UART, if there is one.
- *
- * Clobbers x0 - x3
- */
-#define PRINT(_s)       \
-        mov   x3, lr ;  \
-        adr   x0, 98f ; \
-        bl    asm_puts ; \
-        mov   lr, x3 ;  \
-        RODATA_STR(98, _s)
-
-/*
- * Macro to print the value of register \xb
- *
- * Clobbers x0 - x4
- */
-.macro print_reg xb
-        mov   x0, \xb
-        mov   x4, lr
-        bl    putn
-        mov   lr, x4
-.endm
-
-#else /* CONFIG_EARLY_PRINTK */
-#define PRINT(s)
-
-.macro print_reg xb
-.endm
-
-#endif /* !CONFIG_EARLY_PRINTK */
-
-/*
- * Pseudo-op for PC relative adr <reg>, <symbol> where <symbol> is
- * within the range +/- 4GB of the PC.
- *
- * @dst: destination register (64 bit wide)
- * @sym: name of the symbol
- */
-.macro adr_l, dst, sym
-        adrp  \dst, \sym
-        add   \dst, \dst, :lo12:\sym
-.endm
-
-/* Load the physical address of a symbol into xb */
-.macro load_paddr xb, sym
-        ldr   \xb, =\sym
-        add   \xb, \xb, x20
-.endm
-
 .section .text.header, "ax", %progbits
 /*.aarch64*/

@@ -317,7 +255,7 @@ real_start_efi:

         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
         ldr   lr, =primary_switched
-        b     enable_boot_mmu
+        b     enable_boot_mm

 primary_switched:
         bl    setup_fixmap
@@ -367,7 +305,7 @@ GLOBAL(init_secondary)

         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
         ldr   lr, =secondary_switched
-        b     enable_runtime_mmu
+        b     enable_runtime_mm

 secondary_switched:
 #ifdef CONFIG_EARLY_PRINTK
@@ -475,364 +413,6 @@ cpu_init:
         ret
 ENDPROC(cpu_init)

-/*
- * Macro to find the slot number at a given page-table level
- *
- * slot:     slot computed
- * virt:     virtual address
- * lvl:      page-table level
- */
-.macro get_table_slot, slot, virt, lvl
-        ubfx  \slot, \virt, #XEN_PT_LEVEL_SHIFT(\lvl), #XEN_PT_LPAE_SHIFT
-.endm
-
-/*
- * Macro to create a page table entry in \ptbl to \tbl
- *
- * ptbl:    table symbol where the entry will be created
- * tbl:     table symbol to point to
- * virt:    virtual address
- * lvl:     page-table level
- * tmp1:    scratch register
- * tmp2:    scratch register
- * tmp3:    scratch register
- *
- * Preserves \virt
- * Clobbers \tmp1, \tmp2, \tmp3
- *
- * Also use x20 for the phys offset.
- *
- * Note that all parameters using registers should be distinct.
- */
-.macro create_table_entry, ptbl, tbl, virt, lvl, tmp1, tmp2, tmp3
-        get_table_slot \tmp1, \virt, \lvl  /* \tmp1 := slot in \tlb */
-
-        load_paddr \tmp2, \tbl
-        mov   \tmp3, #PT_PT                /* \tmp3 := right for linear PT */
-        orr   \tmp3, \tmp3, \tmp2          /*          + \tlb paddr */
-
-        adr_l \tmp2, \ptbl
-
-        str   \tmp3, [\tmp2, \tmp1, lsl #3]
-.endm
-
-/*
- * Macro to create a mapping entry in \tbl to \phys. Only mapping in 3rd
- * level table (i.e page granularity) is supported.
- *
- * ptbl:    table symbol where the entry will be created
- * virt:    virtual address
- * phys:    physical address (should be page aligned)
- * tmp1:    scratch register
- * tmp2:    scratch register
- * tmp3:    scratch register
- * type:    mapping type.
If not specified it will be normal memory (PT_ME= M_L3) - * - * Preserves \virt, \phys - * Clobbers \tmp1, \tmp2, \tmp3 - * - * Note that all parameters using registers should be distinct. - */ -.macro create_mapping_entry, ptbl, virt, phys, tmp1, tmp2, tmp3, type=3DPT= _MEM_L3 - and \tmp3, \phys, #THIRD_MASK /* \tmp3 :=3D PAGE_ALIGNED(phy= s) */ - - get_table_slot \tmp1, \virt, 3 /* \tmp1 :=3D slot in \tlb */ - - mov \tmp2, #\type /* \tmp2 :=3D right for sectio= n PT */ - orr \tmp2, \tmp2, \tmp3 /* + PAGE_ALIGNED(phy= s) */ - - adr_l \tmp3, \ptbl - - str \tmp2, [\tmp3, \tmp1, lsl #3] -.endm - -/* - * Rebuild the boot pagetable's first-level entries. The structure - * is described in mm.c. - * - * After the CPU enables paging it will add the fixmap mapping - * to these page tables, however this may clash with the 1:1 - * mapping. So each CPU must rebuild the page tables here with - * the 1:1 in place. - * - * Inputs: - * x19: paddr(start) - * x20: phys offset - * - * Clobbers x0 - x4 - */ -create_page_tables: - /* Prepare the page-tables for mapping Xen */ - ldr x0, =3DXEN_VIRT_START - create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3 - create_table_entry boot_first, boot_second, x0, 1, x1, x2, x3 - create_table_entry boot_second, boot_third, x0, 2, x1, x2, x3 - - /* Map Xen */ - adr_l x4, boot_third - - lsr x2, x19, #THIRD_SHIFT /* Base address for 4K mapping */ - lsl x2, x2, #THIRD_SHIFT - mov x3, #PT_MEM_L3 /* x2 :=3D Section map */ - orr x2, x2, x3 - - /* ... map of vaddr(start) in boot_third */ - mov x1, xzr -1: str x2, [x4, x1] /* Map vaddr(start) */ - add x2, x2, #PAGE_SIZE /* Next page */ - add x1, x1, #8 /* Next slot */ - cmp x1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512 entries per page */ - b.lt 1b - - /* - * If Xen is loaded at exactly XEN_VIRT_START then we don't - * need an additional 1:1 mapping, the virtual mapping will - * suffice. 
- */ - ldr x0, =3DXEN_VIRT_START - cmp x19, x0 - bne 1f - ret -1: - /* - * Setup the 1:1 mapping so we can turn the MMU on. Note that - * only the first page of Xen will be part of the 1:1 mapping. - */ - - /* - * Find the zeroeth slot used. If the slot is not - * XEN_ZEROETH_SLOT, then the 1:1 mapping will use its own set of - * page-tables from the first level. - */ - get_table_slot x0, x19, 0 /* x0 :=3D zeroeth slot */ - cmp x0, #XEN_ZEROETH_SLOT - beq 1f - create_table_entry boot_pgtable, boot_first_id, x19, 0, x0, x1, x2 - b link_from_first_id - -1: - /* - * Find the first slot used. If the slot is not XEN_FIRST_SLOT, - * then the 1:1 mapping will use its own set of page-tables from - * the second level. - */ - get_table_slot x0, x19, 1 /* x0 :=3D first slot */ - cmp x0, #XEN_FIRST_SLOT - beq 1f - create_table_entry boot_first, boot_second_id, x19, 1, x0, x1, x2 - b link_from_second_id - -1: - /* - * Find the second slot used. If the slot is XEN_SECOND_SLOT, then= the - * 1:1 mapping will use its own set of page-tables from the - * third level. For slot XEN_SECOND_SLOT, Xen is not yet able to h= andle - * it. - */ - get_table_slot x0, x19, 2 /* x0 :=3D second slot */ - cmp x0, #XEN_SECOND_SLOT - beq virtphys_clash - create_table_entry boot_second, boot_third_id, x19, 2, x0, x1, x2 - b link_from_third_id - -link_from_first_id: - create_table_entry boot_first_id, boot_second_id, x19, 1, x0, x1, = x2 -link_from_second_id: - create_table_entry boot_second_id, boot_third_id, x19, 2, x0, x1, = x2 -link_from_third_id: - create_mapping_entry boot_third_id, x19, x19, x0, x1, x2 - ret - -virtphys_clash: - /* Identity map clashes with boot_third, which we cannot handle ye= t */ - PRINT("- Unable to build boot page tables - virt and phys addresse= s clash. -\r\n") - b fail -ENDPROC(create_page_tables) - -/* - * Turn on the Data Cache and the MMU. The function will return on the 1:1 - * mapping. 
In other word, the caller is responsible to switch to the runt= ime - * mapping. - * - * Inputs: - * x0 : Physical address of the page tables. - * - * Clobbers x0 - x4 - */ -enable_mmu: - mov x4, x0 - PRINT("- Turning on paging -\r\n") - - /* - * The state of the TLBs is unknown before turning on the MMU. - * Flush them to avoid stale one. - */ - tlbi alle2 /* Flush hypervisor TLBs */ - dsb nsh - - /* Write Xen's PT's paddr into TTBR0_EL2 */ - msr TTBR0_EL2, x4 - isb - - mrs x0, SCTLR_EL2 - orr x0, x0, #SCTLR_Axx_ELx_M /* Enable MMU */ - orr x0, x0, #SCTLR_Axx_ELx_C /* Enable D-cache */ - dsb sy /* Flush PTE writes and finish reads = */ - msr SCTLR_EL2, x0 /* now paging is enabled */ - isb /* Now, flush the icache */ - ret -ENDPROC(enable_mmu) - -/* - * Turn on the Data Cache and the MMU. The function will return - * to the virtual address provided in LR (e.g. the runtime mapping). - * - * Inputs: - * lr : Virtual address to return to. - * - * Clobbers x0 - x5 - */ -enable_runtime_mmu: - mov x5, lr - - load_paddr x0, init_ttbr - ldr x0, [x0] - - bl enable_mmu - mov lr, x5 - - /* return to secondary_switched */ - ret -ENDPROC(enable_runtime_mmu) - -/* - * Turn on the Data Cache and the MMU. The function will return - * to the virtual address provided in LR (e.g. the runtime mapping). - * - * Inputs: - * lr : Virtual address to return to. - * - * Clobbers x0 - x5 - */ -enable_boot_mmu: - mov x5, lr - - bl create_page_tables - load_paddr x0, boot_pgtable - - bl enable_mmu - mov lr, x5 - - /* - * The MMU is turned on and we are in the 1:1 mapping. Switch - * to the runtime mapping. - */ - ldr x0, =3D1f - br x0 -1: - /* - * The 1:1 map may clash with other parts of the Xen virtual memory - * layout. As it is not used anymore, remove it completely to - * avoid having to worry about replacing existing mapping - * afterwards. Function will return to primary_switched. 
- */ - b remove_identity_mapping - - /* - * Here might not be reached, as "ret" in remove_identity_mapping - * will use the return address in LR in advance. But keep ret here - * might be more safe if "ret" in remove_identity_mapping is remov= ed - * in future. - */ - ret -ENDPROC(enable_boot_mmu) - -/* - * Remove the 1:1 map from the page-tables. It is not easy to keep track - * where the 1:1 map was mapped, so we will look for the top-level entry - * exclusive to the 1:1 map and remove it. - * - * Inputs: - * x19: paddr(start) - * - * Clobbers x0 - x1 - */ -remove_identity_mapping: - /* - * Find the zeroeth slot used. Remove the entry from zeroeth - * table if the slot is not XEN_ZEROETH_SLOT. - */ - get_table_slot x1, x19, 0 /* x1 :=3D zeroeth slot */ - cmp x1, #XEN_ZEROETH_SLOT - beq 1f - /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */ - ldr x0, =3Dboot_pgtable /* x0 :=3D root table */ - str xzr, [x0, x1, lsl #3] - b identity_mapping_removed - -1: - /* - * Find the first slot used. Remove the entry for the first - * table if the slot is not XEN_FIRST_SLOT. - */ - get_table_slot x1, x19, 1 /* x1 :=3D first slot */ - cmp x1, #XEN_FIRST_SLOT - beq 1f - /* It is not in slot XEN_FIRST_SLOT, remove the entry. */ - ldr x0, =3Dboot_first /* x0 :=3D first table */ - str xzr, [x0, x1, lsl #3] - b identity_mapping_removed - -1: - /* - * Find the second slot used. Remove the entry for the first - * table if the slot is not XEN_SECOND_SLOT. - */ - get_table_slot x1, x19, 2 /* x1 :=3D second slot */ - cmp x1, #XEN_SECOND_SLOT - beq identity_mapping_removed - /* It is not in slot 1, remove the entry */ - ldr x0, =3Dboot_second /* x0 :=3D second table */ - str xzr, [x0, x1, lsl #3] - -identity_mapping_removed: - /* See asm/arm64/flushtlb.h for the explanation of the sequence. 
*/ - dsb nshst - tlbi alle2 - dsb nsh - isb - - ret -ENDPROC(remove_identity_mapping) - -/* - * Map the UART in the fixmap (when earlyprintk is used) and hook the - * fixmap table in the page tables. - * - * The fixmap cannot be mapped in create_page_tables because it may - * clash with the 1:1 mapping. - * - * Inputs: - * x20: Physical offset - * x23: Early UART base physical address - * - * Clobbers x0 - x3 - */ -setup_fixmap: -#ifdef CONFIG_EARLY_PRINTK - /* Add UART to the fixmap table */ - ldr x0, =3DEARLY_UART_VIRTUAL_ADDRESS - create_mapping_entry xen_fixmap, x0, x23, x1, x2, x3, type=3DPT_DE= V_L3 -#endif - /* Map fixmap into boot_second */ - ldr x0, =3DFIXMAP_ADDR(0) - create_table_entry boot_second, xen_fixmap, x0, 2, x1, x2, x3 - /* Ensure any page table updates made above have occurred. */ - dsb nshst - - ret -ENDPROC(setup_fixmap) - /* * Setup the initial stack and jump to the C world * @@ -861,51 +441,6 @@ fail: PRINT("- Boot failed -\r\n") b 1b ENDPROC(fail) =20 -/* - * Switch TTBR - * - * x0 ttbr - */ -ENTRY(switch_ttbr_id) - /* 1) Ensure any previous read/write have completed */ - dsb ish - isb - - /* 2) Turn off MMU */ - mrs x1, SCTLR_EL2 - bic x1, x1, #SCTLR_Axx_ELx_M - msr SCTLR_EL2, x1 - isb - - /* - * 3) Flush the TLBs. - * See asm/arm64/flushtlb.h for the explanation of the sequence. - */ - dsb nshst - tlbi alle2 - dsb nsh - isb - - /* 4) Update the TTBR */ - msr TTBR0_EL2, x0 - isb - - /* - * 5) Flush I-cache - * This should not be necessary but it is kept for safety. - */ - ic iallu - isb - - /* 6) Turn on the MMU */ - mrs x1, SCTLR_EL2 - orr x1, x1, #SCTLR_Axx_ELx_M /* Enable MMU */ - msr SCTLR_EL2, x1 - isb - - ret -ENDPROC(switch_ttbr_id) - #ifdef CONFIG_EARLY_PRINTK /* * Initialize the UART. Should only be called on the boot CPU. 
diff --git a/xen/arch/arm/arm64/mmu/head.S b/xen/arch/arm/arm64/mmu/head.S new file mode 100644 index 0000000000..2b209fc3ce --- /dev/null +++ b/xen/arch/arm/arm64/mmu/head.S @@ -0,0 +1,453 @@ +/* + * xen/arch/arm/mmu/head.S + * + * Start-of-day code for an ARMv8. + * + * Ian Campbell + * Copyright (c) 2012 Citrix Systems. + * + * Based on ARMv7-A head.S by + * Tim Deegan + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#include +#include + +#define PT_PT 0xf7f /* nG=3D1 AF=3D1 SH=3D11 AP=3D01 NS=3D1 ATTR=3D111= T=3D1 P=3D1 */ +#define PT_MEM 0xf7d /* nG=3D1 AF=3D1 SH=3D11 AP=3D01 NS=3D1 ATTR=3D111= T=3D0 P=3D1 */ +#define PT_MEM_L3 0xf7f /* nG=3D1 AF=3D1 SH=3D11 AP=3D01 NS=3D1 ATTR=3D111= T=3D1 P=3D1 */ +#define PT_DEV 0xe71 /* nG=3D1 AF=3D1 SH=3D10 AP=3D01 NS=3D1 ATTR=3D100= T=3D0 P=3D1 */ +#define PT_DEV_L3 0xe73 /* nG=3D1 AF=3D1 SH=3D10 AP=3D01 NS=3D1 ATTR=3D100= T=3D1 P=3D1 */ + +/* Convenience defines to get slot used by Xen mapping. 
*/ +#define XEN_ZEROETH_SLOT zeroeth_table_offset(XEN_VIRT_START) +#define XEN_FIRST_SLOT first_table_offset(XEN_VIRT_START) +#define XEN_SECOND_SLOT second_table_offset(XEN_VIRT_START) + +/* + * Macro to find the slot number at a given page-table level + * + * slot: slot computed + * virt: virtual address + * lvl: page-table level + */ +.macro get_table_slot, slot, virt, lvl + ubfx \slot, \virt, #XEN_PT_LEVEL_SHIFT(\lvl), #XEN_PT_LPAE_SHIFT +.endm + +/* + * Macro to create a page table entry in \ptbl to \tbl + * + * ptbl: table symbol where the entry will be created + * tbl: table symbol to point to + * virt: virtual address + * lvl: page-table level + * tmp1: scratch register + * tmp2: scratch register + * tmp3: scratch register + * + * Preserves \virt + * Clobbers \tmp1, \tmp2, \tmp3 + * + * Also use x20 for the phys offset. + * + * Note that all parameters using registers should be distinct. + */ +.macro create_table_entry, ptbl, tbl, virt, lvl, tmp1, tmp2, tmp3 + get_table_slot \tmp1, \virt, \lvl /* \tmp1 :=3D slot in \tlb */ + + load_paddr \tmp2, \tbl + mov \tmp3, #PT_PT /* \tmp3 :=3D right for linear= PT */ + orr \tmp3, \tmp3, \tmp2 /* + \tlb paddr */ + + adr_l \tmp2, \ptbl + + str \tmp3, [\tmp2, \tmp1, lsl #3] +.endm + +/* + * Macro to create a mapping entry in \tbl to \phys. Only mapping in 3rd + * level table (i.e page granularity) is supported. + * + * ptbl: table symbol where the entry will be created + * virt: virtual address + * phys: physical address (should be page aligned) + * tmp1: scratch register + * tmp2: scratch register + * tmp3: scratch register + * type: mapping type. If not specified it will be normal memory (PT_ME= M_L3) + * + * Preserves \virt, \phys + * Clobbers \tmp1, \tmp2, \tmp3 + * + * Note that all parameters using registers should be distinct. 
+ */ +.macro create_mapping_entry, ptbl, virt, phys, tmp1, tmp2, tmp3, type=3DPT= _MEM_L3 + and \tmp3, \phys, #THIRD_MASK /* \tmp3 :=3D PAGE_ALIGNED(phy= s) */ + + get_table_slot \tmp1, \virt, 3 /* \tmp1 :=3D slot in \tlb */ + + mov \tmp2, #\type /* \tmp2 :=3D right for sectio= n PT */ + orr \tmp2, \tmp2, \tmp3 /* + PAGE_ALIGNED(phy= s) */ + + adr_l \tmp3, \ptbl + + str \tmp2, [\tmp3, \tmp1, lsl #3] +.endm + +.section .text.idmap, "ax", %progbits + +/* + * Rebuild the boot pagetable's first-level entries. The structure + * is described in mm.c. + * + * After the CPU enables paging it will add the fixmap mapping + * to these page tables, however this may clash with the 1:1 + * mapping. So each CPU must rebuild the page tables here with + * the 1:1 in place. + * + * Inputs: + * x19: paddr(start) + * x20: phys offset + * + * Clobbers x0 - x4 + */ +create_page_tables: + /* Prepare the page-tables for mapping Xen */ + ldr x0, =3DXEN_VIRT_START + create_table_entry boot_pgtable, boot_first, x0, 0, x1, x2, x3 + create_table_entry boot_first, boot_second, x0, 1, x1, x2, x3 + create_table_entry boot_second, boot_third, x0, 2, x1, x2, x3 + + /* Map Xen */ + adr_l x4, boot_third + + lsr x2, x19, #THIRD_SHIFT /* Base address for 4K mapping */ + lsl x2, x2, #THIRD_SHIFT + mov x3, #PT_MEM_L3 /* x2 :=3D Section map */ + orr x2, x2, x3 + + /* ... map of vaddr(start) in boot_third */ + mov x1, xzr +1: str x2, [x4, x1] /* Map vaddr(start) */ + add x2, x2, #PAGE_SIZE /* Next page */ + add x1, x1, #8 /* Next slot */ + cmp x1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512 entries per page */ + b.lt 1b + + /* + * If Xen is loaded at exactly XEN_VIRT_START then we don't + * need an additional 1:1 mapping, the virtual mapping will + * suffice. + */ + ldr x0, =3DXEN_VIRT_START + cmp x19, x0 + bne 1f + ret +1: + /* + * Setup the 1:1 mapping so we can turn the MMU on. Note that + * only the first page of Xen will be part of the 1:1 mapping. + */ + + /* + * Find the zeroeth slot used. 
If the slot is not
+ * XEN_ZEROETH_SLOT, then the 1:1 mapping will use its own set of
+ * page-tables from the first level.
+ */
+        get_table_slot x0, x19, 0       /* x0 := zeroeth slot */
+        cmp   x0, #XEN_ZEROETH_SLOT
+        beq   1f
+        create_table_entry boot_pgtable, boot_first_id, x19, 0, x0, x1, x2
+        b     link_from_first_id
+
+1:
+        /*
+         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
+         * then the 1:1 mapping will use its own set of page-tables from
+         * the second level.
+         */
+        get_table_slot x0, x19, 1       /* x0 := first slot */
+        cmp   x0, #XEN_FIRST_SLOT
+        beq   1f
+        create_table_entry boot_first, boot_second_id, x19, 1, x0, x1, x2
+        b     link_from_second_id
+
+1:
+        /*
+         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
+         * 1:1 mapping will use its own set of page-tables from the third
+         * level. Xen is not yet able to handle slot XEN_SECOND_SLOT itself.
+         */
+        get_table_slot x0, x19, 2       /* x0 := second slot */
+        cmp   x0, #XEN_SECOND_SLOT
+        beq   virtphys_clash
+        create_table_entry boot_second, boot_third_id, x19, 2, x0, x1, x2
+        b     link_from_third_id
+
+link_from_first_id:
+        create_table_entry boot_first_id, boot_second_id, x19, 1, x0, x1, x2
+link_from_second_id:
+        create_table_entry boot_second_id, boot_third_id, x19, 2, x0, x1, x2
+link_from_third_id:
+        create_mapping_entry boot_third_id, x19, x19, x0, x1, x2
+        ret
+
+virtphys_clash:
+        /* Identity map clashes with boot_third, which we cannot handle yet */
+        PRINT("- Unable to build boot page tables - virt and phys addresses clash. -\r\n")
+        b     fail
+ENDPROC(create_page_tables)
+
+/*
+ * Turn on the Data Cache and the MMU. The function will return on the 1:1
+ * mapping. In other words, the caller is responsible for switching to the
+ * runtime mapping.
+ *
+ * Inputs:
+ *   x0 : Physical address of the page tables.
+ *
+ * Clobbers x0 - x4
+ */
+enable_mmu:
+        mov   x4, x0
+        PRINT("- Turning on paging -\r\n")
+
+        /*
+         * The state of the TLBs is unknown before turning on the MMU.
+         * Flush them to avoid stale ones.
+         */
+        tlbi  alle2                     /* Flush hypervisor TLBs */
+        dsb   nsh
+
+        /* Write Xen's PT's paddr into TTBR0_EL2 */
+        msr   TTBR0_EL2, x4
+        isb
+
+        mrs   x0, SCTLR_EL2
+        orr   x0, x0, #SCTLR_Axx_ELx_M  /* Enable MMU */
+        orr   x0, x0, #SCTLR_Axx_ELx_C  /* Enable D-cache */
+        dsb   sy                        /* Flush PTE writes and finish reads */
+        msr   SCTLR_EL2, x0             /* now paging is enabled */
+        isb                             /* Now, flush the icache */
+        ret
+ENDPROC(enable_mmu)
+
+/*
+ * Turn on the Data Cache and the MMU. The function will return
+ * to the virtual address provided in LR (e.g. the runtime mapping).
+ *
+ * Inputs:
+ *   lr : Virtual address to return to.
+ *
+ * Clobbers x0 - x5
+ */
+ENTRY(enable_runtime_mm)
+        /* save return address */
+        mov   x5, lr
+
+        load_paddr x0, init_ttbr
+        ldr   x0, [x0]
+        bl    enable_mmu
+        mov   lr, x5
+
+        /* return to secondary_switched */
+        ret
+ENDPROC(enable_runtime_mm)
+
+/*
+ * Turn on the Data Cache and the MMU. The function will return
+ * to the virtual address provided in LR (e.g. the runtime mapping).
+ *
+ * Inputs:
+ *   lr : Virtual address to return to.
+ *
+ * Clobbers x0 - x5
+ */
+ENTRY(enable_boot_mm)
+        /* save return address */
+        mov   x5, lr
+
+        bl    create_page_tables
+        load_paddr x0, boot_pgtable
+        bl    enable_mmu
+        mov   lr, x5
+
+        /*
+         * The MMU is turned on and we are in the 1:1 mapping. Switch
+         * to the runtime mapping.
+         */
+        ldr   x0, =1f
+        br    x0
+1:
+        /*
+         * The 1:1 map may clash with other parts of the Xen virtual memory
+         * layout. As it is not used anymore, remove it completely to
+         * avoid having to worry about replacing existing mappings
+         * afterwards. The function will return to primary_switched.
+         */
+        b     remove_identity_mapping
+
+        /*
+         * This point might never be reached, as the "ret" in
+         * remove_identity_mapping already returns via the address in LR.
+         * Keeping a "ret" here is safer in case that "ret" is removed
+         * in the future.
+         */
+        ret
+ENDPROC(enable_boot_mm)
+
+/*
+ * Remove the 1:1 map from the page-tables. It is not easy to keep track
+ * of where the 1:1 map was mapped, so we will look for the top-level entry
+ * exclusive to the 1:1 map and remove it.
+ *
+ * Inputs:
+ *   x19: paddr(start)
+ *
+ * Clobbers x0 - x1
+ */
+remove_identity_mapping:
+        /*
+         * Find the zeroeth slot used. Remove the entry from the zeroeth
+         * table if the slot is not XEN_ZEROETH_SLOT.
+         */
+        get_table_slot x1, x19, 0       /* x1 := zeroeth slot */
+        cmp   x1, #XEN_ZEROETH_SLOT
+        beq   1f
+        /* It is not in slot XEN_ZEROETH_SLOT, remove the entry. */
+        ldr   x0, =boot_pgtable         /* x0 := root table */
+        str   xzr, [x0, x1, lsl #3]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the first slot used. Remove the entry from the first
+         * table if the slot is not XEN_FIRST_SLOT.
+         */
+        get_table_slot x1, x19, 1       /* x1 := first slot */
+        cmp   x1, #XEN_FIRST_SLOT
+        beq   1f
+        /* It is not in slot XEN_FIRST_SLOT, remove the entry. */
+        ldr   x0, =boot_first           /* x0 := first table */
+        str   xzr, [x0, x1, lsl #3]
+        b     identity_mapping_removed
+
+1:
+        /*
+         * Find the second slot used. Remove the entry from the second
+         * table if the slot is not XEN_SECOND_SLOT.
+         */
+        get_table_slot x1, x19, 2       /* x1 := second slot */
+        cmp   x1, #XEN_SECOND_SLOT
+        beq   identity_mapping_removed
+        /* It is not in slot XEN_SECOND_SLOT, remove the entry. */
+        ldr   x0, =boot_second          /* x0 := second table */
+        str   xzr, [x0, x1, lsl #3]
+
+identity_mapping_removed:
+        /* See asm/arm64/flushtlb.h for the explanation of the sequence. */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        ret
+ENDPROC(remove_identity_mapping)
+
+/*
+ * Map the UART in the fixmap (when earlyprintk is used) and hook the
+ * fixmap table in the page tables.
+ *
+ * The fixmap cannot be mapped in create_page_tables because it may
+ * clash with the 1:1 mapping.
+ *
+ * Inputs:
+ *   x20: Physical offset
+ *   x23: Early UART base physical address
+ *
+ * Clobbers x0 - x3
+ */
+ENTRY(setup_fixmap)
+#ifdef CONFIG_EARLY_PRINTK
+        /* Add UART to the fixmap table */
+        ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
+        create_mapping_entry xen_fixmap, x0, x23, x1, x2, x3, type=PT_DEV_L3
+#endif
+        /* Map fixmap into boot_second */
+        ldr   x0, =FIXMAP_ADDR(0)
+        create_table_entry boot_second, xen_fixmap, x0, 2, x1, x2, x3
+        /* Ensure any page table updates made above have occurred. */
+        dsb   nshst
+
+        ret
+ENDPROC(setup_fixmap)
+
+/* Fail-stop */
+fail:   PRINT("- Boot failed -\r\n")
+1:      wfe
+        b     1b
+ENDPROC(fail)
+
+/*
+ * Switch TTBR
+ *
+ * x0    ttbr
+ */
+ENTRY(switch_ttbr_id)
+        /* 1) Ensure any previous read/write have completed */
+        dsb   ish
+        isb
+
+        /* 2) Turn off MMU */
+        mrs   x1, SCTLR_EL2
+        bic   x1, x1, #SCTLR_Axx_ELx_M
+        msr   SCTLR_EL2, x1
+        isb
+
+        /*
+         * 3) Flush the TLBs.
+         * See asm/arm64/flushtlb.h for the explanation of the sequence.
+         */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        /* 4) Update the TTBR */
+        msr   TTBR0_EL2, x0
+        isb
+
+        /*
+         * 5) Flush I-cache
+         * This should not be necessary but it is kept for safety.
+         */
+        ic    iallu
+        isb
+
+        /* 6) Turn on the MMU */
+        mrs   x1, SCTLR_EL2
+        orr   x1, x1, #SCTLR_Axx_ELx_M  /* Enable MMU */
+        msr   SCTLR_EL2, x1
+        isb
+
+        ret
+ENDPROC(switch_ttbr_id)
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/macros.h b/xen/arch/arm/include/asm/arm64/macros.h
index 140e223b4c..2116e48b7c 100644
--- a/xen/arch/arm/include/asm/arm64/macros.h
+++ b/xen/arch/arm/include/asm/arm64/macros.h
@@ -32,6 +32,57 @@
     hint #22
 .endm
 
+#ifdef CONFIG_EARLY_PRINTK
+/*
+ * Macro to print a string to the UART, if there is one.
+ *
+ * Clobbers x0 - x3
+ */
+#define PRINT(_s)               \
+        mov   x3, lr ;          \
+        adr   x0, 98f ;         \
+        bl    asm_puts ;        \
+        mov   lr, x3 ;          \
+        RODATA_STR(98, _s)
+
+/*
+ * Macro to print the value of register \xb
+ *
+ * Clobbers x0 - x4
+ */
+.macro print_reg xb
+        mov   x0, \xb
+        mov   x4, lr
+        bl    putn
+        mov   lr, x4
+.endm
+
+#else /* CONFIG_EARLY_PRINTK */
+#define PRINT(s)
+
+.macro print_reg xb
+.endm
+
+#endif /* !CONFIG_EARLY_PRINTK */
+
+/*
+ * Pseudo-op for PC-relative "adr <dst>, <sym>", where <sym> is
+ * within the range +/- 4GB of the PC.
+ *
+ * @dst: destination register (64 bit wide)
+ * @sym: name of the symbol
+ */
+.macro adr_l, dst, sym
+        adrp  \dst, \sym
+        add   \dst, \dst, :lo12:\sym
+.endm
+
+/* Load the physical address of a symbol into xb */
+.macro load_paddr xb, sym
+        ldr   \xb, =\sym
+        add   \xb, \xb, x20
+.endm
+
 /*
  * Register aliases.
  */
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Wei Chen, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v3 09/52] xen/arm: use PA == VA for EARLY_UART_VIRTUAL_ADDRESS on MPU systems
Date: Mon, 26 Jun 2023 11:34:00 +0800
Message-Id: <20230626033443.2943270-10-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

From: Wei Chen

There is no VMSA support on MPU systems, so we cannot map the early UART
to FIXMAP_CONSOLE. Instead, we can use PA == VA for the early UART on
MPU systems.

Signed-off-by: Wei Chen
Signed-off-by: Penny Zheng
---
v2:
1. New patch
---
v3:
1. fix comment
2. change CONFIG_ARM_V8R to !CONFIG_HAS_MMU
---
 xen/arch/arm/include/asm/early_printk.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
index c5149b2976..ec5bcc343c 100644
--- a/xen/arch/arm/include/asm/early_printk.h
+++ b/xen/arch/arm/include/asm/early_printk.h
@@ -15,10 +15,22 @@
 
 #ifdef CONFIG_EARLY_PRINTK
 
+#ifndef CONFIG_HAS_MMU
+
+/*
+ * For MPU systems, there is no VMSA support in EL2, so we use VA == PA
+ * for EARLY_UART_VIRTUAL_ADDRESS.
+ */
+#define EARLY_UART_VIRTUAL_ADDRESS CONFIG_EARLY_UART_BASE_ADDRESS
+
+#else
+
 /* need to add the uart address offset in page to the fixmap address */
 #define EARLY_UART_VIRTUAL_ADDRESS \
     (FIXMAP_ADDR(FIXMAP_CONSOLE) + (CONFIG_EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
 
+#endif /* CONFIG_HAS_MMU */
+
 #endif /* !CONFIG_EARLY_PRINTK */
 
 #endif
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Wei Chen, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v3 10/52] xen/arm: Move MMU related definitions from config.h to mmu/layout.h
Date: Mon, 26 Jun 2023 11:34:01 +0800
Message-Id: <20230626033443.2943270-11-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

From: Wei Chen

Xen defines some global configuration macros for Arm in config.h. We
still want to use config.h on MMU systems, but it also contains some
address layout related definitions that only exist for MMU systems and
cannot be used on MPU systems. Gating these definitions with
CONFIG_HAS_MPU ifdefery would result in messy, hard-to-read/maintain
code. So we keep the common definitions in config.h, but move the MMU
related definitions to a new file - mmu/layout.h - to avoid spreading
"#ifdef" everywhere.

Signed-off-by: Wei Chen
Signed-off-by: Penny Zheng
---
v1 -> v2:
1. Remove duplicated FIXMAP definitions from config_mmu.h
---
v3:
1. name the new header layout.h
---
 xen/arch/arm/include/asm/config.h     | 127 +----------------------
 xen/arch/arm/include/asm/mmu/layout.h | 141 ++++++++++++++++++++++++++
 2 files changed, 143 insertions(+), 125 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/mmu/layout.h

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 30f4665ba9..204b3dec13 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -71,131 +71,8 @@
 #include
 #include
 
-/*
- * ARM32 layout:
- *   0  -   2M   Unmapped
- *   2M -   4M   Xen text, data, bss
- *   4M -   6M   Fixmap: special-purpose 4K mapping slots
- *   6M -  10M   Early boot mapping of FDT
- *  10M -  12M   Livepatch vmap (if compiled in)
- *
- *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
- * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
- *               space
- *
- *   1G -   2G   Xenheap: always-mapped memory
- *   2G -   4G   Domheap: on-demand-mapped
- *
- * ARM64 layout:
- * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
- *
- *  Reserved to identity map Xen
- *
- * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
- *  (Relative offsets)
- *   0  -   2M   Unmapped
- *   2M -   4M   Xen text, data, bss
- *   4M -   6M   Fixmap: special-purpose 4K mapping slots
- *   6M -  10M   Early boot mapping of FDT
- *  10M -  12M   Livepatch vmap (if compiled in)
- *
- *   1G -   2G   VMAP: ioremap and early_ioremap
- *
- *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
- *
- * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
- *  Unused
- *
- * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
- *  1:1 mapping of RAM
- *
- * 0x0000850000000000 - 0x0000ffffffffffff (123TB, L0 slots [266..511])
- *  Unused
- */
-
-#ifdef CONFIG_ARM_32
-#define XEN_VIRT_START         _AT(vaddr_t, MB(2))
-#else
-
-#define SLOT0_ENTRY_BITS  39
-#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
-#define SLOT0_ENTRY_SIZE  SLOT0(1)
-
-#define XEN_VIRT_START  (SLOT0(4) + _AT(vaddr_t, MB(2)))
-#endif
-
-#define XEN_VIRT_SIZE          _AT(vaddr_t, MB(2))
-
-#define FIXMAP_VIRT_START      (XEN_VIRT_START + XEN_VIRT_SIZE)
-#define FIXMAP_VIRT_SIZE       _AT(vaddr_t, MB(2))
-
-#define FIXMAP_ADDR(n)         (FIXMAP_VIRT_START + (n) * PAGE_SIZE)
-
-#define BOOT_FDT_VIRT_START    (FIXMAP_VIRT_START + FIXMAP_VIRT_SIZE)
-#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
-
-#ifdef CONFIG_LIVEPATCH
-#define LIVEPATCH_VMAP_START   (BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE)
-#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
-#endif
-
-#define HYPERVISOR_VIRT_START  XEN_VIRT_START
-
-#ifdef CONFIG_ARM_32
-
-#define CONFIG_SEPARATE_XENHEAP 1
-
-#define FRAMETABLE_VIRT_START  _AT(vaddr_t, MB(32))
-#define FRAMETABLE_SIZE        MB(128-32)
-#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
-
-#define VMAP_VIRT_START        _AT(vaddr_t, MB(256))
-#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
-
-#define XENHEAP_VIRT_START     _AT(vaddr_t, GB(1))
-#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
-
-#define DOMHEAP_VIRT_START     _AT(vaddr_t, GB(2))
-#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
-
-#define DOMHEAP_ENTRIES        1024 /* 1024 2MB mapping slots */
-
-/* Number of domheap pagetable pages required at the second level (2MB mappings) */
-#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
-
-/*
- * The temporary area is overlapping with the domheap area. This may
- * be used to create an alias of the first slot containing Xen mappings
- * when turning on/off the MMU.
- */
-#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
-
-/* Calculate the address in the temporary area */
-#define TEMPORARY_AREA_ADDR(addr)                       \
-    (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                 \
-     (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
-
-#define TEMPORARY_XEN_VIRT_START    TEMPORARY_AREA_ADDR(XEN_VIRT_START)
-
-#else /* ARM_64 */
-
-#define IDENTITY_MAPPING_AREA_NR_L0  4
-
-#define VMAP_VIRT_START  (SLOT0(4) + GB(1))
-#define VMAP_VIRT_SIZE   GB(1)
-
-#define FRAMETABLE_VIRT_START  (SLOT0(4) + GB(32))
-#define FRAMETABLE_SIZE        GB(32)
-#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
-
-#define DIRECTMAP_VIRT_START   SLOT0(256)
-#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))
-#define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
-
-#define XENHEAP_VIRT_START     directmap_virt_start
-
-#define HYPERVISOR_VIRT_END    DIRECTMAP_VIRT_END
-
+#ifndef CONFIG_HAS_MPU
+#include <asm/mmu/layout.h>
 #endif
 
 #define NR_hypercalls 64
diff --git a/xen/arch/arm/include/asm/mmu/layout.h b/xen/arch/arm/include/asm/mmu/layout.h
new file mode 100644
index 0000000000..8deda6b84d
--- /dev/null
+++ b/xen/arch/arm/include/asm/mmu/layout.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ARM_MMU_LAYOUT_H__
+#define __ARM_MMU_LAYOUT_H__
+
+/*
+ * ARM32 layout:
+ *   0  -   2M   Unmapped
+ *   2M -   4M   Xen text, data, bss
+ *   4M -   6M   Fixmap: special-purpose 4K mapping slots
+ *   6M -  10M   Early boot mapping of FDT
+ *  10M -  12M   Livepatch vmap (if compiled in)
+ *
+ *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
+ * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
+ *               space
+ *
+ *   1G -   2G   Xenheap: always-mapped memory
+ *   2G -   4G   Domheap: on-demand-mapped
+ *
+ * ARM64 layout:
+ * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
+ *
+ *  Reserved to identity map Xen
+ *
+ * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
+ *  (Relative offsets)
+ *   0  -   2M   Unmapped
+ *   2M -   4M   Xen text, data, bss
+ *   4M -   6M   Fixmap: special-purpose 4K mapping slots
+ *   6M -  10M   Early boot mapping of FDT
+ *  10M -  12M   Livepatch vmap (if compiled in)
+ *
+ *   1G -   2G   VMAP: ioremap and early_ioremap
+ *
+ *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
+ *
+ * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
+ *  Unused
+ *
+ * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
+ *  1:1 mapping of RAM
+ *
+ * 0x0000850000000000 - 0x0000ffffffffffff (123TB, L0 slots [266..511])
+ *  Unused
+ */
+
+#ifdef CONFIG_ARM_32
+#define XEN_VIRT_START         _AT(vaddr_t, MB(2))
+#else
+
+#define SLOT0_ENTRY_BITS  39
+#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
+#define SLOT0_ENTRY_SIZE  SLOT0(1)
+
+#define XEN_VIRT_START  (SLOT0(4) + _AT(vaddr_t, MB(2)))
+#endif
+
+#define XEN_VIRT_SIZE          _AT(vaddr_t, MB(2))
+
+#define FIXMAP_VIRT_START      (XEN_VIRT_START + XEN_VIRT_SIZE)
+#define FIXMAP_VIRT_SIZE       _AT(vaddr_t, MB(2))
+
+#define FIXMAP_ADDR(n)         (FIXMAP_VIRT_START + (n) * PAGE_SIZE)
+
+#define BOOT_FDT_VIRT_START    (FIXMAP_VIRT_START + FIXMAP_VIRT_SIZE)
+#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
+
+#ifdef CONFIG_LIVEPATCH
+#define LIVEPATCH_VMAP_START   (BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE)
+#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
+#endif
+
+#define HYPERVISOR_VIRT_START  XEN_VIRT_START
+
+#ifdef CONFIG_ARM_32
+
+#define CONFIG_SEPARATE_XENHEAP 1
+
+#define FRAMETABLE_VIRT_START  _AT(vaddr_t, MB(32))
+#define FRAMETABLE_SIZE        MB(128-32)
+#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
+
+#define VMAP_VIRT_START        _AT(vaddr_t, MB(256))
+#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
+
+#define XENHEAP_VIRT_START     _AT(vaddr_t, GB(1))
+#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
+
+#define DOMHEAP_VIRT_START     _AT(vaddr_t, GB(2))
+#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
+
+#define DOMHEAP_ENTRIES        1024 /* 1024 2MB mapping slots */
+
+/* Number of domheap pagetable pages required at the second level (2MB mappings) */
+#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
+
+/*
+ * The temporary area is overlapping with the domheap area. This may
+ * be used to create an alias of the first slot containing Xen mappings
+ * when turning on/off the MMU.
+ */
+#define TEMPORARY_AREA_FIRST_SLOT    (first_table_offset(DOMHEAP_VIRT_START))
+
+/* Calculate the address in the temporary area */
+#define TEMPORARY_AREA_ADDR(addr)                       \
+    (((addr) & ~XEN_PT_LEVEL_MASK(1)) |                 \
+     (TEMPORARY_AREA_FIRST_SLOT << XEN_PT_LEVEL_SHIFT(1)))
+
+#define TEMPORARY_XEN_VIRT_START    TEMPORARY_AREA_ADDR(XEN_VIRT_START)
+
+#else /* ARM_64 */
+
+#define IDENTITY_MAPPING_AREA_NR_L0  4
+
+#define VMAP_VIRT_START  (SLOT0(4) + GB(1))
+#define VMAP_VIRT_SIZE   GB(1)
+
+#define FRAMETABLE_VIRT_START  (SLOT0(4) + GB(32))
+#define FRAMETABLE_SIZE        GB(32)
+#define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
+
+#define DIRECTMAP_VIRT_START   SLOT0(256)
+#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266 - 256))
+#define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
+
+#define XENHEAP_VIRT_START     directmap_virt_start
+
+#define HYPERVISOR_VIRT_END    DIRECTMAP_VIRT_END
+
+#endif
+
+#endif /* __ARM_MMU_LAYOUT_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Wei Chen
Subject: [PATCH v3 11/52] xen/arm: mmu: fold FIXMAP into MMU system
Date: Mon, 26 Jun 2023 11:34:02 +0800
Message-Id: <20230626033443.2943270-12-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

FIXMAP in the MMU system is used for special-purpose 4K mappings, like
mapping the early UART and temporarily mapping source code for copy and
paste (copy_from_paddr), etc.

As the FIXMAP feature is highly dependent on virtual address translation,
we introduce a new Kconfig option CONFIG_HAS_FIXMAP to wrap all related
code, then fold it into the MMU system. Since PMAP relies on FIXMAP, we
fold it into the MMU system too.

Under !CONFIG_HAS_FIXMAP, we provide empty stubs to avoid breaking
compilation.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v1 -> v2
- new patch
---
v3:
- fold CONFIG_HAS_FIXMAP into CONFIG_HAS_MMU
- change CONFIG_HAS_FIXMAP to an Arm-specific Kconfig
---
 xen/arch/arm/Kconfig              |  7 ++++++-
 xen/arch/arm/include/asm/fixmap.h | 31 ++++++++++++++++++++++++++++---
 2 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index fb77392b82..22b28b8ba2 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -15,7 +15,6 @@ config ARM
     select HAS_DEVICE_TREE
     select HAS_PASSTHROUGH
     select HAS_PDX
-    select HAS_PMAP
     select IOMMU_FORCE_PT_SHARE
 
 config ARCH_DEFCONFIG
@@ -63,11 +62,17 @@ source "arch/Kconfig"
 config HAS_MMU
     bool "Memory Management Unit support in a VMSA system"
     default y
+    select HAS_PMAP
     help
       In a VMSA system, a Memory Management Unit (MMU) provides fine-grained
       control of a memory system through a set of virtual to physical address
       mappings and associated memory properties held in memory-mapped tables
       known as translation tables.
 
+config HAS_FIXMAP
+    bool "Provide special-purpose 4K mapping slots in a VMSA"
+    depends on HAS_MMU
+    default y
+
 config ACPI
     bool "ACPI (Advanced Configuration and Power Interface) Support (UNSUPPORTED)" if UNSUPPORTED
     depends on ARM_64
diff --git a/xen/arch/arm/include/asm/fixmap.h b/xen/arch/arm/include/asm/fixmap.h
index d0c9a52c8c..1b5b62866b 100644
--- a/xen/arch/arm/include/asm/fixmap.h
+++ b/xen/arch/arm/include/asm/fixmap.h
@@ -4,9 +4,6 @@
 #ifndef __ASM_FIXMAP_H
 #define __ASM_FIXMAP_H
 
-#include
-#include
-
 /* Fixmap slots */
 #define FIXMAP_CONSOLE  0  /* The primary UART */
 #define FIXMAP_MISC     1  /* Ephemeral mappings of hardware */
@@ -22,6 +19,11 @@
 
 #ifndef __ASSEMBLY__
 
+#ifdef CONFIG_HAS_FIXMAP
+
+#include
+#include
+
 /*
  * Direct access to xen_fixmap[] should only happen when {set,
  * clear}_fixmap() is unusable (e.g. where we would end up to
@@ -43,6 +45,29 @@ static inline unsigned int virt_to_fix(vaddr_t vaddr)
     return ((vaddr - FIXADDR_START) >> PAGE_SHIFT);
 }
 
+#else /* !CONFIG_HAS_FIXMAP */
+
+#include
+#include
+
+static inline void set_fixmap(unsigned int map, mfn_t mfn,
+                              unsigned int attributes)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void clear_fixmap(unsigned int map)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline unsigned int virt_to_fix(vaddr_t vaddr)
+{
+    ASSERT_UNREACHABLE();
+    return -EINVAL;
+}
+#endif /* !CONFIG_HAS_FIXMAP */
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ASM_FIXMAP_H */
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Wei Chen
Subject: [PATCH v3 12/52] xen/mmu: extract early uart mapping from setup_fixmap
Date: Mon, 26 Jun 2023 11:34:03 +0800
Message-Id: <20230626033443.2943270-13-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

The original setup_fixmap actually performs two separate tasks: one is
enabling the early UART when earlyprintk is on, and the other is setting
up the fixmap even when earlyprintk is not configured.
To be more dedicated and precise, the old function is split into two:
setup_early_uart and a new setup_fixmap.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new patch
---
 xen/arch/arm/arm64/head.S     |  3 +++
 xen/arch/arm/arm64/mmu/head.S | 24 +++++++++++++++++-------
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index e63886b037..55a4cfe69e 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -258,7 +258,10 @@ real_start_efi:
         b     enable_boot_mm
 
 primary_switched:
+        bl    setup_early_uart
+#ifdef CONFIG_HAS_FIXMAP
         bl    setup_fixmap
+#endif
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
         ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
diff --git a/xen/arch/arm/arm64/mmu/head.S b/xen/arch/arm/arm64/mmu/head.S
index 2b209fc3ce..295596aca1 100644
--- a/xen/arch/arm/arm64/mmu/head.S
+++ b/xen/arch/arm/arm64/mmu/head.S
@@ -367,24 +367,34 @@ identity_mapping_removed:
 ENDPROC(remove_identity_mapping)
 
 /*
- * Map the UART in the fixmap (when earlyprintk is used) and hook the
- * fixmap table in the page tables.
- *
- * The fixmap cannot be mapped in create_page_tables because it may
- * clash with the 1:1 mapping.
+ * Map the UART in the fixmap (when earlyprintk is used)
  *
  * Inputs:
- * x20: Physical offset
  * x23: Early UART base physical address
  *
  * Clobbers x0 - x3
  */
-ENTRY(setup_fixmap)
+ENTRY(setup_early_uart)
 #ifdef CONFIG_EARLY_PRINTK
         /* Add UART to the fixmap table */
         ldr   x0, =EARLY_UART_VIRTUAL_ADDRESS
         create_mapping_entry xen_fixmap, x0, x23, x1, x2, x3, type=PT_DEV_L3
+        /* Ensure any page table updates made above have occurred. */
+        dsb   nshst
+
+        ret
 #endif
+ENDPROC(setup_early_uart)
+
+/*
+ * Map the fixmap table in the page tables.
+ *
+ * The fixmap cannot be mapped in create_page_tables because it may
+ * clash with the 1:1 mapping.
+ *
+ * Clobbers x0 - x3
+ */
+ENTRY(setup_fixmap)
         /* Map fixmap into boot_second */
         ldr   x0, =FIXMAP_ADDR(0)
         create_table_entry boot_second, xen_fixmap, x0, 2, x1, x2, x3
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Wei Chen, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v3 13/52] xen/mmu: extract mmu-specific codes from mm.c/mm.h
Date: Mon, 26 Jun 2023 11:34:04 +0800
Message-Id: <20230626033443.2943270-14-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

From: Wei Chen

To make the code readable and maintainable, we move MMU-specific memory management code from mm.c to mmu/mm.c and from arm64/mm.c to arm64/mmu/mm.c, and move MMU-specific definitions from mm.h/page.h/setup.h to mmu/mm.h. In order to globally export a few variables/functions for both the MMU and the later MPU system, we rename them to more generic names, e.g. init_ttbr -> init_mm, mmu_init_secondary_cpu() -> mm_init_secondary_cpu(), and update_identity_mapping() -> update_identity_mm(). As we will later create mpu/mm.h and mpu/mm.c for MPU-specific memory management code, this avoids lots of #ifdefs in memory management code and header files.
Signed-off-by: Wei Chen Signed-off-by: Penny Zheng --- v2: - new patch --- v3: - introduce new directories: mmu/ which includes MMU-specific codes - rename init_ttbr to init_mm, mmu_init_secondary_cpu() to mm_init_secondary_cpu(), update_identity_mapping() -> update_identity_mm(). - move mmu-specific function definitions from page.h/setup.h to mmu/mm.h. --- xen/arch/arm/Makefile | 3 + xen/arch/arm/arm32/head.S | 2 +- xen/arch/arm/arm64/Makefile | 2 +- xen/arch/arm/arm64/mmu/head.S | 2 +- xen/arch/arm/arm64/{ =3D> mmu}/mm.c | 7 +- xen/arch/arm/arm64/smpboot.c | 6 +- xen/arch/arm/include/asm/arm64/mm.h | 7 +- xen/arch/arm/include/asm/mm.h | 33 +- xen/arch/arm/include/asm/mmu/mm.h | 54 ++ xen/arch/arm/include/asm/page.h | 15 - xen/arch/arm/include/asm/setup.h | 3 - xen/arch/arm/mm.c | 1102 +------------------------- xen/arch/arm/mmu/mm.c | 1125 +++++++++++++++++++++++++++ xen/arch/arm/smpboot.c | 4 +- 14 files changed, 1216 insertions(+), 1149 deletions(-) rename xen/arch/arm/arm64/{ =3D> mmu}/mm.c (97%) create mode 100644 xen/arch/arm/include/asm/mmu/mm.h create mode 100644 xen/arch/arm/mmu/mm.c diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index 7bf07e9920..f825d95e29 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -36,6 +36,9 @@ obj-y +=3D irq.o obj-y +=3D kernel.init.o obj-$(CONFIG_LIVEPATCH) +=3D livepatch.o obj-y +=3D mem_access.o +ifeq ($(CONFIG_HAS_MMU), y) +obj-y +=3D mmu/mm.o +endif obj-y +=3D mm.o obj-y +=3D monitor.o obj-y +=3D p2m.o diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S index f9f7be9588..51e7e41a55 100644 --- a/xen/arch/arm/arm32/head.S +++ b/xen/arch/arm/arm32/head.S @@ -243,7 +243,7 @@ secondary_switched: * * XXX: This is not compliant with the Arm Arm. 
*/ - mov_w r4, init_ttbr /* VA of HTTBR value stashed by CPU 0= */ + mov_w r4, init_mm /* VA of HTTBR value stashed by CPU 0= */ ldrd r4, r5, [r4] /* Actual value */ dsb mcrr CP64(r4, r5, HTTBR) diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile index 0c4b177be9..55895ecb53 100644 --- a/xen/arch/arm/arm64/Makefile +++ b/xen/arch/arm/arm64/Makefile @@ -10,10 +10,10 @@ obj-y +=3D entry.o obj-y +=3D head.o ifeq ($(CONFIG_HAS_MMU),y) obj-y +=3D mmu/head.o +obj-y +=3D mmu/mm.o endif obj-y +=3D insn.o obj-$(CONFIG_LIVEPATCH) +=3D livepatch.o -obj-y +=3D mm.o obj-y +=3D smc.o obj-y +=3D smpboot.o obj-$(CONFIG_ARM64_SVE) +=3D sve.o sve-asm.o diff --git a/xen/arch/arm/arm64/mmu/head.S b/xen/arch/arm/arm64/mmu/head.S index 295596aca1..3f3f4be829 100644 --- a/xen/arch/arm/arm64/mmu/head.S +++ b/xen/arch/arm/arm64/mmu/head.S @@ -257,7 +257,7 @@ ENTRY(enable_runtime_mm) /* save return address */ mov x5, lr =20 - load_paddr x0, init_ttbr + load_paddr x0, init_mm ldr x0, [x0] bl enable_mmu mov lr, x5 diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mmu/mm.c similarity index 97% rename from xen/arch/arm/arm64/mm.c rename to xen/arch/arm/arm64/mmu/mm.c index 78b7c7eb00..888ca5d8fc 100644 --- a/xen/arch/arm/arm64/mm.c +++ b/xen/arch/arm/arm64/mmu/mm.c @@ -106,7 +106,7 @@ void __init arch_setup_page_tables(void) prepare_runtime_identity_mapping(); } =20 -void update_identity_mapping(bool enable) +static void update_identity_mapping(bool enable) { paddr_t id_addr =3D virt_to_maddr(_start); int rc; @@ -120,6 +120,11 @@ void update_identity_mapping(bool enable) BUG_ON(rc); } =20 +void update_mm_mapping(bool enable) +{ + update_identity_mapping(enable); +} + extern void switch_ttbr_id(uint64_t ttbr); =20 typedef void (switch_ttbr_fn)(uint64_t ttbr); diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c index 9637f42469..2b1d086a1e 100644 --- a/xen/arch/arm/arm64/smpboot.c +++ b/xen/arch/arm/arm64/smpboot.c @@ -111,18 +111,18 @@ int 
arch_cpu_up(int cpu) if ( !smp_enable_ops[cpu].prepare_cpu ) return -ENODEV; =20 - update_identity_mapping(true); + update_mm_mapping(true); =20 rc =3D smp_enable_ops[cpu].prepare_cpu(cpu); if ( rc ) - update_identity_mapping(false); + update_mm_mapping(false); =20 return rc; } =20 void arch_cpu_up_finish(void) { - update_identity_mapping(false); + update_mm_mapping(false); } =20 /* diff --git a/xen/arch/arm/include/asm/arm64/mm.h b/xen/arch/arm/include/asm= /arm64/mm.h index e0bd23a6ed..edd9601d42 100644 --- a/xen/arch/arm/include/asm/arm64/mm.h +++ b/xen/arch/arm/include/asm/arm64/mm.h @@ -15,13 +15,14 @@ static inline bool arch_mfns_in_directmap(unsigned long= mfn, unsigned long nr) void arch_setup_page_tables(void); =20 /* - * Enable/disable the identity mapping in the live page-tables (i.e. - * the one pointed by TTBR_EL2). + * In MMU system, we enable/disable the identity mapping in the live + * page-tables (i.e. the one pointed by TTBR_EL2) through + * update_identity_mapping. * * Note that nested call (e.g. enable=3Dtrue, enable=3Dtrue) is not * supported. */ -void update_identity_mapping(bool enable); +void update_mm_mapping(bool enable); =20 #endif /* __ARM_ARM64_MM_H__ */ =20 diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h index 4262165ce2..5d890a6a45 100644 --- a/xen/arch/arm/include/asm/mm.h +++ b/xen/arch/arm/include/asm/mm.h @@ -14,6 +14,10 @@ # error "unknown ARM variant" #endif =20 +#ifdef CONFIG_HAS_MMU +#include +#endif + /* Align Xen to a 2 MiB boundary. 
*/ #define XEN_PADDR_ALIGN (1 << 21) =20 @@ -165,13 +169,6 @@ struct page_info #define _PGC_need_scrub _PGC_allocated #define PGC_need_scrub PGC_allocated =20 -extern mfn_t directmap_mfn_start, directmap_mfn_end; -extern vaddr_t directmap_virt_end; -#ifdef CONFIG_ARM_64 -extern vaddr_t directmap_virt_start; -extern unsigned long directmap_base_pdx; -#endif - #ifdef CONFIG_ARM_32 #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page)) #define is_xen_heap_mfn(mfn) ({ \ @@ -194,7 +191,6 @@ extern unsigned long directmap_base_pdx; =20 #define maddr_get_owner(ma) (page_get_owner(maddr_to_page((ma)))) =20 -#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START) /* PDX of the first page in the frame table. */ extern unsigned long frametable_base_pdx; =20 @@ -203,25 +199,24 @@ extern unsigned long total_pages; =20 #define PDX_GROUP_SHIFT SECOND_SHIFT =20 +extern uint64_t init_mm; + /* Boot-time pagetable setup */ extern void setup_pagetables(unsigned long boot_phys_offset); /* Map FDT in boot pagetable */ extern void *early_fdt_map(paddr_t fdt_paddr); -/* Switch to a new root page-tables */ -extern void switch_ttbr(uint64_t ttbr); /* Remove early mappings */ extern void remove_early_mappings(void); -/* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr = to the - * new page table */ -extern int init_secondary_pagetables(int cpu); -/* Switch secondary CPUS to its own pagetables and finalise MMU setup */ -extern void mmu_init_secondary_cpu(void); /* - * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous, - * always-mapped memory. Base must be 32MB aligned and size a multiple of = 32MB. - * For Arm64, map the region in the directmap area. + * Allocate and initialise memory mapping for a secondary CPU. 
+ Sets init_mm to the new memory mapping table + */ +extern int init_secondary_mm(int cpu); +/* + * Switch secondary CPUs to their own memory mapping table + * and finalise MMU/MPU setup */ -extern void setup_directmap_mappings(unsigned long base_mfn, unsigned long nr_mfns); +extern void mm_init_secondary_cpu(void); /* Map a frame table to cover physical addresses ps through pe */ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe); /* map a physical range in virtual memory */ diff --git a/xen/arch/arm/include/asm/mmu/mm.h b/xen/arch/arm/include/asm/mmu/mm.h new file mode 100644 index 0000000000..240410f66e --- /dev/null +++ b/xen/arch/arm/include/asm/mmu/mm.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef __ARCH_ARM_MM_MMU__ +#define __ARCH_ARM_MM_MMU__ + +extern mfn_t directmap_mfn_start, directmap_mfn_end; +extern vaddr_t directmap_virt_end; +#ifdef CONFIG_ARM_64 +extern vaddr_t directmap_virt_start; +extern unsigned long directmap_base_pdx; +#endif + +#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START) + +/* + * Print a walk of a page table or p2m + * + * ttbr is the base address register (TTBR0_EL2 or VTTBR_EL2) + * addr is the PA or IPA to translate + * root_level is the starting level of the page table + * (e.g. TCR_EL2.SL0 or VTCR_EL2.SL0 ) + * nr_root_tables is the number of concatenated tables at the root. + * this can only be != 1 for P2M walks starting at the first or + * subsequent level. + */ +void dump_pt_walk(paddr_t ttbr, paddr_t addr, + unsigned int root_level, + unsigned int nr_root_tables); + +/* Find where Xen will be residing at runtime and return a PT entry */ +lpae_t pte_of_xenaddr(vaddr_t); + +/* Switch to a new root page-tables */ +extern void switch_ttbr(uint64_t ttbr); +/* + * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous, + * always-mapped memory. Base must be 32MB aligned and size a multiple of 32MB. + * For Arm64, map the region in the directmap area.
+ */ +extern void setup_directmap_mappings(unsigned long base_mfn, unsigned long= nr_mfns); +extern int xen_pt_update(unsigned long virt, mfn_t mfn, + /* const on purpose as it is used for TLB flush */ + const unsigned long nr_mfns, + unsigned int flags); + +#endif /* __ARCH_ARM_MM_MMU__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/pag= e.h index e7cd62190c..3893303c8f 100644 --- a/xen/arch/arm/include/asm/page.h +++ b/xen/arch/arm/include/asm/page.h @@ -240,21 +240,6 @@ static inline int clean_and_invalidate_dcache_va_range /* Flush the dcache for an entire page. */ void flush_page_to_ram(unsigned long mfn, bool sync_icache); =20 -/* - * Print a walk of a page table or p2m - * - * ttbr is the base address register (TTBR0_EL2 or VTTBR_EL2) - * addr is the PA or IPA to translate - * root_level is the starting level of the page table - * (e.g. TCR_EL2.SL0 or VTCR_EL2.SL0 ) - * nr_root_tables is the number of concatenated tables at the root. - * this can only be !=3D 1 for P2M walks starting at the first or - * subsequent level. - */ -void dump_pt_walk(paddr_t ttbr, paddr_t addr, - unsigned int root_level, - unsigned int nr_root_tables); - /* Print a walk of the hypervisor's page tables for a virtual addr. */ extern void dump_hyp_walk(vaddr_t addr); /* Print a walk of the p2m for a domain for a physical address. 
*/ diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/se= tup.h index 19dc637d55..f0f64d228c 100644 --- a/xen/arch/arm/include/asm/setup.h +++ b/xen/arch/arm/include/asm/setup.h @@ -176,9 +176,6 @@ extern lpae_t boot_first_id[XEN_PT_LPAE_ENTRIES]; extern lpae_t boot_second_id[XEN_PT_LPAE_ENTRIES]; extern lpae_t boot_third_id[XEN_PT_LPAE_ENTRIES]; =20 -/* Find where Xen will be residing at runtime and return a PT entry */ -lpae_t pte_of_xenaddr(vaddr_t); - extern const char __ro_after_init_start[], __ro_after_init_end[]; =20 struct init_info diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index e460249736..e665d1f97a 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -15,16 +15,12 @@ #include #include #include -#include -#include -#include #include #include #include =20 #include =20 -#include #include =20 #include @@ -32,104 +28,9 @@ /* Override macros from asm/page.h to make them work with mfn_t */ #undef virt_to_mfn #define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) -#undef mfn_to_virt -#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn)) =20 -#ifdef NDEBUG -static inline void -__attribute__ ((__format__ (__printf__, 1, 2))) -mm_printk(const char *fmt, ...) {} -#else -#define mm_printk(fmt, args...) \ - do \ - { \ - dprintk(XENLOG_ERR, fmt, ## args); \ - WARN(); \ - } while (0) -#endif - -/* Static start-of-day pagetables that we use before the allocators - * are up. These are used by all CPUs during bringup before switching - * to the CPUs own pagetables. - * - * These pagetables have a very simple structure. They include: - * - 2MB worth of 4K mappings of xen at XEN_VIRT_START, boot_first and - * boot_second are used to populate the tables down to boot_third - * which contains the actual mapping. - * - a 1:1 mapping of xen at its current physical address. This uses a - * section mapping at whichever of boot_{pgtable,first,second} - * covers that physical address. 
- * - * For the boot CPU these mappings point to the address where Xen was - * loaded by the bootloader. For secondary CPUs they point to the - * relocated copy of Xen for the benefit of secondary CPUs. - * - * In addition to the above for the boot CPU the device-tree is - * initially mapped in the boot misc slot. This mapping is not present - * for secondary CPUs. - * - * Finally, if EARLY_PRINTK is enabled then xen_fixmap will be mapped - * by the CPU once it has moved off the 1:1 mapping. - */ -DEFINE_BOOT_PAGE_TABLE(boot_pgtable); -#ifdef CONFIG_ARM_64 -DEFINE_BOOT_PAGE_TABLE(boot_first); -DEFINE_BOOT_PAGE_TABLE(boot_first_id); -#endif -DEFINE_BOOT_PAGE_TABLE(boot_second_id); -DEFINE_BOOT_PAGE_TABLE(boot_third_id); -DEFINE_BOOT_PAGE_TABLE(boot_second); -DEFINE_BOOT_PAGE_TABLE(boot_third); - -/* Main runtime page tables */ - -/* - * For arm32 xen_pgtable are per-PCPU and are allocated before - * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs. - * - * xen_second, xen_fixmap and xen_xenmap are always shared between all - * PCPUs. - */ - -#ifdef CONFIG_ARM_64 -#define HYP_PT_ROOT_LEVEL 0 -DEFINE_PAGE_TABLE(xen_pgtable); -static DEFINE_PAGE_TABLE(xen_first); -#define THIS_CPU_PGTABLE xen_pgtable -#else -#define HYP_PT_ROOT_LEVEL 1 -/* Per-CPU pagetable pages */ -/* xen_pgtable =3D=3D root of the trie (zeroeth level on 64-bit, first on = 32-bit) */ -DEFINE_PER_CPU(lpae_t *, xen_pgtable); -#define THIS_CPU_PGTABLE this_cpu(xen_pgtable) -/* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */ -static DEFINE_PAGE_TABLE(cpu0_pgtable); -#endif - -/* Common pagetable leaves */ -/* Second level page table used to cover Xen virtual address space */ -static DEFINE_PAGE_TABLE(xen_second); -/* Third level page table used for fixmap */ -DEFINE_BOOT_PAGE_TABLE(xen_fixmap); -/* - * Third level page table used to map Xen itself with the XN bit set - * as appropriate. 
- */ -static DEFINE_PAGE_TABLE(xen_xenmap); - -/* Non-boot CPUs use this to find the correct pagetables. */ -uint64_t init_ttbr; - -static paddr_t phys_offset; - -/* Limits of the Xen heap */ -mfn_t directmap_mfn_start __read_mostly =3D INVALID_MFN_INITIALIZER; -mfn_t directmap_mfn_end __read_mostly; -vaddr_t directmap_virt_end __read_mostly; -#ifdef CONFIG_ARM_64 -vaddr_t directmap_virt_start __read_mostly; -unsigned long directmap_base_pdx __read_mostly; -#endif +/* Non-boot CPUs use this to find the correct memory mapping table. */ +uint64_t init_mm; =20 unsigned long frametable_base_pdx __read_mostly; unsigned long frametable_virt_end __read_mostly; @@ -139,243 +40,6 @@ unsigned long total_pages; =20 extern char __init_begin[], __init_end[]; =20 -/* Checking VA memory layout alignment. */ -static void __init __maybe_unused build_assertions(void) -{ - /* 2MB aligned regions */ - BUILD_BUG_ON(XEN_VIRT_START & ~SECOND_MASK); - BUILD_BUG_ON(FIXMAP_ADDR(0) & ~SECOND_MASK); - /* 1GB aligned regions */ -#ifdef CONFIG_ARM_32 - BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK); -#else - BUILD_BUG_ON(DIRECTMAP_VIRT_START & ~FIRST_MASK); -#endif - /* Page table structure constraints */ -#ifdef CONFIG_ARM_64 - /* - * The first few slots of the L0 table is reserved for the identity - * mapping. Check that none of the other regions are overlapping - * with it. 
- */ -#define CHECK_OVERLAP_WITH_IDMAP(virt) \ - BUILD_BUG_ON(zeroeth_table_offset(virt) < IDENTITY_MAPPING_AREA_NR_L0) - - CHECK_OVERLAP_WITH_IDMAP(XEN_VIRT_START); - CHECK_OVERLAP_WITH_IDMAP(VMAP_VIRT_START); - CHECK_OVERLAP_WITH_IDMAP(FRAMETABLE_VIRT_START); - CHECK_OVERLAP_WITH_IDMAP(DIRECTMAP_VIRT_START); -#undef CHECK_OVERLAP_WITH_IDMAP -#endif - BUILD_BUG_ON(first_table_offset(XEN_VIRT_START)); -#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE - BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK); -#endif - /* - * The boot code expects the regions XEN_VIRT_START, FIXMAP_ADDR(0), - * BOOT_FDT_VIRT_START to use the same 0th (arm64 only) and 1st - * slot in the page tables. - */ -#define CHECK_SAME_SLOT(level, virt1, virt2) \ - BUILD_BUG_ON(level##_table_offset(virt1) !=3D level##_table_offset(vir= t2)) - -#define CHECK_DIFFERENT_SLOT(level, virt1, virt2) \ - BUILD_BUG_ON(level##_table_offset(virt1) =3D=3D level##_table_offset(v= irt2)) - -#ifdef CONFIG_ARM_64 - CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, FIXMAP_ADDR(0)); - CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, BOOT_FDT_VIRT_START); -#endif - CHECK_SAME_SLOT(first, XEN_VIRT_START, FIXMAP_ADDR(0)); - CHECK_SAME_SLOT(first, XEN_VIRT_START, BOOT_FDT_VIRT_START); - - /* - * For arm32, the temporary mapping will re-use the domheap - * first slot and the second slots will match. - */ -#ifdef CONFIG_ARM_32 - CHECK_SAME_SLOT(first, TEMPORARY_XEN_VIRT_START, DOMHEAP_VIRT_START); - CHECK_DIFFERENT_SLOT(first, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START); - CHECK_SAME_SLOT(second, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START); -#endif - -#undef CHECK_SAME_SLOT -#undef CHECK_DIFFERENT_SLOT -} - -static lpae_t *xen_map_table(mfn_t mfn) -{ - /* - * During early boot, map_domain_page() may be unusable. Use the - * PMAP to map temporarily a page-table. 
- */ - if ( system_state =3D=3D SYS_STATE_early_boot ) - return pmap_map(mfn); - - return map_domain_page(mfn); -} - -static void xen_unmap_table(const lpae_t *table) -{ - /* - * During early boot, xen_map_table() will not use map_domain_page() - * but the PMAP. - */ - if ( system_state =3D=3D SYS_STATE_early_boot ) - pmap_unmap(table); - else - unmap_domain_page(table); -} - -void dump_pt_walk(paddr_t ttbr, paddr_t addr, - unsigned int root_level, - unsigned int nr_root_tables) -{ - static const char *level_strs[4] =3D { "0TH", "1ST", "2ND", "3RD" }; - const mfn_t root_mfn =3D maddr_to_mfn(ttbr); - DECLARE_OFFSETS(offsets, addr); - lpae_t pte, *mapping; - unsigned int level, root_table; - -#ifdef CONFIG_ARM_32 - BUG_ON(root_level < 1); -#endif - BUG_ON(root_level > 3); - - if ( nr_root_tables > 1 ) - { - /* - * Concatenated root-level tables. The table number will be - * the offset at the previous level. It is not possible to - * concatenate a level-0 root. - */ - BUG_ON(root_level =3D=3D 0); - root_table =3D offsets[root_level - 1]; - printk("Using concatenated root table %u\n", root_table); - if ( root_table >=3D nr_root_tables ) - { - printk("Invalid root table offset\n"); - return; - } - } - else - root_table =3D 0; - - mapping =3D xen_map_table(mfn_add(root_mfn, root_table)); - - for ( level =3D root_level; ; level++ ) - { - if ( offsets[level] > XEN_PT_LPAE_ENTRIES ) - break; - - pte =3D mapping[offsets[level]]; - - printk("%s[0x%03x] =3D 0x%"PRIx64"\n", - level_strs[level], offsets[level], pte.bits); - - if ( level =3D=3D 3 || !pte.walk.valid || !pte.walk.table ) - break; - - /* For next iteration */ - xen_unmap_table(mapping); - mapping =3D xen_map_table(lpae_get_mfn(pte)); - } - - xen_unmap_table(mapping); -} - -void dump_hyp_walk(vaddr_t addr) -{ - uint64_t ttbr =3D READ_SYSREG64(TTBR0_EL2); - - printk("Walking Hypervisor VA 0x%"PRIvaddr" " - "on CPU%d via TTBR 0x%016"PRIx64"\n", - addr, smp_processor_id(), ttbr); - - dump_pt_walk(ttbr, addr, 
HYP_PT_ROOT_LEVEL, 1); -} - -lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr) -{ - lpae_t e =3D (lpae_t) { - .pt =3D { - .valid =3D 1, /* Mappings are present */ - .table =3D 0, /* Set to 1 for links and 4k maps */ - .ai =3D attr, - .ns =3D 1, /* Hyp mode is in the non-secure world= */ - .up =3D 1, /* See below */ - .ro =3D 0, /* Assume read-write */ - .af =3D 1, /* No need for access tracking */ - .ng =3D 1, /* Makes TLB flushes easier */ - .contig =3D 0, /* Assume non-contiguous */ - .xn =3D 1, /* No need to execute outside .text */ - .avail =3D 0, /* Reference count for domheap mapping= */ - }}; - /* - * For EL2 stage-1 page table, up (aka AP[1]) is RES1 as the translati= on - * regime applies to only one exception level (see D4.4.4 and G4.6.1 - * in ARM DDI 0487B.a). If this changes, remember to update the - * hard-coded values in head.S too. - */ - - switch ( attr ) - { - case MT_NORMAL_NC: - /* - * ARM ARM: Overlaying the shareability attribute (DDI - * 0406C.b B3-1376 to 1377) - * - * A memory region with a resultant memory type attribute of Norma= l, - * and a resultant cacheability attribute of Inner Non-cacheable, - * Outer Non-cacheable, must have a resultant shareability attribu= te - * of Outer Shareable, otherwise shareability is UNPREDICTABLE. - * - * On ARMv8 sharability is ignored and explicitly treated as Outer - * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable. - */ - e.pt.sh =3D LPAE_SH_OUTER; - break; - case MT_DEVICE_nGnRnE: - case MT_DEVICE_nGnRE: - /* - * Shareability is ignored for non-Normal memory, Outer is as - * good as anything. - * - * On ARMv8 sharability is ignored and explicitly treated as Outer - * Shareable for any device memory type. 
- */ - e.pt.sh =3D LPAE_SH_OUTER; - break; - default: - e.pt.sh =3D LPAE_SH_INNER; /* Xen mappings are SMP coherent */ - break; - } - - ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK)); - - lpae_set_mfn(e, mfn); - - return e; -} - -/* Map a 4k page in a fixmap entry */ -void set_fixmap(unsigned int map, mfn_t mfn, unsigned int flags) -{ - int res; - - res =3D map_pages_to_xen(FIXMAP_ADDR(map), mfn, 1, flags); - BUG_ON(res !=3D 0); -} - -/* Remove a mapping from a fixmap entry */ -void clear_fixmap(unsigned int map) -{ - int res; - - res =3D destroy_xen_mappings(FIXMAP_ADDR(map), FIXMAP_ADDR(map) + PAGE= _SIZE); - BUG_ON(res !=3D 0); -} - void flush_page_to_ram(unsigned long mfn, bool sync_icache) { void *v =3D map_domain_page(_mfn(mfn)); @@ -395,13 +59,6 @@ void flush_page_to_ram(unsigned long mfn, bool sync_ica= che) invalidate_icache(); } =20 -lpae_t pte_of_xenaddr(vaddr_t va) -{ - paddr_t ma =3D va + phys_offset; - - return mfn_to_xen_entry(maddr_to_mfn(ma), MT_NORMAL); -} - void * __init early_fdt_map(paddr_t fdt_paddr) { /* We are using 2MB superpage for mapping the FDT */ @@ -455,761 +112,11 @@ void * __init early_fdt_map(paddr_t fdt_paddr) return fdt_virt; } =20 -void __init remove_early_mappings(void) -{ - int rc; - - /* destroy the _PAGE_BLOCK mapping */ - rc =3D modify_xen_mappings(BOOT_FDT_VIRT_START, - BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE, - _PAGE_BLOCK); - BUG_ON(rc); -} - -/* - * After boot, Xen page-tables should not contain mapping that are both - * Writable and eXecutables. - * - * This should be called on each CPU to enforce the policy. - */ -static void xen_pt_enforce_wnx(void) -{ - WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2); - /* - * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized - * before flushing the TLBs. 
- */ - isb(); - flush_xen_tlb_local(); -} - -/* Clear a translation table and clean & invalidate the cache */ -static void clear_table(void *table) -{ - clear_page(table); - clean_and_invalidate_dcache_va_range(table, PAGE_SIZE); -} - -/* Boot-time pagetable setup. - * Changes here may need matching changes in head.S */ -void __init setup_pagetables(unsigned long boot_phys_offset) -{ - uint64_t ttbr; - lpae_t pte, *p; - int i; - - phys_offset =3D boot_phys_offset; - - arch_setup_page_tables(); - -#ifdef CONFIG_ARM_64 - pte =3D pte_of_xenaddr((uintptr_t)xen_first); - pte.pt.table =3D 1; - pte.pt.xn =3D 0; - xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] =3D pte; - - p =3D (void *) xen_first; -#else - p =3D (void *) cpu0_pgtable; -#endif - - /* Map xen second level page-table */ - p[0] =3D pte_of_xenaddr((uintptr_t)(xen_second)); - p[0].pt.table =3D 1; - p[0].pt.xn =3D 0; - - /* Break up the Xen mapping into 4k pages and protect them separately.= */ - for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) - { - vaddr_t va =3D XEN_VIRT_START + (i << PAGE_SHIFT); - - if ( !is_kernel(va) ) - break; - pte =3D pte_of_xenaddr(va); - pte.pt.table =3D 1; /* 4k mappings always have this bit set */ - if ( is_kernel_text(va) || is_kernel_inittext(va) ) - { - pte.pt.xn =3D 0; - pte.pt.ro =3D 1; - } - if ( is_kernel_rodata(va) ) - pte.pt.ro =3D 1; - xen_xenmap[i] =3D pte; - } - - /* Initialise xen second level entries ... */ - /* ... Xen's text etc */ - - pte =3D pte_of_xenaddr((vaddr_t)xen_xenmap); - pte.pt.table =3D 1; - xen_second[second_table_offset(XEN_VIRT_START)] =3D pte; - - /* ... 
Fixmap */ - pte =3D pte_of_xenaddr((vaddr_t)xen_fixmap); - pte.pt.table =3D 1; - xen_second[second_table_offset(FIXMAP_ADDR(0))] =3D pte; - -#ifdef CONFIG_ARM_64 - ttbr =3D (uintptr_t) xen_pgtable + phys_offset; -#else - ttbr =3D (uintptr_t) cpu0_pgtable + phys_offset; -#endif - - switch_ttbr(ttbr); - - xen_pt_enforce_wnx(); - -#ifdef CONFIG_ARM_32 - per_cpu(xen_pgtable, 0) =3D cpu0_pgtable; -#endif -} - -static void clear_boot_pagetables(void) -{ - /* - * Clear the copy of the boot pagetables. Each secondary CPU - * rebuilds these itself (see head.S). - */ - clear_table(boot_pgtable); -#ifdef CONFIG_ARM_64 - clear_table(boot_first); - clear_table(boot_first_id); -#endif - clear_table(boot_second); - clear_table(boot_third); -} - -#ifdef CONFIG_ARM_64 -int init_secondary_pagetables(int cpu) -{ - clear_boot_pagetables(); - - /* Set init_ttbr for this CPU coming up. All CPus share a single setof - * pagetables, but rewrite it each time for consistency with 32 bit. */ - init_ttbr =3D (uintptr_t) xen_pgtable + phys_offset; - clean_dcache(init_ttbr); - return 0; -} -#else -int init_secondary_pagetables(int cpu) -{ - lpae_t *first; - - first =3D alloc_xenheap_page(); /* root =3D=3D first level on 32-bit 3= -level trie */ - - if ( !first ) - { - printk("CPU%u: Unable to allocate the first page-table\n", cpu); - return -ENOMEM; - } - - /* Initialise root pagetable from root of boot tables */ - memcpy(first, cpu0_pgtable, PAGE_SIZE); - per_cpu(xen_pgtable, cpu) =3D first; - - if ( !init_domheap_mappings(cpu) ) - { - printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu); - per_cpu(xen_pgtable, cpu) =3D NULL; - free_xenheap_page(first); - return -ENOMEM; - } - - clear_boot_pagetables(); - - /* Set init_ttbr for this CPU coming up */ - init_ttbr =3D __pa(first); - clean_dcache(init_ttbr); - - return 0; -} -#endif - -/* MMU setup for secondary CPUS (which already have paging enabled) */ -void mmu_init_secondary_cpu(void) -{ - xen_pt_enforce_wnx(); -} - -#ifdef 
CONFIG_ARM_32
-/*
- * Set up the direct-mapped xenheap:
- * up to 1GB of contiguous, always-mapped memory.
- */
-void __init setup_directmap_mappings(unsigned long base_mfn,
-                                     unsigned long nr_mfns)
-{
-    int rc;
-
-    rc = map_pages_to_xen(XENHEAP_VIRT_START, _mfn(base_mfn), nr_mfns,
-                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
-    if ( rc )
-        panic("Unable to setup the directmap mappings.\n");
-
-    /* Record where the directmap is, for translation routines. */
-    directmap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
-}
-#else /* CONFIG_ARM_64 */
-/* Map the region in the directmap area. */
-void __init setup_directmap_mappings(unsigned long base_mfn,
-                                     unsigned long nr_mfns)
-{
-    int rc;
-
-    /* First call sets the directmap physical and virtual offset. */
-    if ( mfn_eq(directmap_mfn_start, INVALID_MFN) )
-    {
-        unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
-
-        directmap_mfn_start = _mfn(base_mfn);
-        directmap_base_pdx = mfn_to_pdx(_mfn(base_mfn));
-        /*
-         * The base address may not be aligned to the first level
-         * size (e.g. 1GB when using 4KB pages). This would prevent
-         * superpage mappings for all the regions because the virtual
-         * address and machine address should both be suitably aligned.
-         *
-         * Prevent that by offsetting the start of the directmap virtual
-         * address.
-         */
-        directmap_virt_start = DIRECTMAP_VIRT_START +
-            (base_mfn - mfn_gb) * PAGE_SIZE;
-    }
-
-    if ( base_mfn < mfn_x(directmap_mfn_start) )
-        panic("cannot add directmap mapping at %lx below heap start %lx\n",
-              base_mfn, mfn_x(directmap_mfn_start));
-
-    rc = map_pages_to_xen((vaddr_t)__mfn_to_virt(base_mfn),
-                          _mfn(base_mfn), nr_mfns,
-                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
-    if ( rc )
-        panic("Unable to setup the directmap mappings.\n");
-}
-#endif
-
-/* Map a frame table to cover physical addresses ps through pe */
-void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
-{
-    unsigned long nr_pdxs = mfn_to_pdx(mfn_add(maddr_to_mfn(pe), -1)) -
-                            mfn_to_pdx(maddr_to_mfn(ps)) + 1;
-    unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
-    mfn_t base_mfn;
-    const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
-    int rc;
-
-    /*
-     * The size of paddr_t should be sufficient for the complete range of
-     * physical address.
-     */
-    BUILD_BUG_ON((sizeof(paddr_t) * BITS_PER_BYTE) < PADDR_BITS);
-    BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
-
-    if ( frametable_size > FRAMETABLE_SIZE )
-        panic("The frametable cannot cover the physical region %#"PRIpaddr" - %#"PRIpaddr"\n",
-              ps, pe);
-
-    frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
-    /* Round up to 2M or 32M boundary, as appropriate. */
-    frametable_size = ROUNDUP(frametable_size, mapping_size);
-    base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
-
-    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
-                          frametable_size >> PAGE_SHIFT,
-                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
-    if ( rc )
-        panic("Unable to setup the frametable mappings.\n");
-
-    memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
-    memset(&frame_table[nr_pdxs], -1,
-           frametable_size - (nr_pdxs * sizeof(struct page_info)));
-
-    frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pdxs * sizeof(struct page_info));
-}
-
-void *__init arch_vmap_virt_end(void)
-{
-    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
-}
-
-/*
- * This function should only be used to remap device address ranges
- * TODO: add a check to verify this assumption
- */
-void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes)
-{
-    mfn_t mfn = _mfn(PFN_DOWN(pa));
-    unsigned int offs = pa & (PAGE_SIZE - 1);
-    unsigned int nr = PFN_UP(offs + len);
-    void *ptr = __vmap(&mfn, nr, 1, 1, attributes, VMAP_DEFAULT);
-
-    if ( ptr == NULL )
-        return NULL;
-
-    return ptr + offs;
-}
-
 void *ioremap(paddr_t pa, size_t len)
 {
     return ioremap_attr(pa, len, PAGE_HYPERVISOR_NOCACHE);
 }

-static int create_xen_table(lpae_t *entry)
-{
-    mfn_t mfn;
-    void *p;
-    lpae_t pte;
-
-    if ( system_state != SYS_STATE_early_boot )
-    {
-        struct page_info *pg = alloc_domheap_page(NULL, 0);
-
-        if ( pg == NULL )
-            return -ENOMEM;
-
-        mfn = page_to_mfn(pg);
-    }
-    else
-        mfn = alloc_boot_pages(1, 1);
-
-    p = xen_map_table(mfn);
-    clear_page(p);
-    xen_unmap_table(p);
-
-    pte = mfn_to_xen_entry(mfn, MT_NORMAL);
-    pte.pt.table = 1;
-    write_pte(entry, pte);
-
-    return 0;
-}
-
-#define XEN_TABLE_MAP_FAILED 0
-#define XEN_TABLE_SUPER_PAGE 1
-#define XEN_TABLE_NORMAL_PAGE 2
-
-/*
- * Take the currently mapped table, find the corresponding entry,
- * and map the next table, if available.
- *
- * The read_only parameters indicates whether intermediate tables should
- * be allocated when not present.
- *
- * Return values:
- *  XEN_TABLE_MAP_FAILED: Either read_only was set and the entry
- *    was empty, or allocating a new page failed.
- *  XEN_TABLE_NORMAL_PAGE: next level mapped normally
- *  XEN_TABLE_SUPER_PAGE: The next entry points to a superpage.
- */
-static int xen_pt_next_level(bool read_only, unsigned int level,
-                             lpae_t **table, unsigned int offset)
-{
-    lpae_t *entry;
-    int ret;
-    mfn_t mfn;
-
-    entry = *table + offset;
-
-    if ( !lpae_is_valid(*entry) )
-    {
-        if ( read_only )
-            return XEN_TABLE_MAP_FAILED;
-
-        ret = create_xen_table(entry);
-        if ( ret )
-            return XEN_TABLE_MAP_FAILED;
-    }
-
-    /* The function xen_pt_next_level is never called at the 3rd level */
-    if ( lpae_is_mapping(*entry, level) )
-        return XEN_TABLE_SUPER_PAGE;
-
-    mfn = lpae_get_mfn(*entry);
-
-    xen_unmap_table(*table);
-    *table = xen_map_table(mfn);
-
-    return XEN_TABLE_NORMAL_PAGE;
-}
-
-/* Sanity check of the entry */
-static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
-                               unsigned int flags)
-{
-    /* Sanity check when modifying an entry. */
-    if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
-    {
-        /* We don't allow modifying an invalid entry. */
-        if ( !lpae_is_valid(entry) )
-        {
-            mm_printk("Modifying invalid entry is not allowed.\n");
-            return false;
-        }
-
-        /* We don't allow modifying a table entry */
-        if ( !lpae_is_mapping(entry, level) )
-        {
-            mm_printk("Modifying a table entry is not allowed.\n");
-            return false;
-        }
-
-        /* We don't allow changing memory attributes. */
-        if ( entry.pt.ai != PAGE_AI_MASK(flags) )
-        {
-            mm_printk("Modifying memory attributes is not allowed (0x%x -> 0x%x).\n",
-                      entry.pt.ai, PAGE_AI_MASK(flags));
-            return false;
-        }
-
-        /* We don't allow modifying entry with contiguous bit set. */
-        if ( entry.pt.contig )
-        {
-            mm_printk("Modifying entry with contiguous bit set is not allowed.\n");
-            return false;
-        }
-    }
-    /* Sanity check when inserting a mapping */
-    else if ( flags & _PAGE_PRESENT )
-    {
-        /* We should be here with a valid MFN. */
-        ASSERT(!mfn_eq(mfn, INVALID_MFN));
-
-        /*
-         * We don't allow replacing any valid entry.
-         *
-         * Note that the function xen_pt_update() relies on this
-         * assumption and will skip the TLB flush. The function will need
-         * to be updated if the check is relaxed.
-         */
-        if ( lpae_is_valid(entry) )
-        {
-            if ( lpae_is_mapping(entry, level) )
-                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
-                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
-            else
-                mm_printk("Trying to replace a table with a mapping.\n");
-            return false;
-        }
-    }
-    /* Sanity check when removing a mapping. */
-    else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
-    {
-        /* We should be here with an invalid MFN. */
-        ASSERT(mfn_eq(mfn, INVALID_MFN));
-
-        /* We don't allow removing a table */
-        if ( lpae_is_table(entry, level) )
-        {
-            mm_printk("Removing a table is not allowed.\n");
-            return false;
-        }
-
-        /* We don't allow removing a mapping with contiguous bit set. */
-        if ( entry.pt.contig )
-        {
-            mm_printk("Removing entry with contiguous bit set is not allowed.\n");
-            return false;
-        }
-    }
-    /* Sanity check when populating the page-table. No check so far. */
-    else
-    {
-        ASSERT(flags & _PAGE_POPULATE);
-        /* We should be here with an invalid MFN */
-        ASSERT(mfn_eq(mfn, INVALID_MFN));
-    }
-
-    return true;
-}
-
-/* Update an entry at the level @target. */
-static int xen_pt_update_entry(mfn_t root, unsigned long virt,
-                               mfn_t mfn, unsigned int target,
-                               unsigned int flags)
-{
-    int rc;
-    unsigned int level;
-    lpae_t *table;
-    /*
-     * The intermediate page tables are read-only when the MFN is not valid
-     * and we are not populating page table.
-     * This means we either modify permissions or remove an entry.
-     */
-    bool read_only = mfn_eq(mfn, INVALID_MFN) && !(flags & _PAGE_POPULATE);
-    lpae_t pte, *entry;
-
-    /* convenience aliases */
-    DECLARE_OFFSETS(offsets, (paddr_t)virt);
-
-    /* _PAGE_POPULATE and _PAGE_PRESENT should never be set together. */
-    ASSERT((flags & (_PAGE_POPULATE|_PAGE_PRESENT)) != (_PAGE_POPULATE|_PAGE_PRESENT));
-
-    table = xen_map_table(root);
-    for ( level = HYP_PT_ROOT_LEVEL; level < target; level++ )
-    {
-        rc = xen_pt_next_level(read_only, level, &table, offsets[level]);
-        if ( rc == XEN_TABLE_MAP_FAILED )
-        {
-            /*
-             * We are here because xen_pt_next_level has failed to map
-             * the intermediate page table (e.g the table does not exist
-             * and the pt is read-only). It is a valid case when
-             * removing a mapping as it may not exist in the page table.
-             * In this case, just ignore it.
-             */
-            if ( flags & (_PAGE_PRESENT|_PAGE_POPULATE) )
-            {
-                mm_printk("%s: Unable to map level %u\n", __func__, level);
-                rc = -ENOENT;
-                goto out;
-            }
-            else
-            {
-                rc = 0;
-                goto out;
-            }
-        }
-        else if ( rc != XEN_TABLE_NORMAL_PAGE )
-            break;
-    }
-
-    if ( level != target )
-    {
-        mm_printk("%s: Shattering superpage is not supported\n", __func__);
-        rc = -EOPNOTSUPP;
-        goto out;
-    }
-
-    entry = table + offsets[level];
-
-    rc = -EINVAL;
-    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
-        goto out;
-
-    /* If we are only populating page-table, then we are done. */
-    rc = 0;
-    if ( flags & _PAGE_POPULATE )
-        goto out;
-
-    /* We are removing the page */
-    if ( !(flags & _PAGE_PRESENT) )
-        memset(&pte, 0x00, sizeof(pte));
-    else
-    {
-        /* We are inserting a mapping => Create new pte. */
-        if ( !mfn_eq(mfn, INVALID_MFN) )
-        {
-            pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
-
-            /*
-             * First and second level pages set pte.pt.table = 0, but
-             * third level entries set pte.pt.table = 1.
-             */
-            pte.pt.table = (level == 3);
-        }
-        else /* We are updating the permission => Copy the current pte. */
-            pte = *entry;
-
-        /* Set permission */
-        pte.pt.ro = PAGE_RO_MASK(flags);
-        pte.pt.xn = PAGE_XN_MASK(flags);
-        /* Set contiguous bit */
-        pte.pt.contig = !!(flags & _PAGE_CONTIG);
-    }
-
-    write_pte(entry, pte);
-
-    rc = 0;
-
-out:
-    xen_unmap_table(table);
-
-    return rc;
-}
-
-/* Return the level where mapping should be done */
-static int xen_pt_mapping_level(unsigned long vfn, mfn_t mfn, unsigned long nr,
-                                unsigned int flags)
-{
-    unsigned int level;
-    unsigned long mask;
-
-    /*
-     * Don't take into account the MFN when removing mapping (i.e
-     * MFN_INVALID) to calculate the correct target order.
-     *
-     * Per the Arm Arm, `vfn` and `mfn` must be both superpage aligned.
-     * They are or-ed together and then checked against the size of
-     * each level.
-     *
-     * `left` is not included and checked separately to allow
-     * superpage mapping even if it is not properly aligned (the
-     * user may have asked to map 2MB + 4k).
-     */
-    mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
-    mask |= vfn;
-
-    /*
-     * Always use level 3 mapping unless the caller request block
-     * mapping.
-     */
-    if ( likely(!(flags & _PAGE_BLOCK)) )
-        level = 3;
-    else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) &&
-              (nr >= BIT(FIRST_ORDER, UL)) )
-        level = 1;
-    else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) &&
-              (nr >= BIT(SECOND_ORDER, UL)) )
-        level = 2;
-    else
-        level = 3;
-
-    return level;
-}
-
-#define XEN_PT_4K_NR_CONTIG 16
-
-/*
- * Check whether the contiguous bit can be set. Return the number of
- * contiguous entry allowed. If not allowed, return 1.
- */
-static unsigned int xen_pt_check_contig(unsigned long vfn, mfn_t mfn,
-                                        unsigned int level, unsigned long left,
-                                        unsigned int flags)
-{
-    unsigned long nr_contig;
-
-    /*
-     * Allow the contiguous bit to set when the caller requests block
-     * mapping.
-     */
-    if ( !(flags & _PAGE_BLOCK) )
-        return 1;
-
-    /*
-     * We don't allow to remove mapping with the contiguous bit set.
-     * So shortcut the logic and directly return 1.
-     */
-    if ( mfn_eq(mfn, INVALID_MFN) )
-        return 1;
-
-    /*
-     * The number of contiguous entries varies depending on the page
-     * granularity used. The logic below assumes 4KB.
-     */
-    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
-
-    /*
-     * In order to enable the contiguous bit, we should have enough entries
-     * to map left and both the virtual and physical address should be
-     * aligned to the size of 16 translation tables entries.
-     */
-    nr_contig = BIT(XEN_PT_LEVEL_ORDER(level), UL) * XEN_PT_4K_NR_CONTIG;
-
-    if ( (left < nr_contig) || ((mfn_x(mfn) | vfn) & (nr_contig - 1)) )
-        return 1;
-
-    return XEN_PT_4K_NR_CONTIG;
-}
-
-static DEFINE_SPINLOCK(xen_pt_lock);
-
-static int xen_pt_update(unsigned long virt,
-                         mfn_t mfn,
-                         /* const on purpose as it is used for TLB flush */
-                         const unsigned long nr_mfns,
-                         unsigned int flags)
-{
-    int rc = 0;
-    unsigned long vfn = virt >> PAGE_SHIFT;
-    unsigned long left = nr_mfns;
-
-    /*
-     * For arm32, page-tables are different on each CPUs. Yet, they share
-     * some common mappings. It is assumed that only common mappings
-     * will be modified with this function.
-     *
-     * XXX: Add a check.
-     */
-    const mfn_t root = maddr_to_mfn(READ_SYSREG64(TTBR0_EL2));
-
-    /*
-     * The hardware was configured to forbid mapping both writeable and
-     * executable.
-     * When modifying/creating mapping (i.e _PAGE_PRESENT is set),
-     * prevent any update if this happen.
-     */
-    if ( (flags & _PAGE_PRESENT) && !PAGE_RO_MASK(flags) &&
-         !PAGE_XN_MASK(flags) )
-    {
-        mm_printk("Mappings should not be both Writeable and Executable.\n");
-        return -EINVAL;
-    }
-
-    if ( flags & _PAGE_CONTIG )
-    {
-        mm_printk("_PAGE_CONTIG is an internal only flag.\n");
-        return -EINVAL;
-    }
-
-    if ( !IS_ALIGNED(virt, PAGE_SIZE) )
-    {
-        mm_printk("The virtual address is not aligned to the page-size.\n");
-        return -EINVAL;
-    }
-
-    spin_lock(&xen_pt_lock);
-
-    while ( left )
-    {
-        unsigned int order, level, nr_contig, new_flags;
-
-        level = xen_pt_mapping_level(vfn, mfn, left, flags);
-        order = XEN_PT_LEVEL_ORDER(level);
-
-        ASSERT(left >= BIT(order, UL));
-
-        /*
-         * Check if we can set the contiguous mapping and update the
-         * flags accordingly.
-         */
-        nr_contig = xen_pt_check_contig(vfn, mfn, level, left, flags);
-        new_flags = flags | ((nr_contig > 1) ? _PAGE_CONTIG : 0);
-
-        for ( ; nr_contig > 0; nr_contig-- )
-        {
-            rc = xen_pt_update_entry(root, vfn << PAGE_SHIFT, mfn, level,
-                                     new_flags);
-            if ( rc )
-                break;
-
-            vfn += 1U << order;
-            if ( !mfn_eq(mfn, INVALID_MFN) )
-                mfn = mfn_add(mfn, 1U << order);
-
-            left -= (1U << order);
-        }
-
-        if ( rc )
-            break;
-    }
-
-    /*
-     * The TLBs flush can be safely skipped when a mapping is inserted
-     * as we don't allow mapping replacement (see xen_pt_check_entry()).
-     *
-     * For all the other cases, the TLBs will be flushed unconditionally
-     * even if the mapping has failed. This is because we may have
-     * partially modified the PT. This will prevent any unexpected
-     * behavior afterwards.
-     */
-    if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) )
-        flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
-
-    spin_unlock(&xen_pt_lock);
-
-    return rc;
-}
-
 int map_pages_to_xen(unsigned long virt,
                      mfn_t mfn,
                      unsigned long nr_mfns,
@@ -1218,11 +125,6 @@ int map_pages_to_xen(unsigned long virt,
     return xen_pt_update(virt, mfn, nr_mfns, flags);
 }

-int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
-{
-    return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
-}
-
 int destroy_xen_mappings(unsigned long s, unsigned long e)
 {
     ASSERT(IS_ALIGNED(s, PAGE_SIZE));
diff --git a/xen/arch/arm/mmu/mm.c b/xen/arch/arm/mmu/mm.c
new file mode 100644
index 0000000000..43c19fa914
--- /dev/null
+++ b/xen/arch/arm/mmu/mm.c
@@ -0,0 +1,1125 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * xen/arch/arm/mm-mmu.c
+ *
+ * MMU code for an ARMv7-A with virt extensions.
+ *
+ * Tim Deegan
+ * Copyright (c) 2011 Citrix Systems.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+#undef mfn_to_virt
+#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn))
+
+#ifdef NDEBUG
+static inline void
+__attribute__ ((__format__ (__printf__, 1, 2)))
+mm_printk(const char *fmt, ...) {}
+#else
+#define mm_printk(fmt, args...)            \
+    do                                     \
+    {                                      \
+        dprintk(XENLOG_ERR, fmt, ## args); \
+        WARN();                            \
+    } while (0)
+#endif
+
+/* Static start-of-day pagetables that we use before the allocators
+ * are up. These are used by all CPUs during bringup before switching
+ * to the CPUs own pagetables.
+ *
+ * These pagetables have a very simple structure. They include:
+ *  - 2MB worth of 4K mappings of xen at XEN_VIRT_START, boot_first and
+ *    boot_second are used to populate the tables down to boot_third
+ *    which contains the actual mapping.
+ *  - a 1:1 mapping of xen at its current physical address. This uses a
+ *    section mapping at whichever of boot_{pgtable,first,second}
+ *    covers that physical address.
+ *
+ * For the boot CPU these mappings point to the address where Xen was
+ * loaded by the bootloader. For secondary CPUs they point to the
+ * relocated copy of Xen for the benefit of secondary CPUs.
+ *
+ * In addition to the above for the boot CPU the device-tree is
+ * initially mapped in the boot misc slot. This mapping is not present
+ * for secondary CPUs.
+ *
+ * Finally, if EARLY_PRINTK is enabled then xen_fixmap will be mapped
+ * by the CPU once it has moved off the 1:1 mapping.
+ */
+DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
+#ifdef CONFIG_ARM_64
+DEFINE_BOOT_PAGE_TABLE(boot_first);
+DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+#endif
+DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+DEFINE_BOOT_PAGE_TABLE(boot_third_id);
+DEFINE_BOOT_PAGE_TABLE(boot_second);
+DEFINE_BOOT_PAGE_TABLE(boot_third);
+
+/* Main runtime page tables */
+
+/*
+ * For arm32 xen_pgtable are per-PCPU and are allocated before
+ * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs.
+ *
+ * xen_second, xen_fixmap and xen_xenmap are always shared between all
+ * PCPUs.
+ */
+
+#ifdef CONFIG_ARM_64
+#define HYP_PT_ROOT_LEVEL 0
+DEFINE_PAGE_TABLE(xen_pgtable);
+static DEFINE_PAGE_TABLE(xen_first);
+#define THIS_CPU_PGTABLE xen_pgtable
+#else
+#define HYP_PT_ROOT_LEVEL 1
+/* Per-CPU pagetable pages */
+/* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
+DEFINE_PER_CPU(lpae_t *, xen_pgtable);
+#define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
+/* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
+static DEFINE_PAGE_TABLE(cpu0_pgtable);
+#endif
+
+/* Common pagetable leaves */
+/* Second level page table used to cover Xen virtual address space */
+static DEFINE_PAGE_TABLE(xen_second);
+/* Third level page table used for fixmap */
+DEFINE_BOOT_PAGE_TABLE(xen_fixmap);
+/*
+ * Third level page table used to map Xen itself with the XN bit set
+ * as appropriate.
+ */
+static DEFINE_PAGE_TABLE(xen_xenmap);
+
+static paddr_t phys_offset;
+
+/* Limits of the Xen heap */
+mfn_t directmap_mfn_start __read_mostly = INVALID_MFN_INITIALIZER;
+mfn_t directmap_mfn_end __read_mostly;
+vaddr_t directmap_virt_end __read_mostly;
+#ifdef CONFIG_ARM_64
+vaddr_t directmap_virt_start __read_mostly;
+unsigned long directmap_base_pdx __read_mostly;
+#endif
+
+/* Checking VA memory layout alignment. */
+static void __init __maybe_unused build_assertions(void)
+{
+    /* 2MB aligned regions */
+    BUILD_BUG_ON(XEN_VIRT_START & ~SECOND_MASK);
+    BUILD_BUG_ON(FIXMAP_ADDR(0) & ~SECOND_MASK);
+    /* 1GB aligned regions */
+#ifdef CONFIG_ARM_32
+    BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK);
+#else
+    BUILD_BUG_ON(DIRECTMAP_VIRT_START & ~FIRST_MASK);
+#endif
+    /* Page table structure constraints */
+#ifdef CONFIG_ARM_64
+    /*
+     * The first few slots of the L0 table is reserved for the identity
+     * mapping. Check that none of the other regions are overlapping
+     * with it.
+     */
+#define CHECK_OVERLAP_WITH_IDMAP(virt) \
+    BUILD_BUG_ON(zeroeth_table_offset(virt) < IDENTITY_MAPPING_AREA_NR_L0)
+
+    CHECK_OVERLAP_WITH_IDMAP(XEN_VIRT_START);
+    CHECK_OVERLAP_WITH_IDMAP(VMAP_VIRT_START);
+    CHECK_OVERLAP_WITH_IDMAP(FRAMETABLE_VIRT_START);
+    CHECK_OVERLAP_WITH_IDMAP(DIRECTMAP_VIRT_START);
+#undef CHECK_OVERLAP_WITH_IDMAP
+#endif
+    BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
+#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
+    BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
+#endif
+    /*
+     * The boot code expects the regions XEN_VIRT_START, FIXMAP_ADDR(0),
+     * BOOT_FDT_VIRT_START to use the same 0th (arm64 only) and 1st
+     * slot in the page tables.
+     */
+#define CHECK_SAME_SLOT(level, virt1, virt2) \
+    BUILD_BUG_ON(level##_table_offset(virt1) != level##_table_offset(virt2))

+#define CHECK_DIFFERENT_SLOT(level, virt1, virt2) \
+    BUILD_BUG_ON(level##_table_offset(virt1) == level##_table_offset(virt2))
+
+#ifdef CONFIG_ARM_64
+    CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, FIXMAP_ADDR(0));
+    CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, BOOT_FDT_VIRT_START);
+#endif
+    CHECK_SAME_SLOT(first, XEN_VIRT_START, FIXMAP_ADDR(0));
+    CHECK_SAME_SLOT(first, XEN_VIRT_START, BOOT_FDT_VIRT_START);
+
+    /*
+     * For arm32, the temporary mapping will re-use the domheap
+     * first slot and the second slots will match.
+     */
+#ifdef CONFIG_ARM_32
+    CHECK_SAME_SLOT(first, TEMPORARY_XEN_VIRT_START, DOMHEAP_VIRT_START);
+    CHECK_DIFFERENT_SLOT(first, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START);
+    CHECK_SAME_SLOT(second, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START);
+#endif
+
+#undef CHECK_SAME_SLOT
+#undef CHECK_DIFFERENT_SLOT
+}
+
+static lpae_t *xen_map_table(mfn_t mfn)
+{
+    /*
+     * During early boot, map_domain_page() may be unusable. Use the
+     * PMAP to map temporarily a page-table.
+     */
+    if ( system_state == SYS_STATE_early_boot )
+        return pmap_map(mfn);
+
+    return map_domain_page(mfn);
+}
+
+static void xen_unmap_table(const lpae_t *table)
+{
+    /*
+     * During early boot, xen_map_table() will not use map_domain_page()
+     * but the PMAP.
+     */
+    if ( system_state == SYS_STATE_early_boot )
+        pmap_unmap(table);
+    else
+        unmap_domain_page(table);
+}
+
+void dump_pt_walk(paddr_t ttbr, paddr_t addr,
+                  unsigned int root_level,
+                  unsigned int nr_root_tables)
+{
+    static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
+    const mfn_t root_mfn = maddr_to_mfn(ttbr);
+    DECLARE_OFFSETS(offsets, addr);
+    lpae_t pte, *mapping;
+    unsigned int level, root_table;
+
+#ifdef CONFIG_ARM_32
+    BUG_ON(root_level < 1);
+#endif
+    BUG_ON(root_level > 3);
+
+    if ( nr_root_tables > 1 )
+    {
+        /*
+         * Concatenated root-level tables. The table number will be
+         * the offset at the previous level. It is not possible to
+         * concatenate a level-0 root.
+         */
+        BUG_ON(root_level == 0);
+        root_table = offsets[root_level - 1];
+        printk("Using concatenated root table %u\n", root_table);
+        if ( root_table >= nr_root_tables )
+        {
+            printk("Invalid root table offset\n");
+            return;
+        }
+    }
+    else
+        root_table = 0;
+
+    mapping = xen_map_table(mfn_add(root_mfn, root_table));
+
+    for ( level = root_level; ; level++ )
+    {
+        if ( offsets[level] > XEN_PT_LPAE_ENTRIES )
+            break;
+
+        pte = mapping[offsets[level]];
+
+        printk("%s[0x%03x] = 0x%"PRIx64"\n",
+               level_strs[level], offsets[level], pte.bits);
+
+        if ( level == 3 || !pte.walk.valid || !pte.walk.table )
+            break;
+
+        /* For next iteration */
+        xen_unmap_table(mapping);
+        mapping = xen_map_table(lpae_get_mfn(pte));
+    }
+
+    xen_unmap_table(mapping);
+}
+
+void dump_hyp_walk(vaddr_t addr)
+{
+    uint64_t ttbr = READ_SYSREG64(TTBR0_EL2);
+
+    printk("Walking Hypervisor VA 0x%"PRIvaddr" "
+           "on CPU%d via TTBR 0x%016"PRIx64"\n",
+           addr, smp_processor_id(), ttbr);
+
+    dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1);
+}
+
+lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr)
+{
+    lpae_t e = (lpae_t) {
+        .pt = {
+            .valid = 1,           /* Mappings are present */
+            .table = 0,           /* Set to 1 for links and 4k maps */
+            .ai = attr,
+            .ns = 1,              /* Hyp mode is in the non-secure world */
+            .up = 1,              /* See below */
+            .ro = 0,              /* Assume read-write */
+            .af = 1,              /* No need for access tracking */
+            .ng = 1,              /* Makes TLB flushes easier */
+            .contig = 0,          /* Assume non-contiguous */
+            .xn = 1,              /* No need to execute outside .text */
+            .avail = 0,           /* Reference count for domheap mapping */
+        }};
+    /*
+     * For EL2 stage-1 page table, up (aka AP[1]) is RES1 as the translation
+     * regime applies to only one exception level (see D4.4.4 and G4.6.1
+     * in ARM DDI 0487B.a). If this changes, remember to update the
+     * hard-coded values in head.S too.
+     */
+
+    switch ( attr )
+    {
+    case MT_NORMAL_NC:
+        /*
+         * ARM ARM: Overlaying the shareability attribute (DDI
+         * 0406C.b B3-1376 to 1377)
+         *
+         * A memory region with a resultant memory type attribute of Normal,
+         * and a resultant cacheability attribute of Inner Non-cacheable,
+         * Outer Non-cacheable, must have a resultant shareability attribute
+         * of Outer Shareable, otherwise shareability is UNPREDICTABLE.
+         *
+         * On ARMv8 sharability is ignored and explicitly treated as Outer
+         * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable.
+         */
+        e.pt.sh = LPAE_SH_OUTER;
+        break;
+    case MT_DEVICE_nGnRnE:
+    case MT_DEVICE_nGnRE:
+        /*
+         * Shareability is ignored for non-Normal memory, Outer is as
+         * good as anything.
+         *
+         * On ARMv8 sharability is ignored and explicitly treated as Outer
+         * Shareable for any device memory type.
+         */
+        e.pt.sh = LPAE_SH_OUTER;
+        break;
+    default:
+        e.pt.sh = LPAE_SH_INNER;  /* Xen mappings are SMP coherent */
+        break;
+    }
+
+    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK));
+
+    lpae_set_mfn(e, mfn);
+
+    return e;
+}
+
+/* Map a 4k page in a fixmap entry */
+void set_fixmap(unsigned int map, mfn_t mfn, unsigned int flags)
+{
+    int res;
+
+    res = map_pages_to_xen(FIXMAP_ADDR(map), mfn, 1, flags);
+    BUG_ON(res != 0);
+}
+
+/* Remove a mapping from a fixmap entry */
+void clear_fixmap(unsigned int map)
+{
+    int res;
+
+    res = destroy_xen_mappings(FIXMAP_ADDR(map), FIXMAP_ADDR(map) + PAGE_SIZE);
+    BUG_ON(res != 0);
+}
+
+lpae_t pte_of_xenaddr(vaddr_t va)
+{
+    paddr_t ma = va + phys_offset;
+
+    return mfn_to_xen_entry(maddr_to_mfn(ma), MT_NORMAL);
+}
+
+void __init remove_early_mappings(void)
+{
+    int rc;
+
+    /* destroy the _PAGE_BLOCK mapping */
+    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
+                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
+                             _PAGE_BLOCK);
+    BUG_ON(rc);
+}
+
+/*
+ * After boot, Xen page-tables should not contain mapping that are both
+ * Writable and eXecutables.
+ *
+ * This should be called on each CPU to enforce the policy.
+ */
+static void xen_pt_enforce_wnx(void)
+{
+    WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2);
+    /*
+     * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized
+     * before flushing the TLBs.
+     */
+    isb();
+    flush_xen_tlb_local();
+}
+
+/* Clear a translation table and clean & invalidate the cache */
+static void clear_table(void *table)
+{
+    clear_page(table);
+    clean_and_invalidate_dcache_va_range(table, PAGE_SIZE);
+}
+
+/* Boot-time pagetable setup.
+ * Changes here may need matching changes in head.S */
+void __init setup_pagetables(unsigned long boot_phys_offset)
+{
+    uint64_t ttbr;
+    lpae_t pte, *p;
+    int i;
+
+    phys_offset = boot_phys_offset;
+
+    arch_setup_page_tables();
+
+#ifdef CONFIG_ARM_64
+    pte = pte_of_xenaddr((uintptr_t)xen_first);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] = pte;
+
+    p = (void *) xen_first;
+#else
+    p = (void *) cpu0_pgtable;
+#endif
+
+    /* Map xen second level page-table */
+    p[0] = pte_of_xenaddr((uintptr_t)(xen_second));
+    p[0].pt.table = 1;
+    p[0].pt.xn = 0;
+
+    /* Break up the Xen mapping into 4k pages and protect them separately. */
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+    {
+        vaddr_t va = XEN_VIRT_START + (i << PAGE_SHIFT);
+
+        if ( !is_kernel(va) )
+            break;
+        pte = pte_of_xenaddr(va);
+        pte.pt.table = 1; /* 4k mappings always have this bit set */
+        if ( is_kernel_text(va) || is_kernel_inittext(va) )
+        {
+            pte.pt.xn = 0;
+            pte.pt.ro = 1;
+        }
+        if ( is_kernel_rodata(va) )
+            pte.pt.ro = 1;
+        xen_xenmap[i] = pte;
+    }
+
+    /* Initialise xen second level entries ... */
+    /* ... Xen's text etc */
+
+    pte = pte_of_xenaddr((vaddr_t)xen_xenmap);
+    pte.pt.table = 1;
+    xen_second[second_table_offset(XEN_VIRT_START)] = pte;
+
+    /* ... Fixmap */
+    pte = pte_of_xenaddr((vaddr_t)xen_fixmap);
+    pte.pt.table = 1;
+    xen_second[second_table_offset(FIXMAP_ADDR(0))] = pte;
+
+#ifdef CONFIG_ARM_64
+    ttbr = (uintptr_t) xen_pgtable + phys_offset;
+#else
+    ttbr = (uintptr_t) cpu0_pgtable + phys_offset;
+#endif
+
+    switch_ttbr(ttbr);
+
+    xen_pt_enforce_wnx();
+
+#ifdef CONFIG_ARM_32
+    per_cpu(xen_pgtable, 0) = cpu0_pgtable;
+#endif
+}
+
+static void clear_boot_pagetables(void)
+{
+    /*
+     * Clear the copy of the boot pagetables. Each secondary CPU
+     * rebuilds these itself (see head.S).
+     */
+    clear_table(boot_pgtable);
+#ifdef CONFIG_ARM_64
+    clear_table(boot_first);
+    clear_table(boot_first_id);
+#endif
+    clear_table(boot_second);
+    clear_table(boot_third);
+}
+
+#ifdef CONFIG_ARM_64
+int init_secondary_mm(int cpu)
+{
+    clear_boot_pagetables();
+
+    /* Set init_mm for this CPU coming up. All CPus share a single setof
+     * pagetables, but rewrite it each time for consistency with 32 bit. */
+    init_mm = (uintptr_t) xen_pgtable + phys_offset;
+    clean_dcache(init_mm);
+    return 0;
+}
+#else
+int init_secondary_mm(int cpu)
+{
+    lpae_t *first;
+
+    first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
+
+    if ( !first )
+    {
+        printk("CPU%u: Unable to allocate the first page-table\n", cpu);
+        return -ENOMEM;
+    }
+
+    /* Initialise root pagetable from root of boot tables */
+    memcpy(first, cpu0_pgtable, PAGE_SIZE);
+    per_cpu(xen_pgtable, cpu) = first;
+
+    if ( !init_domheap_mappings(cpu) )
+    {
+        printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu);
+        per_cpu(xen_pgtable, cpu) = NULL;
+        free_xenheap_page(first);
+        return -ENOMEM;
+    }
+
+    clear_boot_pagetables();
+
+    /* Set init_mm for this CPU coming up */
+    init_mm = __pa(first);
+    clean_dcache(init_mm);
+
+    return 0;
+}
+#endif
+
+/* MMU setup for secondary CPUS (which already have paging enabled) */
+void mm_init_secondary_cpu(void)
+{
+    xen_pt_enforce_wnx();
+}
+
+#ifdef CONFIG_ARM_32
+/*
+ * Set up the direct-mapped xenheap:
+ * up to 1GB of contiguous, always-mapped memory.
+ */
+void __init setup_directmap_mappings(unsigned long base_mfn,
+                                     unsigned long nr_mfns)
+{
+    int rc;
+
+    rc = map_pages_to_xen(XENHEAP_VIRT_START, _mfn(base_mfn), nr_mfns,
+                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
+    if ( rc )
+        panic("Unable to setup the directmap mappings.\n");
+
+    /* Record where the directmap is, for translation routines. */
+    directmap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
+}
+#else /* CONFIG_ARM_64 */
+/* Map the region in the directmap area. */
+void __init setup_directmap_mappings(unsigned long base_mfn,
+                                     unsigned long nr_mfns)
+{
+    int rc;
+
+    /* First call sets the directmap physical and virtual offset. */
+    if ( mfn_eq(directmap_mfn_start, INVALID_MFN) )
+    {
+        unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
+
+        directmap_mfn_start = _mfn(base_mfn);
+        directmap_base_pdx = mfn_to_pdx(_mfn(base_mfn));
+        /*
+         * The base address may not be aligned to the first level
+         * size (e.g. 1GB when using 4KB pages). This would prevent
+         * superpage mappings for all the regions because the virtual
+         * address and machine address should both be suitably aligned.
+         *
+         * Prevent that by offsetting the start of the directmap virtual
+         * address.
+ */ + directmap_virt_start =3D DIRECTMAP_VIRT_START + + (base_mfn - mfn_gb) * PAGE_SIZE; + } + + if ( base_mfn < mfn_x(directmap_mfn_start) ) + panic("cannot add directmap mapping at %lx below heap start %lx\n", + base_mfn, mfn_x(directmap_mfn_start)); + + rc =3D map_pages_to_xen((vaddr_t)__mfn_to_virt(base_mfn), + _mfn(base_mfn), nr_mfns, + PAGE_HYPERVISOR_RW | _PAGE_BLOCK); + if ( rc ) + panic("Unable to setup the directmap mappings.\n"); +} +#endif + +/* Map a frame table to cover physical addresses ps through pe */ +void __init setup_frametable_mappings(paddr_t ps, paddr_t pe) +{ + unsigned long nr_pdxs =3D mfn_to_pdx(mfn_add(maddr_to_mfn(pe), -1)) - + mfn_to_pdx(maddr_to_mfn(ps)) + 1; + unsigned long frametable_size =3D nr_pdxs * sizeof(struct page_info); + mfn_t base_mfn; + const unsigned long mapping_size =3D frametable_size < MB(32) ? MB(2) = : MB(32); + int rc; + + /* + * The size of paddr_t should be sufficient for the complete range of + * physical address. + */ + BUILD_BUG_ON((sizeof(paddr_t) * BITS_PER_BYTE) < PADDR_BITS); + BUILD_BUG_ON(sizeof(struct page_info) !=3D PAGE_INFO_SIZE); + + if ( frametable_size > FRAMETABLE_SIZE ) + panic("The frametable cannot cover the physical region %#"PRIpaddr= " - %#"PRIpaddr"\n", + ps, pe); + + frametable_base_pdx =3D mfn_to_pdx(maddr_to_mfn(ps)); + /* Round up to 2M or 32M boundary, as appropriate. 
*/ + frametable_size =3D ROUNDUP(frametable_size, mapping_size); + base_mfn =3D alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-1= 2)); + + rc =3D map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn, + frametable_size >> PAGE_SHIFT, + PAGE_HYPERVISOR_RW | _PAGE_BLOCK); + if ( rc ) + panic("Unable to setup the frametable mappings.\n"); + + memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info)); + memset(&frame_table[nr_pdxs], -1, + frametable_size - (nr_pdxs * sizeof(struct page_info))); + + frametable_virt_end =3D FRAMETABLE_VIRT_START + (nr_pdxs * sizeof(stru= ct page_info)); +} + +void *__init arch_vmap_virt_end(void) +{ + return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE); +} + +/* + * This function should only be used to remap device address ranges + * TODO: add a check to verify this assumption + */ +void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes) +{ + mfn_t mfn =3D _mfn(PFN_DOWN(pa)); + unsigned int offs =3D pa & (PAGE_SIZE - 1); + unsigned int nr =3D PFN_UP(offs + len); + void *ptr =3D __vmap(&mfn, nr, 1, 1, attributes, VMAP_DEFAULT); + + if ( ptr =3D=3D NULL ) + return NULL; + + return ptr + offs; +} + +static int create_xen_table(lpae_t *entry) +{ + mfn_t mfn; + void *p; + lpae_t pte; + + if ( system_state !=3D SYS_STATE_early_boot ) + { + struct page_info *pg =3D alloc_domheap_page(NULL, 0); + + if ( pg =3D=3D NULL ) + return -ENOMEM; + + mfn =3D page_to_mfn(pg); + } + else + mfn =3D alloc_boot_pages(1, 1); + + p =3D xen_map_table(mfn); + clear_page(p); + xen_unmap_table(p); + + pte =3D mfn_to_xen_entry(mfn, MT_NORMAL); + pte.pt.table =3D 1; + write_pte(entry, pte); + + return 0; +} + +#define XEN_TABLE_MAP_FAILED 0 +#define XEN_TABLE_SUPER_PAGE 1 +#define XEN_TABLE_NORMAL_PAGE 2 + +/* + * Take the currently mapped table, find the corresponding entry, + * and map the next table, if available. + * + * The read_only parameters indicates whether intermediate tables should + * be allocated when not present. 
+ *
+ * Return values:
+ *  XEN_TABLE_MAP_FAILED: Either read_only was set and the entry
+ *  was empty, or allocating a new page failed.
+ *  XEN_TABLE_NORMAL_PAGE: next level mapped normally
+ *  XEN_TABLE_SUPER_PAGE: The next entry points to a superpage.
+ */
+static int xen_pt_next_level(bool read_only, unsigned int level,
+                             lpae_t **table, unsigned int offset)
+{
+    lpae_t *entry;
+    int ret;
+    mfn_t mfn;
+
+    entry = *table + offset;
+
+    if ( !lpae_is_valid(*entry) )
+    {
+        if ( read_only )
+            return XEN_TABLE_MAP_FAILED;
+
+        ret = create_xen_table(entry);
+        if ( ret )
+            return XEN_TABLE_MAP_FAILED;
+    }
+
+    /* The function xen_pt_next_level is never called at the 3rd level */
+    if ( lpae_is_mapping(*entry, level) )
+        return XEN_TABLE_SUPER_PAGE;
+
+    mfn = lpae_get_mfn(*entry);
+
+    xen_unmap_table(*table);
+    *table = xen_map_table(mfn);
+
+    return XEN_TABLE_NORMAL_PAGE;
+}
+
+/* Sanity check of the entry */
+static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
+                               unsigned int flags)
+{
+    /* Sanity check when modifying an entry. */
+    if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
+    {
+        /* We don't allow modifying an invalid entry. */
+        if ( !lpae_is_valid(entry) )
+        {
+            mm_printk("Modifying invalid entry is not allowed.\n");
+            return false;
+        }
+
+        /* We don't allow modifying a table entry */
+        if ( !lpae_is_mapping(entry, level) )
+        {
+            mm_printk("Modifying a table entry is not allowed.\n");
+            return false;
+        }
+
+        /* We don't allow changing memory attributes. */
+        if ( entry.pt.ai != PAGE_AI_MASK(flags) )
+        {
+            mm_printk("Modifying memory attributes is not allowed (0x%x -> 0x%x).\n",
+                      entry.pt.ai, PAGE_AI_MASK(flags));
+            return false;
+        }
+
+        /* We don't allow modifying entry with contiguous bit set. */
+        if ( entry.pt.contig )
+        {
+            mm_printk("Modifying entry with contiguous bit set is not allowed.\n");
+            return false;
+        }
+    }
+    /* Sanity check when inserting a mapping */
+    else if ( flags & _PAGE_PRESENT )
+    {
+        /* We should be here with a valid MFN. */
+        ASSERT(!mfn_eq(mfn, INVALID_MFN));
+
+        /*
+         * We don't allow replacing any valid entry.
+         *
+         * Note that the function xen_pt_update() relies on this
+         * assumption and will skip the TLB flush. The function will need
+         * to be updated if the check is relaxed.
+         */
+        if ( lpae_is_valid(entry) )
+        {
+            if ( lpae_is_mapping(entry, level) )
+                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
+                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
+            else
+                mm_printk("Trying to replace a table with a mapping.\n");
+            return false;
+        }
+    }
+    /* Sanity check when removing a mapping. */
+    else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
+    {
+        /* We should be here with an invalid MFN. */
+        ASSERT(mfn_eq(mfn, INVALID_MFN));
+
+        /* We don't allow removing a table */
+        if ( lpae_is_table(entry, level) )
+        {
+            mm_printk("Removing a table is not allowed.\n");
+            return false;
+        }
+
+        /* We don't allow removing a mapping with contiguous bit set. */
+        if ( entry.pt.contig )
+        {
+            mm_printk("Removing entry with contiguous bit set is not allowed.\n");
+            return false;
+        }
+    }
+    /* Sanity check when populating the page-table. No check so far. */
+    else
+    {
+        ASSERT(flags & _PAGE_POPULATE);
+        /* We should be here with an invalid MFN */
+        ASSERT(mfn_eq(mfn, INVALID_MFN));
+    }
+
+    return true;
+}
+
+/* Update an entry at the level @target. */
+static int xen_pt_update_entry(mfn_t root, unsigned long virt,
+                               mfn_t mfn, unsigned int target,
+                               unsigned int flags)
+{
+    int rc;
+    unsigned int level;
+    lpae_t *table;
+    /*
+     * The intermediate page tables are read-only when the MFN is not valid
+     * and we are not populating page table.
+     * This means we either modify permissions or remove an entry.
+     */
+    bool read_only = mfn_eq(mfn, INVALID_MFN) && !(flags & _PAGE_POPULATE);
+    lpae_t pte, *entry;
+
+    /* convenience aliases */
+    DECLARE_OFFSETS(offsets, (paddr_t)virt);
+
+    /* _PAGE_POPULATE and _PAGE_PRESENT should never be set together. */
+    ASSERT((flags & (_PAGE_POPULATE|_PAGE_PRESENT)) != (_PAGE_POPULATE|_PAGE_PRESENT));
+
+    table = xen_map_table(root);
+    for ( level = HYP_PT_ROOT_LEVEL; level < target; level++ )
+    {
+        rc = xen_pt_next_level(read_only, level, &table, offsets[level]);
+        if ( rc == XEN_TABLE_MAP_FAILED )
+        {
+            /*
+             * We are here because xen_pt_next_level has failed to map
+             * the intermediate page table (e.g the table does not exist
+             * and the pt is read-only). It is a valid case when
+             * removing a mapping as it may not exist in the page table.
+             * In this case, just ignore it.
+             */
+            if ( flags & (_PAGE_PRESENT|_PAGE_POPULATE) )
+            {
+                mm_printk("%s: Unable to map level %u\n", __func__, level);
+                rc = -ENOENT;
+                goto out;
+            }
+            else
+            {
+                rc = 0;
+                goto out;
+            }
+        }
+        else if ( rc != XEN_TABLE_NORMAL_PAGE )
+            break;
+    }
+
+    if ( level != target )
+    {
+        mm_printk("%s: Shattering superpage is not supported\n", __func__);
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    entry = table + offsets[level];
+
+    rc = -EINVAL;
+    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
+        goto out;
+
+    /* If we are only populating page-table, then we are done. */
+    rc = 0;
+    if ( flags & _PAGE_POPULATE )
+        goto out;
+
+    /* We are removing the page */
+    if ( !(flags & _PAGE_PRESENT) )
+        memset(&pte, 0x00, sizeof(pte));
+    else
+    {
+        /* We are inserting a mapping => Create new pte. */
+        if ( !mfn_eq(mfn, INVALID_MFN) )
+        {
+            pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
+
+            /*
+             * First and second level pages set pte.pt.table = 0, but
+             * third level entries set pte.pt.table = 1.
+             */
+            pte.pt.table = (level == 3);
+        }
+        else /* We are updating the permission => Copy the current pte. */
+            pte = *entry;
+
+        /* Set permission */
+        pte.pt.ro = PAGE_RO_MASK(flags);
+        pte.pt.xn = PAGE_XN_MASK(flags);
+        /* Set contiguous bit */
+        pte.pt.contig = !!(flags & _PAGE_CONTIG);
+    }
+
+    write_pte(entry, pte);
+
+    rc = 0;
+
+out:
+    xen_unmap_table(table);
+
+    return rc;
+}
+
+/* Return the level where mapping should be done */
+static int xen_pt_mapping_level(unsigned long vfn, mfn_t mfn, unsigned long nr,
+                                unsigned int flags)
+{
+    unsigned int level;
+    unsigned long mask;
+
+    /*
+     * Don't take into account the MFN when removing mapping (i.e
+     * MFN_INVALID) to calculate the correct target order.
+     *
+     * Per the Arm Arm, `vfn` and `mfn` must be both superpage aligned.
+     * They are or-ed together and then checked against the size of
+     * each level.
+     *
+     * `left` is not included and checked separately to allow
+     * superpage mapping even if it is not properly aligned (the
+     * user may have asked to map 2MB + 4k).
+     */
+    mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
+    mask |= vfn;
+
+    /*
+     * Always use level 3 mapping unless the caller request block
+     * mapping.
+     */
+    if ( likely(!(flags & _PAGE_BLOCK)) )
+        level = 3;
+    else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) &&
+              (nr >= BIT(FIRST_ORDER, UL)) )
+        level = 1;
+    else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) &&
+              (nr >= BIT(SECOND_ORDER, UL)) )
+        level = 2;
+    else
+        level = 3;
+
+    return level;
+}
+
+#define XEN_PT_4K_NR_CONTIG 16
+
+/*
+ * Check whether the contiguous bit can be set. Return the number of
+ * contiguous entry allowed. If not allowed, return 1.
+ */
+static unsigned int xen_pt_check_contig(unsigned long vfn, mfn_t mfn,
+                                        unsigned int level, unsigned long left,
+                                        unsigned int flags)
+{
+    unsigned long nr_contig;
+
+    /*
+     * Allow the contiguous bit to set when the caller requests block
+     * mapping.
+     */
+    if ( !(flags & _PAGE_BLOCK) )
+        return 1;
+
+    /*
+     * We don't allow to remove mapping with the contiguous bit set.
+     * So shortcut the logic and directly return 1.
+     */
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        return 1;
+
+    /*
+     * The number of contiguous entries varies depending on the page
+     * granularity used. The logic below assumes 4KB.
+     */
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    /*
+     * In order to enable the contiguous bit, we should have enough entries
+     * to map left and both the virtual and physical address should be
+     * aligned to the size of 16 translation tables entries.
+     */
+    nr_contig = BIT(XEN_PT_LEVEL_ORDER(level), UL) * XEN_PT_4K_NR_CONTIG;
+
+    if ( (left < nr_contig) || ((mfn_x(mfn) | vfn) & (nr_contig - 1)) )
+        return 1;
+
+    return XEN_PT_4K_NR_CONTIG;
+}
+
+static DEFINE_SPINLOCK(xen_pt_lock);
+
+int xen_pt_update(unsigned long virt, mfn_t mfn,
+                  /* const on purpose as it is used for TLB flush */
+                  const unsigned long nr_mfns,
+                  unsigned int flags)
+{
+    int rc = 0;
+    unsigned long vfn = virt >> PAGE_SHIFT;
+    unsigned long left = nr_mfns;
+
+    /*
+     * For arm32, page-tables are different on each CPUs. Yet, they share
+     * some common mappings. It is assumed that only common mappings
+     * will be modified with this function.
+     *
+     * XXX: Add a check.
+     */
+    const mfn_t root = maddr_to_mfn(READ_SYSREG64(TTBR0_EL2));
+
+    /*
+     * The hardware was configured to forbid mapping both writeable and
+     * executable.
+     * When modifying/creating mapping (i.e _PAGE_PRESENT is set),
+     * prevent any update if this happen.
+     */
+    if ( (flags & _PAGE_PRESENT) && !PAGE_RO_MASK(flags) &&
+         !PAGE_XN_MASK(flags) )
+    {
+        mm_printk("Mappings should not be both Writeable and Executable.\n");
+        return -EINVAL;
+    }
+
+    if ( flags & _PAGE_CONTIG )
+    {
+        mm_printk("_PAGE_CONTIG is an internal only flag.\n");
+        return -EINVAL;
+    }
+
+    if ( !IS_ALIGNED(virt, PAGE_SIZE) )
+    {
+        mm_printk("The virtual address is not aligned to the page-size.\n");
+        return -EINVAL;
+    }
+
+    spin_lock(&xen_pt_lock);
+
+    while ( left )
+    {
+        unsigned int order, level, nr_contig, new_flags;
+
+        level = xen_pt_mapping_level(vfn, mfn, left, flags);
+        order = XEN_PT_LEVEL_ORDER(level);
+
+        ASSERT(left >= BIT(order, UL));
+
+        /*
+         * Check if we can set the contiguous mapping and update the
+         * flags accordingly.
+         */
+        nr_contig = xen_pt_check_contig(vfn, mfn, level, left, flags);
+        new_flags = flags | ((nr_contig > 1) ? _PAGE_CONTIG : 0);
+
+        for ( ; nr_contig > 0; nr_contig-- )
+        {
+            rc = xen_pt_update_entry(root, vfn << PAGE_SHIFT, mfn, level,
+                                     new_flags);
+            if ( rc )
+                break;
+
+            vfn += 1U << order;
+            if ( !mfn_eq(mfn, INVALID_MFN) )
+                mfn = mfn_add(mfn, 1U << order);
+
+            left -= (1U << order);
+        }
+
+        if ( rc )
+            break;
+    }
+
+    /*
+     * The TLBs flush can be safely skipped when a mapping is inserted
+     * as we don't allow mapping replacement (see xen_pt_check_entry()).
+     *
+     * For all the other cases, the TLBs will be flushed unconditionally
+     * even if the mapping has failed. This is because we may have
+     * partially modified the PT. This will prevent any unexpected
+     * behavior afterwards.
+     */
+    if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) )
+        flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
+
+    spin_unlock(&xen_pt_lock);
+
+    return rc;
+}
+
+int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
+{
+    return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index e107b86b7b..8bcdbea66c 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -359,7 +359,7 @@ void start_secondary(void)
      */
     update_system_features(&current_cpu_data);
 
-    mmu_init_secondary_cpu();
+    mm_init_secondary_cpu();
 
     gic_init_secondary_cpu();
 
@@ -448,7 +448,7 @@ int __cpu_up(unsigned int cpu)
 
     printk("Bringing up CPU%d\n", cpu);
 
-    rc = init_secondary_pagetables(cpu);
+    rc = init_secondary_mm(cpu);
     if ( rc < 0 )
         return rc;
 
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 14/52] xen/mmu: move MMU-specific setup_mm to mmu/setup.c
Date: Mon, 26 Jun 2023 11:34:05 +0800
Message-Id: <20230626033443.2943270-15-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

setup_mm is used by Xen to set up the memory management subsystem at boot
time: the boot allocator, direct-mapping, xenheap initialization, frametable
and static memory pages.

Some of these components, such as the boot allocator, can be inherited
seamlessly by a later MPU system, whilst others, like the xenheap, will need
a different implementation on MPU. A few components, like direct-mapping,
are specific to the MMU only.

This commit moves the MMU-specific components into mmu/setup.c, in
preparation for implementing an MPU version of setup_mm in a future commit.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- adapt to the introduction of new directories: mmu/
---
 xen/arch/arm/Makefile            |   1 +
 xen/arch/arm/include/asm/setup.h |   5 +
 xen/arch/arm/mmu/setup.c         | 352 +++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c             | 326 +---------------------------
 4 files changed, 362 insertions(+), 322 deletions(-)
 create mode 100644 xen/arch/arm/mmu/setup.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index f825d95e29..c1babdba6a 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mem_access.o
 ifeq ($(CONFIG_HAS_MMU), y)
 obj-y += mmu/mm.o
+obj-y += mmu/setup.o
 endif
 obj-y += mm.o
 obj-y += monitor.o
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index f0f64d228c..0922549631 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -156,6 +156,11 @@ struct bootcmdline *boot_cmdline_find_by_kind(bootmodule_kind kind);
 struct bootcmdline * boot_cmdline_find_by_name(const char *name);
 const char *boot_module_kind_as_string(bootmodule_kind kind);
 
+extern void init_pdx(void);
+extern void init_staticmem_pages(void);
+extern void populate_boot_allocator(void);
+extern void setup_mm(void);
+
 extern uint32_t hyp_traps_vector[];
 void init_traps(void);
 
diff --git a/xen/arch/arm/mmu/setup.c b/xen/arch/arm/mmu/setup.c
new file mode 100644
index 0000000000..f4de0cb29d
--- /dev/null
+++ b/xen/arch/arm/mmu/setup.c
@@ -0,0 +1,352 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/arch/arm/mmu/setup.c
+ *
+ * Early bringup code for an ARMv7-A with virt extensions.
+ *
+ * Tim Deegan
+ * Copyright (c) 2011 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#ifdef CONFIG_ARM_32
+static unsigned long opt_xenheap_megabytes __initdata;
+integer_param("xenheap_megabytes", opt_xenheap_megabytes);
+
+/*
+ * Returns the end address of the highest region in the range s..e
+ * with required size and alignment that does not conflict with the
+ * modules from first_mod to nr_modules.
+ *
+ * For non-recursive callers first_mod should normally be 0 (all
+ * modules and Xen itself) or 1 (all modules but not Xen).
+ */
+static paddr_t __init consider_modules(paddr_t s, paddr_t e,
+                                       uint32_t size, paddr_t align,
+                                       int first_mod)
+{
+    const struct bootmodules *mi = &bootinfo.modules;
+    int i;
+    int nr;
+
+    s = (s+align-1) & ~(align-1);
+    e = e & ~(align-1);
+
+    if ( s > e || e - s < size )
+        return 0;
+
+    /* First check the boot modules */
+    for ( i = first_mod; i < mi->nr_mods; i++ )
+    {
+        paddr_t mod_s = mi->module[i].start;
+        paddr_t mod_e = mod_s + mi->module[i].size;
+
+        if ( s < mod_e && mod_s < e )
+        {
+            mod_e = consider_modules(mod_e, e, size, align, i+1);
+            if ( mod_e )
+                return mod_e;
+
+            return consider_modules(s, mod_s, size, align, i+1);
+        }
+    }
+
+    /* Now check any fdt reserved areas. */
+
+    nr = fdt_num_mem_rsv(device_tree_flattened);
+
+    for ( ; i < mi->nr_mods + nr; i++ )
+    {
+        paddr_t mod_s, mod_e;
+
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
+                                   i - mi->nr_mods,
+                                   &mod_s, &mod_e ) < 0 )
+            /* If we can't read it, pretend it doesn't exist... */
+            continue;
+
+        /* fdt_get_mem_rsv_paddr returns length */
+        mod_e += mod_s;
+
+        if ( s < mod_e && mod_s < e )
+        {
+            mod_e = consider_modules(mod_e, e, size, align, i+1);
+            if ( mod_e )
+                return mod_e;
+
+            return consider_modules(s, mod_s, size, align, i+1);
+        }
+    }
+
+    /*
+     * i is the current bootmodule we are evaluating, across all
+     * possible kinds of bootmodules.
+     *
+     * When retrieving the corresponding reserved-memory addresses, we
+     * need to index the bootinfo.reserved_mem bank starting from 0, and
+     * only counting the reserved-memory modules. Hence, we need to use
+     * i - nr.
+     */
+    nr += mi->nr_mods;
+    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
+    {
+        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
+        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
+
+        if ( s < r_e && r_s < e )
+        {
+            r_e = consider_modules(r_e, e, size, align, i + 1);
+            if ( r_e )
+                return r_e;
+
+            return consider_modules(s, r_s, size, align, i + 1);
+        }
+    }
+    return e;
+}
+
+/*
+ * Find a contiguous region that fits in the static heap region with
+ * required size and alignment, and return the end address of the region
+ * if found otherwise 0.
+ */
+static paddr_t __init fit_xenheap_in_static_heap(uint32_t size, paddr_t align)
+{
+    unsigned int i;
+    paddr_t end = 0, aligned_start, aligned_end;
+    paddr_t bank_start, bank_size, bank_end;
+
+    for ( i = 0 ; i < bootinfo.reserved_mem.nr_banks; i++ )
+    {
+        if ( bootinfo.reserved_mem.bank[i].type != MEMBANK_STATIC_HEAP )
+            continue;
+
+        bank_start = bootinfo.reserved_mem.bank[i].start;
+        bank_size = bootinfo.reserved_mem.bank[i].size;
+        bank_end = bank_start + bank_size;
+
+        if ( bank_size < size )
+            continue;
+
+        aligned_end = bank_end & ~(align - 1);
+        aligned_start = (aligned_end - size) & ~(align - 1);
+
+        if ( aligned_start > bank_start )
+            /*
+             * Allocate the xenheap as high as possible to keep low-memory
+             * available (assuming the admin supplied region below 4GB)
+             * for other use (e.g. domain memory allocation).
+             */
+            end = max(end, aligned_end);
+    }
+
+    return end;
+}
+
+void __init setup_mm(void)
+{
+    paddr_t ram_start, ram_end, ram_size, e, bank_start, bank_end, bank_size;
+    paddr_t static_heap_end = 0, static_heap_size = 0;
+    unsigned long heap_pages, xenheap_pages, domheap_pages;
+    unsigned int i;
+    const uint32_t ctr = READ_CP32(CTR);
+
+    if ( !bootinfo.mem.nr_banks )
+        panic("No memory bank\n");
+
+    /* We only supports instruction caches implementing the IVIPT extension. */
+    if ( ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) == ICACHE_POLICY_AIVIVT )
+        panic("AIVIVT instruction cache not supported\n");
+
+    init_pdx();
+
+    ram_start = bootinfo.mem.bank[0].start;
+    ram_size  = bootinfo.mem.bank[0].size;
+    ram_end   = ram_start + ram_size;
+
+    for ( i = 1; i < bootinfo.mem.nr_banks; i++ )
+    {
+        bank_start = bootinfo.mem.bank[i].start;
+        bank_size = bootinfo.mem.bank[i].size;
+        bank_end = bank_start + bank_size;
+
+        ram_size = ram_size + bank_size;
+        ram_start = min(ram_start,bank_start);
+        ram_end = max(ram_end,bank_end);
+    }
+
+    total_pages = ram_size >> PAGE_SHIFT;
+
+    if ( bootinfo.static_heap )
+    {
+        for ( i = 0 ; i < bootinfo.reserved_mem.nr_banks; i++ )
+        {
+            if ( bootinfo.reserved_mem.bank[i].type != MEMBANK_STATIC_HEAP )
+                continue;
+
+            bank_start = bootinfo.reserved_mem.bank[i].start;
+            bank_size = bootinfo.reserved_mem.bank[i].size;
+            bank_end = bank_start + bank_size;
+
+            static_heap_size += bank_size;
+            static_heap_end = max(static_heap_end, bank_end);
+        }
+
+        heap_pages = static_heap_size >> PAGE_SHIFT;
+    }
+    else
+        heap_pages = total_pages;
+
+    /*
+     * If the user has not requested otherwise via the command line
+     * then locate the xenheap using these constraints:
+     *
+     *  - must be contiguous
+     *  - must be 32 MiB aligned
+     *  - must not include Xen itself or the boot modules
+     *  - must be at most 1GB or 1/32 the total RAM in the system (or static
+     *    heap if enabled) if less
+     *  - must be at least 32M
+     *
+     * We try to allocate the largest xenheap possible within these
+     * constraints.
+     */
+    if ( opt_xenheap_megabytes )
+        xenheap_pages = opt_xenheap_megabytes << (20-PAGE_SHIFT);
+    else
+    {
+        xenheap_pages = (heap_pages/32 + 0x1fffUL) & ~0x1fffUL;
+        xenheap_pages = max(xenheap_pages, 32UL<<(20-PAGE_SHIFT));
+        xenheap_pages = min(xenheap_pages, 1UL<<(30-PAGE_SHIFT));
+    }
+
+    do
+    {
+        e = bootinfo.static_heap ?
+            fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)) :
+            consider_modules(ram_start, ram_end,
+                             pfn_to_paddr(xenheap_pages),
+                             32<<20, 0);
+        if ( e )
+            break;
+
+        xenheap_pages >>= 1;
+    } while ( !opt_xenheap_megabytes && xenheap_pages > 32<<(20-PAGE_SHIFT) );
+
+    if ( ! e )
+        panic("Not enough space for xenheap\n");
+
+    domheap_pages = heap_pages - xenheap_pages;
+
+    printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages%s)\n",
+           e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages,
+           opt_xenheap_megabytes ? ", from command-line" : "");
+    printk("Dom heap: %lu pages\n", domheap_pages);
+
+    /*
+     * We need some memory to allocate the page-tables used for the
+     * directmap mappings. So populate the boot allocator first.
+     *
+     * This requires us to set directmap_mfn_{start, end} first so the
+     * direct-mapped Xenheap region can be avoided.
+     */
+    directmap_mfn_start = _mfn((e >> PAGE_SHIFT) - xenheap_pages);
+    directmap_mfn_end = mfn_add(directmap_mfn_start, xenheap_pages);
+
+    populate_boot_allocator();
+
+    setup_directmap_mappings(mfn_x(directmap_mfn_start), xenheap_pages);
+
+    /* Frame table covers all of RAM region, including holes */
+    setup_frametable_mappings(ram_start, ram_end);
+    max_page = PFN_DOWN(ram_end);
+
+    /*
+     * The allocators may need to use map_domain_page() (such as for
+     * scrubbing pages). So we need to prepare the domheap area first.
+     */
+    if ( !init_domheap_mappings(smp_processor_id()) )
+        panic("CPU%u: Unable to prepare the domheap page-tables\n",
+              smp_processor_id());
+
+    /* Add xenheap memory that was not already added to the boot allocator. */
+    init_xenheap_pages(mfn_to_maddr(directmap_mfn_start),
+                       mfn_to_maddr(directmap_mfn_end));
+
+    init_staticmem_pages();
+}
+#else /* CONFIG_ARM_64 */
+void __init setup_mm(void)
+{
+    const struct meminfo *banks = &bootinfo.mem;
+    paddr_t ram_start = INVALID_PADDR;
+    paddr_t ram_end = 0;
+    paddr_t ram_size = 0;
+    unsigned int i;
+
+    init_pdx();
+
+    /*
+     * We need some memory to allocate the page-tables used for the directmap
+     * mappings. But some regions may contain memory already allocated
+     * for other uses (e.g. modules, reserved-memory...).
+     *
+     * For simplicity, add all the free regions in the boot allocator.
+     */
+    populate_boot_allocator();
+
+    total_pages = 0;
+
+    for ( i = 0; i < banks->nr_banks; i++ )
+    {
+        const struct membank *bank = &banks->bank[i];
+        paddr_t bank_end = bank->start + bank->size;
+
+        ram_size = ram_size + bank->size;
+        ram_start = min(ram_start, bank->start);
+        ram_end = max(ram_end, bank_end);
+
+        setup_directmap_mappings(PFN_DOWN(bank->start),
+                                 PFN_DOWN(bank->size));
+    }
+
+    total_pages += ram_size >> PAGE_SHIFT;
+
+    directmap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
+    directmap_mfn_start = maddr_to_mfn(ram_start);
+    directmap_mfn_end = maddr_to_mfn(ram_end);
+
+    setup_frametable_mappings(ram_start, ram_end);
+    max_page = PFN_DOWN(ram_end);
+
+    init_staticmem_pages();
+}
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index bbf72b69aa..50259552a0 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -2,7 +2,7 @@
 /*
  * xen/arch/arm/setup.c
  *
- * Early bringup code for an ARMv7-A with virt extensions.
+ * Early bringup code for an ARMv7-A/ARM64v8R with virt extensions.
  *
  * Tim Deegan
  * Copyright (c) 2011 Citrix Systems.
@@ -57,11 +57,6 @@ struct cpuinfo_arm __read_mostly system_cpuinfo;
 bool __read_mostly acpi_disabled;
 #endif
 
-#ifdef CONFIG_ARM_32
-static unsigned long opt_xenheap_megabytes __initdata;
-integer_param("xenheap_megabytes", opt_xenheap_megabytes);
-#endif
-
 domid_t __read_mostly max_init_domid;
 
 static __used void init_done(void)
@@ -546,138 +541,6 @@ static void * __init relocate_fdt(paddr_t dtb_paddr, size_t dtb_size)
     return fdt;
 }
 
-#ifdef CONFIG_ARM_32
-/*
- * Returns the end address of the highest region in the range s..e
- * with required size and alignment that does not conflict with the
- * modules from first_mod to nr_modules.
- *
- * For non-recursive callers first_mod should normally be 0 (all
- * modules and Xen itself) or 1 (all modules but not Xen).
- */
-static paddr_t __init consider_modules(paddr_t s, paddr_t e,
-                                       uint32_t size, paddr_t align,
-                                       int first_mod)
-{
-    const struct bootmodules *mi = &bootinfo.modules;
-    int i;
-    int nr;
-
-    s = (s+align-1) & ~(align-1);
-    e = e & ~(align-1);
-
-    if ( s > e || e - s < size )
-        return 0;
-
-    /* First check the boot modules */
-    for ( i = first_mod; i < mi->nr_mods; i++ )
-    {
-        paddr_t mod_s = mi->module[i].start;
-        paddr_t mod_e = mod_s + mi->module[i].size;
-
-        if ( s < mod_e && mod_s < e )
-        {
-            mod_e = consider_modules(mod_e, e, size, align, i+1);
-            if ( mod_e )
-                return mod_e;
-
-            return consider_modules(s, mod_s, size, align, i+1);
-        }
-    }
-
-    /* Now check any fdt reserved areas. */
-
-    nr = fdt_num_mem_rsv(device_tree_flattened);
-
-    for ( ; i < mi->nr_mods + nr; i++ )
-    {
-        paddr_t mod_s, mod_e;
-
-        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
-                                   i - mi->nr_mods,
-                                   &mod_s, &mod_e ) < 0 )
-            /* If we can't read it, pretend it doesn't exist... */
-            continue;
-
-        /* fdt_get_mem_rsv_paddr returns length */
-        mod_e += mod_s;
-
-        if ( s < mod_e && mod_s < e )
-        {
-            mod_e = consider_modules(mod_e, e, size, align, i+1);
-            if ( mod_e )
-                return mod_e;
-
-            return consider_modules(s, mod_s, size, align, i+1);
-        }
-    }
-
-    /*
-     * i is the current bootmodule we are evaluating, across all
-     * possible kinds of bootmodules.
-     *
-     * When retrieving the corresponding reserved-memory addresses, we
-     * need to index the bootinfo.reserved_mem bank starting from 0, and
-     * only counting the reserved-memory modules. Hence, we need to use
-     * i - nr.
-     */
-    nr += mi->nr_mods;
-    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
-    {
-        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
-        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
-
-        if ( s < r_e && r_s < e )
-        {
-            r_e = consider_modules(r_e, e, size, align, i + 1);
-            if ( r_e )
-                return r_e;
-
-            return consider_modules(s, r_s, size, align, i + 1);
-        }
-    }
-    return e;
-}
-
-/*
- * Find a contiguous region that fits in the static heap region with
- * required size and alignment, and return the end address of the region
- * if found otherwise 0.
- */
-static paddr_t __init fit_xenheap_in_static_heap(uint32_t size, paddr_t align)
-{
-    unsigned int i;
-    paddr_t end = 0, aligned_start, aligned_end;
-    paddr_t bank_start, bank_size, bank_end;
-
-    for ( i = 0 ; i < bootinfo.reserved_mem.nr_banks; i++ )
-    {
-        if ( bootinfo.reserved_mem.bank[i].type != MEMBANK_STATIC_HEAP )
-            continue;
-
-        bank_start = bootinfo.reserved_mem.bank[i].start;
-        bank_size = bootinfo.reserved_mem.bank[i].size;
-        bank_end = bank_start + bank_size;
-
-        if ( bank_size < size )
-            continue;
-
-        aligned_end = bank_end & ~(align - 1);
-        aligned_start = (aligned_end - size) & ~(align - 1);
-
-        if ( aligned_start > bank_start )
-            /*
-             * Allocate the xenheap as high as possible to keep low-memory
-             * available (assuming the admin supplied region below 4GB)
-             * for other use (e.g. domain memory allocation).
-             */
-            end = max(end, aligned_end);
-    }
-
-    return end;
-}
-#endif
-
 /*
  * Return the end of the non-module region starting at s. In other
  * words return s the start of the next modules after s.
@@ -712,7 +575,7 @@ static paddr_t __init next_module(paddr_t s, paddr_t *end)
     return lowest;
 }
 
-static void __init init_pdx(void)
+void __init init_pdx(void)
 {
     paddr_t bank_start, bank_size, bank_end;
 
@@ -757,7 +620,7 @@ static void __init init_pdx(void)
 }
 
 /* Static memory initialization */
-static void __init init_staticmem_pages(void)
+void __init init_staticmem_pages(void)
 {
 #ifdef CONFIG_STATIC_MEMORY
     unsigned int bank;
@@ -791,7 +654,7 @@ static void __init init_staticmem_pages(void)
  * allocator with the corresponding regions only, but with Xenheap excluded
  * on arm32.
 */
-static void __init populate_boot_allocator(void)
+void __init populate_boot_allocator(void)
 {
     unsigned int i;
     const struct meminfo *banks = &bootinfo.mem;
@@ -860,187 +723,6 @@ static void __init populate_boot_allocator(void)
     }
 }
 
-#ifdef CONFIG_ARM_32
-static void __init setup_mm(void)
-{
-    paddr_t ram_start, ram_end, ram_size, e, bank_start, bank_end, bank_size;
-    paddr_t static_heap_end = 0, static_heap_size = 0;
-    unsigned long heap_pages, xenheap_pages, domheap_pages;
-    unsigned int i;
-    const uint32_t ctr = READ_CP32(CTR);
-
-    if ( !bootinfo.mem.nr_banks )
-        panic("No memory bank\n");
-
-    /* We only supports instruction caches implementing the IVIPT extension. */
-    if ( ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) == ICACHE_POLICY_AIVIVT )
-        panic("AIVIVT instruction cache not supported\n");
-
-    init_pdx();
-
-    ram_start = bootinfo.mem.bank[0].start;
-    ram_size  = bootinfo.mem.bank[0].size;
-    ram_end   = ram_start + ram_size;
-
-    for ( i = 1; i < bootinfo.mem.nr_banks; i++ )
-    {
-        bank_start = bootinfo.mem.bank[i].start;
-        bank_size = bootinfo.mem.bank[i].size;
-        bank_end = bank_start + bank_size;
-
-        ram_size = ram_size + bank_size;
-        ram_start = min(ram_start,bank_start);
-        ram_end = max(ram_end,bank_end);
-    }
-
-    total_pages = ram_size >> PAGE_SHIFT;
-
-    if ( bootinfo.static_heap )
-    {
-        for ( i = 0 ; i < bootinfo.reserved_mem.nr_banks; i++ )
-        {
-            if ( bootinfo.reserved_mem.bank[i].type != MEMBANK_STATIC_HEAP )
-                continue;
-
-            bank_start = bootinfo.reserved_mem.bank[i].start;
-            bank_size = bootinfo.reserved_mem.bank[i].size;
-            bank_end = bank_start + bank_size;
-
-            static_heap_size += bank_size;
-            static_heap_end = max(static_heap_end, bank_end);
-        }
-
-        heap_pages = static_heap_size >> PAGE_SHIFT;
-    }
-    else
-        heap_pages = total_pages;
-
-    /*
-     * If the user has not requested otherwise via the command line
-     * then locate the xenheap using these constraints:
-     *
-     *  - must be contiguous
must be 32 MiB aligned - * - must not include Xen itself or the boot modules - * - must be at most 1GB or 1/32 the total RAM in the system (or stat= ic - heap if enabled) if less - * - must be at least 32M - * - * We try to allocate the largest xenheap possible within these - * constraints. - */ - if ( opt_xenheap_megabytes ) - xenheap_pages =3D opt_xenheap_megabytes << (20-PAGE_SHIFT); - else - { - xenheap_pages =3D (heap_pages/32 + 0x1fffUL) & ~0x1fffUL; - xenheap_pages =3D max(xenheap_pages, 32UL<<(20-PAGE_SHIFT)); - xenheap_pages =3D min(xenheap_pages, 1UL<<(30-PAGE_SHIFT)); - } - - do - { - e =3D bootinfo.static_heap ? - fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)= ) : - consider_modules(ram_start, ram_end, - pfn_to_paddr(xenheap_pages), - 32<<20, 0); - if ( e ) - break; - - xenheap_pages >>=3D 1; - } while ( !opt_xenheap_megabytes && xenheap_pages > 32<<(20-PAGE_SHIFT= ) ); - - if ( ! e ) - panic("Not enough space for xenheap\n"); - - domheap_pages =3D heap_pages - xenheap_pages; - - printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages%s)\n", - e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages, - opt_xenheap_megabytes ? ", from command-line" : ""); - printk("Dom heap: %lu pages\n", domheap_pages); - - /* - * We need some memory to allocate the page-tables used for the - * directmap mappings. So populate the boot allocator first. - * - * This requires us to set directmap_mfn_{start, end} first so the - * direct-mapped Xenheap region can be avoided. - */ - directmap_mfn_start =3D _mfn((e >> PAGE_SHIFT) - xenheap_pages); - directmap_mfn_end =3D mfn_add(directmap_mfn_start, xenheap_pages); - - populate_boot_allocator(); - - setup_directmap_mappings(mfn_x(directmap_mfn_start), xenheap_pages); - - /* Frame table covers all of RAM region, including holes */ - setup_frametable_mappings(ram_start, ram_end); - max_page =3D PFN_DOWN(ram_end); - - /* - * The allocators may need to use map_domain_page() (such as for - * scrubbing pages). 
So we need to prepare the domheap area first. - */ - if ( !init_domheap_mappings(smp_processor_id()) ) - panic("CPU%u: Unable to prepare the domheap page-tables\n", - smp_processor_id()); - - /* Add xenheap memory that was not already added to the boot allocator= . */ - init_xenheap_pages(mfn_to_maddr(directmap_mfn_start), - mfn_to_maddr(directmap_mfn_end)); - - init_staticmem_pages(); -} -#else /* CONFIG_ARM_64 */ -static void __init setup_mm(void) -{ - const struct meminfo *banks =3D &bootinfo.mem; - paddr_t ram_start =3D INVALID_PADDR; - paddr_t ram_end =3D 0; - paddr_t ram_size =3D 0; - unsigned int i; - - init_pdx(); - - /* - * We need some memory to allocate the page-tables used for the direct= map - * mappings. But some regions may contain memory already allocated - * for other uses (e.g. modules, reserved-memory...). - * - * For simplicity, add all the free regions in the boot allocator. - */ - populate_boot_allocator(); - - total_pages =3D 0; - - for ( i =3D 0; i < banks->nr_banks; i++ ) - { - const struct membank *bank =3D &banks->bank[i]; - paddr_t bank_end =3D bank->start + bank->size; - - ram_size =3D ram_size + bank->size; - ram_start =3D min(ram_start, bank->start); - ram_end =3D max(ram_end, bank_end); - - setup_directmap_mappings(PFN_DOWN(bank->start), - PFN_DOWN(bank->size)); - } - - total_pages +=3D ram_size >> PAGE_SHIFT; - - directmap_virt_end =3D XENHEAP_VIRT_START + ram_end - ram_start; - directmap_mfn_start =3D maddr_to_mfn(ram_start); - directmap_mfn_end =3D maddr_to_mfn(ram_end); - - setup_frametable_mappings(ram_start, ram_end); - max_page =3D PFN_DOWN(ram_end); - - init_staticmem_pages(); -} -#endif - static bool __init is_dom0less_mode(void) { struct bootmodules *mods =3D &bootinfo.modules; --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; 
envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750874799134.5020494401872; Sun, 25 Jun 2023 20:41:14 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555077.866802 (Exim 4.92) (envelope-from ) id 1qDd5Y-0008Q9-MF; Mon, 26 Jun 2023 03:40:28 +0000 Received: by outflank-mailman (output) from mailman id 555077.866802; Mon, 26 Jun 2023 03:40:28 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5X-0008Kr-CF; Mon, 26 Jun 2023 03:40:27 +0000 Received: by outflank-mailman (input) for mailman id 555077; Mon, 26 Jun 2023 03:40:22 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1A-0007ej-3s for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:35:56 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 8e148bdf-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:35:54 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 180BB1FB; Sun, 25 Jun 2023 20:36:38 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 20FCE3F64C; Sun, 25 Jun 2023 20:35:49 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen 
developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 8e148bdf-13d2-11ee-b237-6b7b168915f2
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné, Penny Zheng, Wei Chen
Subject: [PATCH v3 15/52] xen: make VMAP supported only on MMU systems
Date: Mon, 26 Jun 2023 11:34:06 +0800
Message-Id: <20230626033443.2943270-16-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-ZM-MESSAGEID: 1687750875828100003
Content-Type: text/plain; charset="utf-8"

VMAP is widely used by features such as ALTERNATIVE, CPUERRATA, grant tables
and LIVEPATCH to remap ranges of memory with new memory attributes. Since it
is highly dependent on virtual address translation, we choose to fold VMAP
into the MMU system.

This patch introduces a new Kconfig option, CONFIG_HAS_VMAP, and makes it
supported only on MMU systems on the Arm architecture. Features such as
ALTERNATIVE, CPUERRATA, LIVEPATCH and grant tables now depend on VMAP.
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v2: - new commit --- v3: - make LIVEPATCH/ALTERNATIVE/CPUERRATA/Grant Table/LIVEPATCH depend on HAS_= VMAP - function call should be wrapped in context, then we could remove inline s= tubs --- xen/arch/arm/Kconfig | 3 ++- xen/arch/arm/Makefile | 2 +- xen/arch/arm/setup.c | 7 +++++++ xen/arch/arm/smpboot.c | 2 ++ xen/arch/x86/Kconfig | 1 + xen/arch/x86/setup.c | 2 ++ xen/common/Kconfig | 5 +++++ xen/common/Makefile | 2 +- xen/common/vmap.c | 7 +++++++ xen/include/xen/vmap.h | 11 ++++------- 10 files changed, 32 insertions(+), 10 deletions(-) diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index 22b28b8ba2..a88500fb50 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -11,7 +11,7 @@ config ARM_64 =20 config ARM def_bool y - select HAS_ALTERNATIVE + select HAS_ALTERNATIVE if HAS_VMAP select HAS_DEVICE_TREE select HAS_PASSTHROUGH select HAS_PDX @@ -63,6 +63,7 @@ config HAS_MMU bool "Memory Management Unit support in a VMSA system" default y select HAS_PMAP + select HAS_VMAP help In a VMSA system, a Memory Management Unit (MMU) provides fine-grained = control of a memory system through a set of virtual to physical address mappings a= nd associated memory diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index c1babdba6a..d01528cac6 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -10,7 +10,7 @@ obj-$(CONFIG_HAS_VPCI) +=3D vpci.o =20 obj-$(CONFIG_HAS_ALTERNATIVE) +=3D alternative.o obj-y +=3D bootfdt.init.o -obj-y +=3D cpuerrata.o +obj-$(CONFIG_HAS_VMAP) +=3D cpuerrata.o obj-y +=3D cpufeature.o obj-y +=3D decode.o obj-y +=3D device.o diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index 50259552a0..34923d9984 100644 --- a/xen/arch/arm/setup.c +++ b/xen/arch/arm/setup.c @@ -812,7 +812,9 @@ void __init start_xen(unsigned long boot_phys_offset, */ system_state =3D SYS_STATE_boot; =20 +#ifdef CONFIG_HAS_VMAP vm_init(); +#endif =20 if ( acpi_disabled ) { @@ -844,11 
+846,13 @@ void __init start_xen(unsigned long boot_phys_offset, nr_cpu_ids =3D smp_get_max_cpus(); printk(XENLOG_INFO "SMP: Allowing %u CPUs\n", nr_cpu_ids); =20 +#ifdef CONFIG_HAS_VMAP /* * Some errata relies on SMCCC version which is detected by psci_init() * (called from smp_init_cpus()). */ check_local_cpu_errata(); +#endif =20 check_local_cpu_features(); =20 @@ -915,12 +919,15 @@ void __init start_xen(unsigned long boot_phys_offset, =20 do_initcalls(); =20 + +#ifdef CONFIG_HAS_VMAP /* * It needs to be called after do_initcalls to be able to use * stop_machine (tasklets initialized via an initcall). */ apply_alternatives_all(); enable_errata_workarounds(); +#endif enable_cpu_features(); =20 /* Create initial domain 0. */ diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c index 8bcdbea66c..0796e534ec 100644 --- a/xen/arch/arm/smpboot.c +++ b/xen/arch/arm/smpboot.c @@ -388,7 +388,9 @@ void start_secondary(void) =20 local_abort_enable(); =20 +#ifdef CONFIG_HAS_VMAP check_local_cpu_errata(); +#endif check_local_cpu_features(); =20 printk(XENLOG_DEBUG "CPU %u booted.\n", smp_processor_id()); diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index 406445a358..033cc2332e 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -27,6 +27,7 @@ config X86 select HAS_PDX select HAS_SCHED_GRANULARITY select HAS_UBSAN + select HAS_VMAP select HAS_VPCI if HVM select NEEDS_LIBELF =20 diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c index 74e3915a4d..9f06879225 100644 --- a/xen/arch/x86/setup.c +++ b/xen/arch/x86/setup.c @@ -1750,12 +1750,14 @@ void __init noreturn __start_xen(unsigned long mbi_= p) end_boot_allocator(); =20 system_state =3D SYS_STATE_boot; +#ifdef CONFIG_HAS_VMAP /* * No calls involving ACPI code should go between the setting of * SYS_STATE_boot and vm_init() (or else acpi_os_{,un}map_memory() * will break). 
*/ vm_init(); +#endif =20 bsp_stack =3D cpu_alloc_stack(0); if ( !bsp_stack ) diff --git a/xen/common/Kconfig b/xen/common/Kconfig index 3d2123a783..2c29e89b75 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -15,6 +15,7 @@ config CORE_PARKING config GRANT_TABLE bool "Grant table support" if EXPERT default y + depends on HAS_VMAP ---help--- Grant table provides a generic mechanism to memory sharing between domains. This shared memory interface underpins the @@ -65,6 +66,9 @@ config HAS_SCHED_GRANULARITY config HAS_UBSAN bool =20 +config HAS_VMAP + bool + config MEM_ACCESS_ALWAYS_ON bool =20 @@ -367,6 +371,7 @@ config LIVEPATCH bool "Live patching support" default X86 depends on "$(XEN_HAS_BUILD_ID)" =3D "y" + depends on HAS_VMAP select CC_SPLIT_SECTIONS ---help--- Allows a running Xen hypervisor to be dynamically patched using diff --git a/xen/common/Makefile b/xen/common/Makefile index 46049eac35..4803282d62 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -51,7 +51,7 @@ obj-$(CONFIG_TRACEBUFFER) +=3D trace.o obj-y +=3D version.o obj-y +=3D virtual_region.o obj-y +=3D vm_event.o -obj-y +=3D vmap.o +obj-$(CONFIG_HAS_VMAP) +=3D vmap.o obj-y +=3D vsprintf.o obj-y +=3D wait.o obj-bin-y +=3D warning.init.o diff --git a/xen/common/vmap.c b/xen/common/vmap.c index 4fd6b3067e..51e13e17ed 100644 --- a/xen/common/vmap.c +++ b/xen/common/vmap.c @@ -331,4 +331,11 @@ void vfree(void *va) while ( (pg =3D page_list_remove_head(&pg_list)) !=3D NULL ) free_domheap_page(pg); } + +void iounmap(void __iomem *va) +{ + unsigned long addr =3D (unsigned long)(void __force *)va; + + vunmap((void *)(addr & PAGE_MASK)); +} #endif diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h index b0f7632e89..d7ef4df452 100644 --- a/xen/include/xen/vmap.h +++ b/xen/include/xen/vmap.h @@ -1,4 +1,4 @@ -#if !defined(__XEN_VMAP_H__) && defined(VMAP_VIRT_START) +#if !defined(__XEN_VMAP_H__) && (defined(VMAP_VIRT_START) || !defined(CONF= IG_HAS_VMAP)) #define __XEN_VMAP_H__ 
=20 #include @@ -25,17 +25,14 @@ void vfree(void *va); =20 void __iomem *ioremap(paddr_t, size_t); =20 -static inline void iounmap(void __iomem *va) -{ - unsigned long addr =3D (unsigned long)(void __force *)va; - - vunmap((void *)(addr & PAGE_MASK)); -} +void iounmap(void __iomem *va); =20 void *arch_vmap_virt_end(void); static inline void vm_init(void) { +#if defined(VMAP_VIRT_START) vm_init_type(VMAP_DEFAULT, (void *)VMAP_VIRT_START, arch_vmap_virt_end= ()); +#endif } =20 #endif /* __XEN_VMAP_H__ */ --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750873017665.6640895584989; Sun, 25 Jun 2023 20:41:13 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555075.866785 (Exim 4.92) (envelope-from ) id 1qDd5V-0007sT-QJ; Mon, 26 Jun 2023 03:40:25 +0000 Received: by outflank-mailman (output) from mailman id 555075.866785; Mon, 26 Jun 2023 03:40:25 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5U-0007oh-V6; Mon, 26 Jun 2023 03:40:24 +0000 Received: by outflank-mailman (input) for mailman id 555075; Mon, 26 Jun 2023 03:40:22 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1D-0000HH-Em for 
xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:35:59 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 9006f57d-13d2-11ee-8611-37d641c3527e; Mon, 26 Jun 2023 05:35:57 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4CA722F4; Sun, 25 Jun 2023 20:36:41 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8C04B3F64C; Sun, 25 Jun 2023 20:35:54 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 9006f57d-13d2-11ee-8611-37d641c3527e
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 16/52] xen/mmu: relocate copy_from_paddr into setup.c
Date: Mon, 26 Jun 2023 11:34:07 +0800
Message-Id: <20230626033443.2943270-17-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-ZM-MESSAGEID: 1687750874209100001
Content-Type: text/plain; charset="utf-8"

Function copy_from_paddr() is declared in asm/setup.h, so it is better
implemented in setup.c. The current copy_from_paddr() implementation is
MMU-specific, so this commit moves it into mmu/setup.c. This also makes it
easier to provide an MPU version of copy_from_paddr() in a later commit.
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new commit --- xen/arch/arm/kernel.c | 27 --------------------------- xen/arch/arm/mmu/setup.c | 27 +++++++++++++++++++++++++++ 2 files changed, 27 insertions(+), 27 deletions(-) diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c index ca5318515e..2e64612ab3 100644 --- a/xen/arch/arm/kernel.c +++ b/xen/arch/arm/kernel.c @@ -41,33 +41,6 @@ struct minimal_dtb_header { =20 #define DTB_MAGIC 0xd00dfeed =20 -/** - * copy_from_paddr - copy data from a physical address - * @dst: destination virtual address - * @paddr: source physical address - * @len: length to copy - */ -void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len) -{ - void *src =3D (void *)FIXMAP_ADDR(FIXMAP_MISC); - - while (len) { - unsigned long l, s; - - s =3D paddr & (PAGE_SIZE-1); - l =3D min(PAGE_SIZE - s, len); - - set_fixmap(FIXMAP_MISC, maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC); - memcpy(dst, src + s, l); - clean_dcache_va_range(dst, l); - clear_fixmap(FIXMAP_MISC); - - paddr +=3D l; - dst +=3D l; - len -=3D l; - } -} - static void __init place_modules(struct kernel_info *info, paddr_t kernbase, paddr_t kernend) { diff --git a/xen/arch/arm/mmu/setup.c b/xen/arch/arm/mmu/setup.c index f4de0cb29d..a7590a2443 100644 --- a/xen/arch/arm/mmu/setup.c +++ b/xen/arch/arm/mmu/setup.c @@ -342,6 +342,33 @@ void __init setup_mm(void) } #endif =20 +/* + * copy_from_paddr - copy data from a physical address + * @dst: destination virtual address + * @paddr: source physical address + * @len: length to copy + */ +void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len) +{ + void *src =3D (void *)FIXMAP_ADDR(FIXMAP_MISC); + + while (len) { + unsigned long l, s; + + s =3D paddr & (PAGE_SIZE-1); + l =3D min(PAGE_SIZE - s, len); + + set_fixmap(FIXMAP_MISC, maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC); + memcpy(dst, src + s, l); + clean_dcache_va_range(dst, l); + clear_fixmap(FIXMAP_MISC); + + paddr +=3D l; + dst +=3D l; + 
len -=3D l; + } +} + /* * Local variables: * mode: C --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750876691371.51895646342507; Sun, 25 Jun 2023 20:41:16 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555082.866820 (Exim 4.92) (envelope-from ) id 1qDd5d-0001Gu-Ec; Mon, 26 Jun 2023 03:40:33 +0000 Received: by outflank-mailman (output) from mailman id 555082.866820; Mon, 26 Jun 2023 03:40:33 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5c-0001C3-Bc; Mon, 26 Jun 2023 03:40:32 +0000 Received: by outflank-mailman (input) for mailman id 555082; Mon, 26 Jun 2023 03:40:29 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1G-0007ej-38 for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:36:02 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 91d915d0-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:36:00 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 65C7A1FB; Sun, 25 Jun 2023 20:36:44 -0700 (PDT) Received: from a011292.shanghai.arm.com 
(a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id BCC283F64C; Sun, 25 Jun 2023 20:35:57 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 91d915d0-13d2-11ee-b237-6b7b168915f2
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 17/52] xen/arm: do not give memory back to static heap
Date: Mon, 26 Jun 2023 11:34:08 +0800
Message-Id: <20230626033443.2943270-18-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-ZM-MESSAGEID: 1687750878197100001
Content-Type: text/plain; charset="utf-8"

If the Xenheap is statically configured in the Device Tree, its size is
fixed, so we shall not give memory back to the static heap, as we normally
do in free_init_memory() etc., once initialization has finished.

We move the static_heap flag out of the init-data bootinfo, as we also need
it after the init data section has been destroyed. We introduce a new
helper, xen_is_using_staticheap(), to tell whether the Xenheap is statically
configured in the Device Tree. It always returns false when
!CONFIG_STATIC_MEMORY, since the static heap depends on the static memory
feature.
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new commit --- xen/arch/arm/bootfdt.c | 2 +- xen/arch/arm/include/asm/setup.h | 8 +++++++- xen/arch/arm/kernel.c | 3 ++- xen/arch/arm/mm.c | 8 ++++++-- xen/arch/arm/mmu/setup.c | 4 ++-- xen/arch/arm/setup.c | 29 +++++++++++++++++------------ 6 files changed, 35 insertions(+), 19 deletions(-) diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c index 2673ad17a1..c4497e3b31 100644 --- a/xen/arch/arm/bootfdt.c +++ b/xen/arch/arm/bootfdt.c @@ -341,7 +341,7 @@ static int __init process_chosen_node(const void *fdt, = int node, if ( rc ) return rc; =20 - bootinfo.static_heap =3D true; + static_heap =3D true; } =20 printk("Checking for initrd in /chosen\n"); diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/se= tup.h index 0922549631..d691f6bf93 100644 --- a/xen/arch/arm/include/asm/setup.h +++ b/xen/arch/arm/include/asm/setup.h @@ -104,9 +104,15 @@ struct bootinfo { #ifdef CONFIG_ACPI struct meminfo acpi; #endif - bool static_heap; }; =20 +extern bool static_heap; +#ifdef CONFIG_STATIC_MEMORY +#define xen_is_using_staticheap() (static_heap) +#else +#define xen_is_using_staticheap() (false) +#endif + struct map_range_data { struct domain *d; diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c index 2e64612ab3..d13ef0330b 100644 --- a/xen/arch/arm/kernel.c +++ b/xen/arch/arm/kernel.c @@ -246,7 +246,8 @@ static __init int kernel_decompress(struct bootmodule *= mod, uint32_t offset) * Free the original kernel, update the pointers to the * decompressed kernel */ - fw_unreserved_regions(addr, addr + size, init_domheap_pages, 0); + if ( !xen_is_using_staticheap() ) + fw_unreserved_regions(addr, addr + size, init_domheap_pages, 0); =20 return 0; } diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index e665d1f97a..4b174f4d08 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -177,8 +177,12 @@ void free_init_memory(void) if ( rc ) panic("Unable to remove the init section (rc 
=3D %d)\n", rc); =20 - init_domheap_pages(pa, pa + len); - printk("Freed %ldkB init memory.\n", (long)(__init_end-__init_begin)>>= 10); + if ( !xen_is_using_staticheap() ) + { + init_domheap_pages(pa, pa + len); + printk("Freed %ldkB init memory.\n", + (long)(__init_end-__init_begin)>>10); + } } =20 void arch_dump_shared_mem_info(void) diff --git a/xen/arch/arm/mmu/setup.c b/xen/arch/arm/mmu/setup.c index a7590a2443..cf7018b190 100644 --- a/xen/arch/arm/mmu/setup.c +++ b/xen/arch/arm/mmu/setup.c @@ -196,7 +196,7 @@ void __init setup_mm(void) =20 total_pages =3D ram_size >> PAGE_SHIFT; =20 - if ( bootinfo.static_heap ) + if ( xen_is_using_staticheap() ) { for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) { @@ -241,7 +241,7 @@ void __init setup_mm(void) =20 do { - e =3D bootinfo.static_heap ? + e =3D xen_is_using_staticheap() ? fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)= ) : consider_modules(ram_start, ram_end, pfn_to_paddr(xenheap_pages), diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index 34923d9984..6f8dd98d6b 100644 --- a/xen/arch/arm/setup.c +++ b/xen/arch/arm/setup.c @@ -59,6 +59,8 @@ bool __read_mostly acpi_disabled; =20 domid_t __read_mostly max_init_domid; =20 +bool __read_mostly static_heap; + static __used void init_done(void) { int rc; @@ -508,22 +510,25 @@ void __init discard_initial_modules(void) struct bootmodules *mi =3D &bootinfo.modules; int i; =20 - for ( i =3D 0; i < mi->nr_mods; i++ ) + if ( !xen_is_using_staticheap() ) { - paddr_t s =3D mi->module[i].start; - paddr_t e =3D s + PAGE_ALIGN(mi->module[i].size); + for ( i =3D 0; i < mi->nr_mods; i++ ) + { + paddr_t s =3D mi->module[i].start; + paddr_t e =3D s + PAGE_ALIGN(mi->module[i].size); =20 - if ( mi->module[i].kind =3D=3D BOOTMOD_XEN ) - continue; + if ( mi->module[i].kind =3D=3D BOOTMOD_XEN ) + continue; =20 - if ( !mfn_valid(maddr_to_mfn(s)) || - !mfn_valid(maddr_to_mfn(e)) ) - continue; + if ( !mfn_valid(maddr_to_mfn(s)) || + 
!mfn_valid(maddr_to_mfn(e)) ) + continue; =20 - fw_unreserved_regions(s, e, init_domheap_pages, 0); - } + fw_unreserved_regions(s, e, init_domheap_pages, 0); + } =20 - mi->nr_mods =3D 0; + mi->nr_mods =3D 0; + } =20 remove_early_mappings(); } @@ -660,7 +665,7 @@ void __init populate_boot_allocator(void) const struct meminfo *banks =3D &bootinfo.mem; paddr_t s, e; =20 - if ( bootinfo.static_heap ) + if ( xen_is_using_staticheap() ) { for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) { --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750782719182.1248878636817; Sun, 25 Jun 2023 20:39:42 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554987.866511 (Exim 4.92) (envelope-from ) id 1qDd4W-0004Ik-Sw; Mon, 26 Jun 2023 03:39:24 +0000 Received: by outflank-mailman (output) from mailman id 554987.866511; Mon, 26 Jun 2023 03:39:24 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd4W-0004Ib-PJ; Mon, 26 Jun 2023 03:39:24 +0000 Received: by outflank-mailman (input) for mailman id 554987; Mon, 26 Jun 2023 03:39:24 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1J-0000HH-V3 for xen-devel@lists.xenproject.org; Mon, 26 
Jun 2023 03:36:05 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 93c363e1-13d2-11ee-8611-37d641c3527e; Mon, 26 Jun 2023 05:36:04 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 814041FB; Sun, 25 Jun 2023 20:36:47 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id D4E8A3F64C; Sun, 25 Jun 2023 20:36:00 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 93c363e1-13d2-11ee-8611-37d641c3527e
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 18/52] xen/arm: only map the init text section RW in free_init_memory
Date: Mon, 26 Jun 2023 11:34:09 +0800
Message-Id: <20230626033443.2943270-19-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-ZM-MESSAGEID: 1687750783979100001
Content-Type: text/plain; charset="utf-8"

In free_init_memory(), we do not need to remap the whole init section RW,
as only the init text section is mapped RO at boot time.
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new commit --- xen/arch/arm/mm.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index 4b174f4d08..97642f35d3 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -145,16 +145,17 @@ int modify_xen_mappings(unsigned long s, unsigned lon= g e, unsigned int flags) void free_init_memory(void) { paddr_t pa =3D virt_to_maddr(__init_begin); + unsigned long inittext_end =3D round_pgup((unsigned long)_einittext); unsigned long len =3D __init_end - __init_begin; uint32_t insn; unsigned int i, nr =3D len / sizeof(insn); uint32_t *p; int rc; =20 - rc =3D modify_xen_mappings((unsigned long)__init_begin, - (unsigned long)__init_end, PAGE_HYPERVISOR_RW= ); + rc =3D modify_xen_mappings((unsigned long)__init_begin, inittext_end, + PAGE_HYPERVISOR_RW); if ( rc ) - panic("Unable to map RW the init section (rc =3D %d)\n", rc); + panic("Unable to map RW the init text section (rc =3D %d)\n", rc); =20 /* * From now on, init will not be used for execution anymore, --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750860201725.2855157528511; Sun, 25 Jun 2023 20:41:00 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555073.866773 (Exim 4.92) (envelope-from ) id 1qDd5T-0007Th-Cv; Mon, 26 Jun 2023 03:40:23 +0000 
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 19/52] xen/arm: switch to use ioremap_xxx in common file
Date: Mon, 26 Jun 2023 11:34:10 +0800
Message-Id: <20230626033443.2943270-20-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

On Arm, with the introduction of MPU support, the VMAP scheme, which relies on virtual address translation, becomes an MMU-only feature. So we want to avoid direct use of vmap-related functions, like __vmap(), in common files, and switch to the more generic ioremap_xxx helpers instead. Later, we then only need to implement an MPU version of ioremap_xxx.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new commit
---
 xen/arch/arm/kernel.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index d13ef0330b..30f8bc5923 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -172,7 +172,6 @@ static __init int kernel_decompress(struct bootmodule *mod, uint32_t offset)
     unsigned int kernel_order_out;
     paddr_t output_size;
     struct page_info *pages;
-    mfn_t mfn;
     int i;
     paddr_t addr = mod->start;
     paddr_t size = mod->size;
@@ -209,13 +208,18 @@ static __init int kernel_decompress(struct bootmodule *mod, uint32_t offset)
         iounmap(input);
         return -ENOMEM;
     }
-    mfn = page_to_mfn(pages);
-    output = __vmap(&mfn, 1 << kernel_order_out, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
+    output = ioremap_cache(page_to_maddr(pages),
+                           pfn_to_paddr(1UL << kernel_order_out));
+    if ( output == NULL )
+    {
+        iounmap(input);
+        return -ENOMEM;
+    }

     rc = perform_gunzip(output, input, size);
     clean_dcache_va_range(output, output_size);
     iounmap(input);
-    vunmap(output);
+    iounmap(output);

     if ( rc )
     {
-- 
2.25.1
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 20/52] xen/mmu: move MMU specific P2M code to mmu/p2m.c and mmu/p2m.h
Date: Mon, 26 Jun 2023 11:34:11 +0800
Message-Id: <20230626033443.2943270-21-Penny.Zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

The current P2M implementation is designed for MMU systems only. We move the MMU-specific code into mmu/p2m.c and keep only generic code, such as the VMID allocator, in p2m.c. We also move MMU-specific definitions and declarations, such as the function p2m_tlb_flush_sync(), to mmu/p2m.h.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v2:
- new commit
---
v3:
- remove MPU stubs
- adapt to the introduction of new directories: mmu/
---
 xen/arch/arm/Makefile              |    1 +
 xen/arch/arm/include/asm/mmu/p2m.h |   18 +
 xen/arch/arm/include/asm/p2m.h     |   30 +-
 xen/arch/arm/mmu/p2m.c             | 1612 +++++++++++++++++++++++++
 xen/arch/arm/p2m.c                 | 1770 ++--------------------------
 5 files changed, 1744 insertions(+), 1687 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/mmu/p2m.h
 create mode 100644 xen/arch/arm/mmu/p2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index d01528cac6..a83a535cd7 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -39,6 +39,7 @@ obj-y += mem_access.o
 ifeq ($(CONFIG_HAS_MMU), y)
 obj-y += mmu/mm.o
 obj-y += mmu/setup.o
+obj-y += mmu/p2m.o
 endif
 obj-y += mm.o
 obj-y += monitor.o
diff --git a/xen/arch/arm/include/asm/mmu/p2m.h b/xen/arch/arm/include/asm/=
mmu/p2m.h new file mode 100644 index 0000000000..bc108bdc4b --- /dev/null +++ b/xen/arch/arm/include/asm/mmu/p2m.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _XEN_P2M_MMU_H +#define _XEN_P2M_MMU_H + +struct p2m_domain; +void p2m_force_tlb_flush_sync(struct p2m_domain *p2m); +void p2m_tlb_flush_sync(struct p2m_domain *p2m); + +#endif /* _XEN_P2M_MMU_H */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h index 940495d42b..f62d632830 100644 --- a/xen/arch/arm/include/asm/p2m.h +++ b/xen/arch/arm/include/asm/p2m.h @@ -19,6 +19,20 @@ extern unsigned int p2m_root_level; #define P2M_ROOT_ORDER p2m_root_order #define P2M_ROOT_LEVEL p2m_root_level =20 +#define MAX_VMID_8_BIT (1UL << 8) +#define MAX_VMID_16_BIT (1UL << 16) + +#define INVALID_VMID 0 /* VMID 0 is reserved */ + +#ifdef CONFIG_ARM_64 +extern unsigned int max_vmid; +/* VMID is by default 8 bit width on AArch64 */ +#define MAX_VMID max_vmid +#else +/* VMID is always 8 bit width on AArch32 */ +#define MAX_VMID MAX_VMID_8_BIT +#endif + struct domain; =20 extern void memory_type_changed(struct domain *); @@ -156,6 +170,10 @@ typedef enum { #endif #include =20 +#ifdef CONFIG_HAS_MMU +#include +#endif + static inline bool arch_acquire_resource_check(struct domain *d) { /* @@ -180,7 +198,11 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx) */ void p2m_restrict_ipa_bits(unsigned int ipa_bits); =20 +void p2m_vmid_allocator_init(void); +int p2m_alloc_vmid(struct domain *d); + /* Second stage paging setup, to be called on all CPUs */ +void setup_virt_paging_one(void *data); void setup_virt_paging(void); =20 /* Init the datastructures for later use by the p2m code */ @@ -242,8 +264,6 @@ static inline int p2m_is_write_locked(struct p2m_domain= *p2m) return rw_is_write_locked(&p2m->lock); } =20 -void p2m_tlb_flush_sync(struct p2m_domain 
*p2m); - /* Look up the MFN corresponding to a domain's GFN. */ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t); =20 @@ -268,6 +288,12 @@ int p2m_set_entry(struct p2m_domain *p2m, mfn_t smfn, p2m_type_t t, p2m_access_t a); +int __p2m_set_entry(struct p2m_domain *p2m, + gfn_t sgfn, + unsigned int page_order, + mfn_t smfn, + p2m_type_t t, + p2m_access_t a); =20 bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn); =20 diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c new file mode 100644 index 0000000000..ad0c7fa30e --- /dev/null +++ b/xen/arch/arm/mmu/p2m.c @@ -0,0 +1,1612 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#include +#include +#include +#include + +#include +#include +#include +#include + +unsigned int __read_mostly p2m_root_order; +unsigned int __read_mostly p2m_root_level; + +#define P2M_ROOT_PAGES (1<arch.paging.p2m_total_pages) =3D + d->arch.paging.p2m_total_pages + 1; + page_list_add_tail(pg, &d->arch.paging.p2m_freelist); + } + else if ( d->arch.paging.p2m_total_pages > pages ) + { + /* Need to return memory to domheap */ + pg =3D page_list_remove_head(&d->arch.paging.p2m_freelist); + if( pg ) + { + ACCESS_ONCE(d->arch.paging.p2m_total_pages) =3D + d->arch.paging.p2m_total_pages - 1; + free_domheap_page(pg); + } + else + { + printk(XENLOG_ERR + "Failed to free P2M pages, P2M freelist is empty.\n= "); + return -ENOMEM; + } + } + else + break; + + /* Check to see if we need to yield and try again */ + if ( preempted && general_preempt_check() ) + { + *preempted =3D true; + return -ERESTART; + } + } + + return 0; +} + +int arch_set_paging_mempool_size(struct domain *d, uint64_t size) +{ + unsigned long pages =3D size >> PAGE_SHIFT; + bool preempted =3D false; + int rc; + + if ( (size & ~PAGE_MASK) || /* Non page-sized request? */ + pages !=3D (size >> PAGE_SHIFT) ) /* 32-bit overflow? 
*/ + return -EINVAL; + + spin_lock(&d->arch.paging.lock); + rc =3D p2m_set_allocation(d, pages, &preempted); + spin_unlock(&d->arch.paging.lock); + + ASSERT(preempted =3D=3D (rc =3D=3D -ERESTART)); + + return rc; +} + +int p2m_teardown_allocation(struct domain *d) +{ + int ret =3D 0; + bool preempted =3D false; + + spin_lock(&d->arch.paging.lock); + if ( d->arch.paging.p2m_total_pages !=3D 0 ) + { + ret =3D p2m_set_allocation(d, 0, &preempted); + if ( preempted ) + { + spin_unlock(&d->arch.paging.lock); + return -ERESTART; + } + ASSERT(d->arch.paging.p2m_total_pages =3D=3D 0); + } + spin_unlock(&d->arch.paging.lock); + + return ret; +} + +int p2m_teardown(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + unsigned long count =3D 0; + struct page_info *pg; + int rc =3D 0; + + p2m_write_lock(p2m); + + while ( (pg =3D page_list_remove_head(&p2m->pages)) ) + { + p2m_free_page(p2m->domain, pg); + count++; + /* Arbitrarily preempt every 512 iterations */ + if ( !(count % 512) && hypercall_preempt_check() ) + { + rc =3D -ERESTART; + break; + } + } + + p2m_write_unlock(p2m); + + return rc; +} + +void p2m_dump_info(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + + p2m_read_lock(p2m); + printk("p2m mappings for domain %d (vmid %d):\n", + d->domain_id, p2m->vmid); + BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]); + printk(" 1G mappings: %ld (shattered %ld)\n", + p2m->stats.mappings[1], p2m->stats.shattered[1]); + printk(" 2M mappings: %ld (shattered %ld)\n", + p2m->stats.mappings[2], p2m->stats.shattered[2]); + printk(" 4K mappings: %ld\n", p2m->stats.mappings[3]); + p2m_read_unlock(p2m); +} + +/* + * p2m_save_state and p2m_restore_state work in pair to workaround + * ARM64_WORKAROUND_AT_SPECULATE. p2m_save_state will set-up VTTBR to + * point to the empty page-tables to stop allocating TLB entries. 
+ */ +void p2m_save_state(struct vcpu *p) +{ + p->arch.sctlr =3D READ_SYSREG(SCTLR_EL1); + + if ( cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) ) + { + WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR= _EL2); + /* + * Ensure VTTBR_EL2 is correctly synchronized so we can restore + * the next vCPU context without worrying about AT instruction + * speculation. + */ + isb(); + } +} + +void p2m_restore_state(struct vcpu *n) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(n->domain); + uint8_t *last_vcpu_ran; + + if ( is_idle_vcpu(n) ) + return; + + WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1); + WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2); + + /* + * ARM64_WORKAROUND_AT_SPECULATE: VTTBR_EL2 should be restored after a= ll + * registers associated to EL1/EL0 translations regime have been + * synchronized. + */ + asm volatile(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_AT_SPECULATE)); + WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2); + + last_vcpu_ran =3D &p2m->last_vcpu_ran[smp_processor_id()]; + + /* + * While we are restoring an out-of-context translation regime + * we still need to ensure: + * - VTTBR_EL2 is synchronized before flushing the TLBs + * - All registers for EL1 are synchronized before executing an AT + * instructions targeting S1/S2. + */ + isb(); + + /* + * Flush local TLB for the domain to prevent wrong TLB translation + * when running multiple vCPU of the same domain on a single pCPU. + */ + if ( *last_vcpu_ran !=3D INVALID_VCPU_ID && *last_vcpu_ran !=3D n->vcp= u_id ) + flush_guest_tlb_local(); + + *last_vcpu_ran =3D n->vcpu_id; +} + +/* + * Force a synchronous P2M TLB flush. + * + * Must be called with the p2m lock held. + */ +void p2m_force_tlb_flush_sync(struct p2m_domain *p2m) +{ + unsigned long flags =3D 0; + uint64_t ovttbr; + + ASSERT(p2m_is_write_locked(p2m)); + + /* + * ARM only provides an instruction to flush TLBs for the current + * VMID. So switch to the VTTBR of a given P2M if different. 
+ */ + ovttbr =3D READ_SYSREG64(VTTBR_EL2); + if ( ovttbr !=3D p2m->vttbr ) + { + uint64_t vttbr; + + local_irq_save(flags); + + /* + * ARM64_WORKAROUND_AT_SPECULATE: We need to stop AT to allocate + * TLBs entries because the context is partially modified. We + * only need the VMID for flushing the TLBs, so we can generate + * a new VTTBR with the VMID to flush and the empty root table. + */ + if ( !cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) ) + vttbr =3D p2m->vttbr; + else + vttbr =3D generate_vttbr(p2m->vmid, empty_root_mfn); + + WRITE_SYSREG64(vttbr, VTTBR_EL2); + + /* Ensure VTTBR_EL2 is synchronized before flushing the TLBs */ + isb(); + } + + flush_guest_tlb(); + + if ( ovttbr !=3D READ_SYSREG64(VTTBR_EL2) ) + { + WRITE_SYSREG64(ovttbr, VTTBR_EL2); + /* Ensure VTTBR_EL2 is back in place before continuing. */ + isb(); + local_irq_restore(flags); + } + + p2m->need_flush =3D false; +} + +void p2m_tlb_flush_sync(struct p2m_domain *p2m) +{ + if ( p2m->need_flush ) + p2m_force_tlb_flush_sync(p2m); +} + +/* + * Find and map the root page table. The caller is responsible for + * unmapping the table. + * + * The function will return NULL if the offset of the root table is + * invalid. + */ +static lpae_t *p2m_get_root_pointer(struct p2m_domain *p2m, + gfn_t gfn) +{ + unsigned long root_table; + + /* + * While the root table index is the offset from the previous level, + * we can't use (P2M_ROOT_LEVEL - 1) because the root level might be + * 0. Yet we still want to check if all the unused bits are zeroed. + */ + root_table =3D gfn_x(gfn) >> (XEN_PT_LEVEL_ORDER(P2M_ROOT_LEVEL) + + XEN_PT_LPAE_SHIFT); + if ( root_table >=3D P2M_ROOT_PAGES ) + return NULL; + + return __map_domain_page(p2m->root + root_table); +} + +/* + * Lookup the MFN corresponding to a domain's GFN. + * Lookup mem access in the radix tree. + * The entry associated with the GFN is considered valid.
+ */ +static p2m_access_t p2m_mem_access_radix_get(struct p2m_domain *p2m, gfn_t= gfn) +{ + void *ptr; + + if ( !p2m->mem_access_enabled ) + return p2m->default_access; + + ptr =3D radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn)); + if ( !ptr ) + return p2m_access_rwx; + else + return radix_tree_ptr_to_int(ptr); +} + +/* + * In the case of the P2M, the valid bit is used for other purpose. Use + * the type to check whether an entry is valid. + */ +static inline bool p2m_is_valid(lpae_t pte) +{ + return pte.p2m.type !=3D p2m_invalid; +} + +/* + * lpae_is_* helpers don't check whether the valid bit is set in the + * PTE. Provide our own overlay to check the valid bit. + */ +static inline bool p2m_is_mapping(lpae_t pte, unsigned int level) +{ + return p2m_is_valid(pte) && lpae_is_mapping(pte, level); +} + +static inline bool p2m_is_superpage(lpae_t pte, unsigned int level) +{ + return p2m_is_valid(pte) && lpae_is_superpage(pte, level); +} + +#define GUEST_TABLE_MAP_FAILED 0 +#define GUEST_TABLE_SUPER_PAGE 1 +#define GUEST_TABLE_NORMAL_PAGE 2 + +static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry); + +/* + * Take the currently mapped table, find the corresponding GFN entry, + * and map the next table, if available. The previous table will be + * unmapped if the next level was mapped (e.g GUEST_TABLE_NORMAL_PAGE + * returned). + * + * The read_only parameters indicates whether intermediate tables should + * be allocated when not present. + * + * Return values: + * GUEST_TABLE_MAP_FAILED: Either read_only was set and the entry + * was empty, or allocating a new page failed. + * GUEST_TABLE_NORMAL_PAGE: next level mapped normally + * GUEST_TABLE_SUPER_PAGE: The next entry points to a superpage. 
+ */ +static int p2m_next_level(struct p2m_domain *p2m, bool read_only, + unsigned int level, lpae_t **table, + unsigned int offset) +{ + lpae_t *entry; + int ret; + mfn_t mfn; + + entry =3D *table + offset; + + if ( !p2m_is_valid(*entry) ) + { + if ( read_only ) + return GUEST_TABLE_MAP_FAILED; + + ret =3D p2m_create_table(p2m, entry); + if ( ret ) + return GUEST_TABLE_MAP_FAILED; + } + + /* The function p2m_next_level is never called at the 3rd level */ + ASSERT(level < 3); + if ( p2m_is_mapping(*entry, level) ) + return GUEST_TABLE_SUPER_PAGE; + + mfn =3D lpae_get_mfn(*entry); + + unmap_domain_page(*table); + *table =3D map_domain_page(mfn); + + return GUEST_TABLE_NORMAL_PAGE; +} + +/* + * Get the details of a given gfn. + * + * If the entry is present, the associated MFN will be returned and the + * access and type filled up. The page_order will correspond to the + * order of the mapping in the page table (i.e it could be a superpage). + * + * If the entry is not present, INVALID_MFN will be returned and the + * page_order will be set according to the order of the invalid range. + * + * valid will contain the value of bit[0] (e.g valid bit) of the + * entry. 
+ */ +mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn, + p2m_type_t *t, p2m_access_t *a, + unsigned int *page_order, + bool *valid) +{ + paddr_t addr =3D gfn_to_gaddr(gfn); + unsigned int level =3D 0; + lpae_t entry, *table; + int rc; + mfn_t mfn =3D INVALID_MFN; + p2m_type_t _t; + DECLARE_OFFSETS(offsets, addr); + + ASSERT(p2m_is_locked(p2m)); + BUILD_BUG_ON(THIRD_MASK !=3D PAGE_MASK); + + /* Allow t to be NULL */ + t =3D t ?: &_t; + + *t =3D p2m_invalid; + + if ( valid ) + *valid =3D false; + + /* XXX: Check if the mapping is lower than the mapped gfn */ + + /* This gfn is higher than the highest the p2m map currently holds */ + if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) ) + { + for ( level =3D P2M_ROOT_LEVEL; level < 3; level++ ) + if ( (gfn_x(gfn) & (XEN_PT_LEVEL_MASK(level) >> PAGE_SHIFT)) > + gfn_x(p2m->max_mapped_gfn) ) + break; + + goto out; + } + + table =3D p2m_get_root_pointer(p2m, gfn); + + /* + * the table should always be non-NULL because the gfn is below + * p2m->max_mapped_gfn and the root table pages are always present. + */ + if ( !table ) + { + ASSERT_UNREACHABLE(); + level =3D P2M_ROOT_LEVEL; + goto out; + } + + for ( level =3D P2M_ROOT_LEVEL; level < 3; level++ ) + { + rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]); + if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) + goto out_unmap; + else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) + break; + } + + entry =3D table[offsets[level]]; + + if ( p2m_is_valid(entry) ) + { + *t =3D entry.p2m.type; + + if ( a ) + *a =3D p2m_mem_access_radix_get(p2m, gfn); + + mfn =3D lpae_get_mfn(entry); + /* + * The entry may point to a superpage. Find the MFN associated + * to the GFN. 
+ */ + mfn =3D mfn_add(mfn, + gfn_x(gfn) & ((1UL << XEN_PT_LEVEL_ORDER(level)) - 1= )); + + if ( valid ) + *valid =3D lpae_is_valid(entry); + } + +out_unmap: + unmap_domain_page(table); + +out: + if ( page_order ) + *page_order =3D XEN_PT_LEVEL_ORDER(level); + + return mfn; +} + +static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a) +{ + /* First apply type permissions */ + switch ( t ) + { + case p2m_ram_rw: + e->p2m.xn =3D 0; + e->p2m.write =3D 1; + break; + + case p2m_ram_ro: + e->p2m.xn =3D 0; + e->p2m.write =3D 0; + break; + + case p2m_iommu_map_rw: + case p2m_map_foreign_rw: + case p2m_grant_map_rw: + case p2m_mmio_direct_dev: + case p2m_mmio_direct_nc: + case p2m_mmio_direct_c: + e->p2m.xn =3D 1; + e->p2m.write =3D 1; + break; + + case p2m_iommu_map_ro: + case p2m_map_foreign_ro: + case p2m_grant_map_ro: + case p2m_invalid: + e->p2m.xn =3D 1; + e->p2m.write =3D 0; + break; + + case p2m_max_real_type: + BUG(); + break; + } + + /* Then restrict with access permissions */ + switch ( a ) + { + case p2m_access_rwx: + break; + case p2m_access_wx: + e->p2m.read =3D 0; + break; + case p2m_access_rw: + e->p2m.xn =3D 1; + break; + case p2m_access_w: + e->p2m.read =3D 0; + e->p2m.xn =3D 1; + break; + case p2m_access_rx: + case p2m_access_rx2rw: + e->p2m.write =3D 0; + break; + case p2m_access_x: + e->p2m.write =3D 0; + e->p2m.read =3D 0; + break; + case p2m_access_r: + e->p2m.write =3D 0; + e->p2m.xn =3D 1; + break; + case p2m_access_n: + case p2m_access_n2rwx: + e->p2m.read =3D e->p2m.write =3D 0; + e->p2m.xn =3D 1; + break; + } +} + +static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a) +{ + /* + * sh, xn and write bit will be defined in the following switches + * based on mattr and t. 
+ */ + lpae_t e =3D (lpae_t) { + .p2m.af =3D 1, + .p2m.read =3D 1, + .p2m.table =3D 1, + .p2m.valid =3D 1, + .p2m.type =3D t, + }; + + BUILD_BUG_ON(p2m_max_real_type > (1 << 4)); + + switch ( t ) + { + case p2m_mmio_direct_dev: + e.p2m.mattr =3D MATTR_DEV; + e.p2m.sh =3D LPAE_SH_OUTER; + break; + + case p2m_mmio_direct_c: + e.p2m.mattr =3D MATTR_MEM; + e.p2m.sh =3D LPAE_SH_OUTER; + break; + + /* + * ARM ARM: Overlaying the shareability attribute (DDI + * 0406C.b B3-1376 to 1377) + * + * A memory region with a resultant memory type attribute of Normal, + * and a resultant cacheability attribute of Inner Non-cacheable, + * Outer Non-cacheable, must have a resultant shareability attribute + * of Outer Shareable, otherwise shareability is UNPREDICTABLE. + * + * On ARMv8 shareability is ignored and explicitly treated as Outer + * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable. + * See the note for table D4-40, in page 1788 of the ARM DDI 0487A.j. + */ + case p2m_mmio_direct_nc: + e.p2m.mattr =3D MATTR_MEM_NC; + e.p2m.sh =3D LPAE_SH_OUTER; + break; + + default: + e.p2m.mattr =3D MATTR_MEM; + e.p2m.sh =3D LPAE_SH_INNER; + } + + p2m_set_permission(&e, t, a); + + ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK)); + + lpae_set_mfn(e, mfn); + + return e; +} + +/* Generate table entry with correct attributes. */ +static lpae_t page_to_p2m_table(struct page_info *page) +{ + /* + * The access value does not matter because the hardware will ignore + * the permission fields for table entry. + * + * We use p2m_ram_rw so the entry has a valid type. This is important + * for p2m_is_valid() to return valid on table entries. 
+ */ + return mfn_to_p2m_entry(page_to_mfn(page), p2m_ram_rw, p2m_access_rwx); +} + +static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool clean_pte) +{ + write_pte(p, pte); + if ( clean_pte ) + clean_dcache(*p); +} + +static inline void p2m_remove_pte(lpae_t *p, bool clean_pte) +{ + lpae_t pte; + + memset(&pte, 0x00, sizeof(pte)); + p2m_write_pte(p, pte, clean_pte); +} + +/* Allocate a new page table page and hook it in via the given entry. */ +static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry) +{ + struct page_info *page; + lpae_t *p; + + ASSERT(!p2m_is_valid(*entry)); + + page =3D p2m_alloc_page(p2m->domain); + if ( page =3D=3D NULL ) + return -ENOMEM; + + page_list_add(page, &p2m->pages); + + p =3D __map_domain_page(page); + clear_page(p); + + if ( p2m->clean_pte ) + clean_dcache_va_range(p, PAGE_SIZE); + + unmap_domain_page(p); + + p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte); + + return 0; +} + +static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn, + p2m_access_t a) +{ + int rc; + + if ( !p2m->mem_access_enabled ) + return 0; + + if ( p2m_access_rwx =3D=3D a ) + { + radix_tree_delete(&p2m->mem_access_settings, gfn_x(gfn)); + return 0; + } + + rc =3D radix_tree_insert(&p2m->mem_access_settings, gfn_x(gfn), + radix_tree_int_to_ptr(a)); + if ( rc =3D=3D -EEXIST ) + { + /* If a setting already exists, change it to the new one */ + radix_tree_replace_slot( + radix_tree_lookup_slot( + &p2m->mem_access_settings, gfn_x(gfn)), + radix_tree_int_to_ptr(a)); + rc =3D 0; + } + + return rc; +} + +/* + * Put any references on the single 4K page referenced by pte. + * TODO: Handle superpages, for now we only take special references for le= af + * pages (specifically foreign ones, which can't be super mapped today). 
+ */ +static void p2m_put_l3_page(const lpae_t pte) +{ + mfn_t mfn =3D lpae_get_mfn(pte); + + ASSERT(p2m_is_valid(pte)); + + /* + * TODO: Handle other p2m types + * + * It's safe to do the put_page here because page_alloc will + * flush the TLBs if the page is reallocated before the end of + * this loop. + */ + if ( p2m_is_foreign(pte.p2m.type) ) + { + ASSERT(mfn_valid(mfn)); + put_page(mfn_to_page(mfn)); + } + /* Detect the xenheap page and mark the stored GFN as invalid. */ + else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) ) + page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN); +} + +/* Free lpae sub-tree behind an entry */ +static void p2m_free_entry(struct p2m_domain *p2m, + lpae_t entry, unsigned int level) +{ + unsigned int i; + lpae_t *table; + mfn_t mfn; + struct page_info *pg; + + /* Nothing to do if the entry is invalid. */ + if ( !p2m_is_valid(entry) ) + return; + + if ( p2m_is_superpage(entry, level) || (level =3D=3D 3) ) + { +#ifdef CONFIG_IOREQ_SERVER + /* + * If this gets called then either the entry was replaced by an en= try + * with a different base (valid case) or the shattering of a super= page + * has failed (error case). + * So, at worst, the spurious mapcache invalidation might be sent. + */ + if ( p2m_is_ram(entry.p2m.type) && + domain_has_ioreq_server(p2m->domain) ) + ioreq_request_mapcache_invalidate(p2m->domain); +#endif + + p2m->stats.mappings[level]--; + /* Nothing to do if the entry is a super-page. */ + if ( level =3D=3D 3 ) + p2m_put_l3_page(entry); + return; + } + + table =3D map_domain_page(lpae_get_mfn(entry)); + for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) + p2m_free_entry(p2m, *(table + i), level + 1); + + unmap_domain_page(table); + + /* + * Make sure all the references in the TLB have been removed before + * freeing the intermediate page table. + * XXX: Should we defer the free of the page table to avoid the + * flush?
+ */
+    p2m_tlb_flush_sync(p2m);
+
+    mfn = lpae_get_mfn(entry);
+    ASSERT(mfn_valid(mfn));
+
+    pg = mfn_to_page(mfn);
+
+    page_list_del(pg, &p2m->pages);
+    p2m_free_page(p2m->domain, pg);
+}
+
+static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
+                                unsigned int level, unsigned int target,
+                                const unsigned int *offsets)
+{
+    struct page_info *page;
+    unsigned int i;
+    lpae_t pte, *table;
+    bool rv = true;
+
+    /* Convenience aliases */
+    mfn_t mfn = lpae_get_mfn(*entry);
+    unsigned int next_level = level + 1;
+    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
+
+    /*
+     * This should only be called with target != level and the entry is
+     * a superpage.
+     */
+    ASSERT(level < target);
+    ASSERT(p2m_is_superpage(*entry, level));
+
+    page = p2m_alloc_page(p2m->domain);
+    if ( !page )
+        return false;
+
+    page_list_add(page, &p2m->pages);
+    table = __map_domain_page(page);
+
+    /*
+     * We are either splitting a first level 1G page into 512 second level
+     * 2M pages, or a second level 2M page into 512 third level 4K pages.
+     */
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+    {
+        lpae_t *new_entry = table + i;
+
+        /*
+         * Use the content of the superpage entry and override
+         * the necessary fields, so the correct permissions are kept.
+         */
+        pte = *entry;
+        lpae_set_mfn(pte, mfn_add(mfn, i << level_order));
+
+        /*
+         * First and second level pages set p2m.table = 0, but third
+         * level entries set p2m.table = 1.
+         */
+        pte.p2m.table = (next_level == 3);
+
+        write_pte(new_entry, pte);
+    }
+
+    /* Update stats */
+    p2m->stats.shattered[level]++;
+    p2m->stats.mappings[level]--;
+    p2m->stats.mappings[next_level] += XEN_PT_LPAE_ENTRIES;
+
+    /*
+     * Shatter the superpage down to the level at which we want to make
+     * the changes.
+     * This is done outside the loop to avoid checking the offset to
+     * know whether the entry should be shattered for every entry.
+     */
+    if ( next_level != target )
+        rv = p2m_split_superpage(p2m, table + offsets[next_level],
+                                 level + 1, target, offsets);
+
+    if ( p2m->clean_pte )
+        clean_dcache_va_range(table, PAGE_SIZE);
+
+    unmap_domain_page(table);
+
+    /*
+     * Even if we failed, we should install the newly allocated LPAE
+     * entry. The caller will be in charge of freeing the sub-tree.
+     */
+    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte);
+
+    return rv;
+}
+
+/*
+ * Insert an entry in the p2m. This should be called with a mapping
+ * equal to a page/superpage (4K, 2M, 1G).
+ */
+int __p2m_set_entry(struct p2m_domain *p2m,
+                    gfn_t sgfn,
+                    unsigned int page_order,
+                    mfn_t smfn,
+                    p2m_type_t t,
+                    p2m_access_t a)
+{
+    unsigned int level = 0;
+    unsigned int target = 3 - (page_order / XEN_PT_LPAE_SHIFT);
+    lpae_t *entry, *table, orig_pte;
+    int rc;
+    /* A mapping is removed if the MFN is invalid. */
+    bool removing_mapping = mfn_eq(smfn, INVALID_MFN);
+    DECLARE_OFFSETS(offsets, gfn_to_gaddr(sgfn));
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    /*
+     * Check if the level target is valid: we only support
+     * 4K - 2M - 1G mapping.
+     */
+    ASSERT(target > 0 && target <= 3);
+
+    table = p2m_get_root_pointer(p2m, sgfn);
+    if ( !table )
+        return -EINVAL;
+
+    for ( level = P2M_ROOT_LEVEL; level < target; level++ )
+    {
+        /*
+         * Don't try to allocate an intermediate page table if the mapping
+         * is about to be removed.
+         */
+        rc = p2m_next_level(p2m, removing_mapping,
+                            level, &table, offsets[level]);
+        if ( rc == GUEST_TABLE_MAP_FAILED )
+        {
+            /*
+             * We are here because p2m_next_level has failed to map
+             * the intermediate page table (e.g. the table does not exist
+             * and the p2m tree is read-only). It is a valid case
+             * when removing a mapping as it may not exist in the
+             * page table. In this case, just ignore it.
+             */
+            rc = removing_mapping ? 0 : -ENOENT;
+            goto out;
+        }
+        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
+            break;
+    }
+
+    entry = table + offsets[level];
+
+    /*
+     * If we are here with level < target, we must be at a leaf node,
+     * and we need to break up the superpage.
+     */
+    if ( level < target )
+    {
+        /* We need to split the original page. */
+        lpae_t split_pte = *entry;
+
+        ASSERT(p2m_is_superpage(*entry, level));
+
+        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+        {
+            /*
+             * The current super-page is still in-place, so re-increment
+             * the stats.
+             */
+            p2m->stats.mappings[level]++;
+
+            /* Free the allocated sub-tree */
+            p2m_free_entry(p2m, split_pte, level);
+
+            rc = -ENOMEM;
+            goto out;
+        }
+
+        /*
+         * Follow the break-before-make sequence to update the entry.
+         * For more details see (D4.7.1 in ARM DDI 0487A.j).
+         */
+        p2m_remove_pte(entry, p2m->clean_pte);
+        p2m_force_tlb_flush_sync(p2m);
+
+        p2m_write_pte(entry, split_pte, p2m->clean_pte);
+
+        /* then move to the level we want to make real changes */
+        for ( ; level < target; level++ )
+        {
+            rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
+
+            /*
+             * The entry should be found and either be a table
+             * or a superpage if level 3 is not targeted
+             */
+            ASSERT(rc == GUEST_TABLE_NORMAL_PAGE ||
+                   (rc == GUEST_TABLE_SUPER_PAGE && target < 3));
+        }
+
+        entry = table + offsets[level];
+    }
+
+    /*
+     * We should always arrive here with the correct level because
+     * all the intermediate tables have been installed if necessary.
+     */
+    ASSERT(level == target);
+
+    orig_pte = *entry;
+
+    /*
+     * The radix-tree can only work on 4KB. This is only used when
+     * memaccess is enabled and during shutdown.
+     */
+    ASSERT(!p2m->mem_access_enabled || page_order == 0 ||
+           p2m->domain->is_dying);
+    /*
+     * The access type should always be p2m_access_rwx when the mapping
+     * is removed.
+     */
+    ASSERT(!mfn_eq(INVALID_MFN, smfn) || (a == p2m_access_rwx));
+    /*
+     * Update the mem access permission before updating the P2M, so we
+     * don't have to revert the mapping if it has failed.
+     */
+    rc = p2m_mem_access_radix_set(p2m, sgfn, a);
+    if ( rc )
+        goto out;
+
+    /*
+     * Always remove the entry in order to follow the break-before-make
+     * sequence when updating the translation table (D4.7.1 in ARM DDI
+     * 0487A.j).
+     */
+    if ( lpae_is_valid(orig_pte) || removing_mapping )
+        p2m_remove_pte(entry, p2m->clean_pte);
+
+    if ( removing_mapping )
+        /* Flush can be deferred if the entry is removed */
+        p2m->need_flush |= !!lpae_is_valid(orig_pte);
+    else
+    {
+        lpae_t pte = mfn_to_p2m_entry(smfn, t, a);
+
+        if ( level < 3 )
+            pte.p2m.table = 0; /* Superpage entry */
+
+        /*
+         * It is necessary to flush the TLB before writing the new entry
+         * to keep coherency when the previous entry was valid.
+         *
+         * Although, it could be deferred when only the permissions are
+         * changed (e.g. in case of memaccess).
+         */
+        if ( lpae_is_valid(orig_pte) )
+        {
+            if ( likely(!p2m->mem_access_enabled) ||
+                 P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
+                p2m_force_tlb_flush_sync(p2m);
+            else
+                p2m->need_flush = true;
+        }
+        else if ( !p2m_is_valid(orig_pte) ) /* new mapping */
+            p2m->stats.mappings[level]++;
+
+        p2m_write_pte(entry, pte, p2m->clean_pte);
+
+        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
+                                      gfn_add(sgfn, (1UL << page_order) - 1));
+        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
+    }
+
+    if ( is_iommu_enabled(p2m->domain) &&
+         (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
+    {
+        unsigned int flush_flags = 0;
+
+        if ( lpae_is_valid(orig_pte) )
+            flush_flags |= IOMMU_FLUSHF_modified;
+        if ( lpae_is_valid(*entry) )
+            flush_flags |= IOMMU_FLUSHF_added;
+
+        rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
+                               1UL << page_order, flush_flags);
+    }
+    else
+        rc = 0;
+
+    /*
+     * Free the entry only if the original pte was valid and the base
+     * is different (to avoid freeing when permission is changed).
+     */
+    if ( p2m_is_valid(orig_pte) &&
+         !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
+        p2m_free_entry(p2m, orig_pte, level);
+
+out:
+    unmap_domain_page(table);
+
+    return rc;
+}
+
+int p2m_set_entry(struct p2m_domain *p2m,
+                  gfn_t sgfn,
+                  unsigned long nr,
+                  mfn_t smfn,
+                  p2m_type_t t,
+                  p2m_access_t a)
+{
+    int rc = 0;
+
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible after, we need to prevent mappings from being added
+     * when the domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
+    while ( nr )
+    {
+        unsigned long mask;
+        unsigned long order;
+
+        /*
+         * Don't take into account the MFN when removing mapping (i.e.
+         * MFN_INVALID) to calculate the correct target order.
+         *
+         * XXX: Support superpage mappings if nr is not aligned to a
+         * superpage size.
+         */
+        mask = !mfn_eq(smfn, INVALID_MFN) ? mfn_x(smfn) : 0;
+        mask |= gfn_x(sgfn) | nr;
+
+        /* Always map 4k by 4k when memaccess is enabled */
+        if ( unlikely(p2m->mem_access_enabled) )
+            order = THIRD_ORDER;
+        else if ( !(mask & ((1UL << FIRST_ORDER) - 1)) )
+            order = FIRST_ORDER;
+        else if ( !(mask & ((1UL << SECOND_ORDER) - 1)) )
+            order = SECOND_ORDER;
+        else
+            order = THIRD_ORDER;
+
+        rc = __p2m_set_entry(p2m, sgfn, order, smfn, t, a);
+        if ( rc )
+            break;
+
+        sgfn = gfn_add(sgfn, (1 << order));
+        if ( !mfn_eq(smfn, INVALID_MFN) )
+            smfn = mfn_add(smfn, (1 << order));
+
+        nr -= (1 << order);
+    }
+
+    return rc;
+}
+
+/* Invalidate all entries in the table. The p2m should be write locked. */
+static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn)
+{
+    lpae_t *table;
+    unsigned int i;
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    table = map_domain_page(mfn);
+
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+    {
+        lpae_t pte = table[i];
+
+        /*
+         * Writing an entry can be expensive because it may involve
+         * cleaning the cache. So avoid updating the entry if the valid
+         * bit is already cleared.
+         */
+        if ( !pte.p2m.valid )
+            continue;
+
+        pte.p2m.valid = 0;
+
+        p2m_write_pte(&table[i], pte, p2m->clean_pte);
+    }
+
+    unmap_domain_page(table);
+
+    p2m->need_flush = true;
+}
+
+/*
+ * Invalidate all entries in the root page-tables. This is
+ * useful to get a fault on entry and do an action.
+ *
+ * p2m_invalidate_root() should not be called when the P2M is shared with
+ * the IOMMU because it will cause an IOMMU fault.
+ */
+void p2m_invalidate_root(struct p2m_domain *p2m)
+{
+    unsigned int i;
+
+    ASSERT(!iommu_use_hap_pt(p2m->domain));
+
+    p2m_write_lock(p2m);
+
+    for ( i = 0; i < P2M_ROOT_LEVEL; i++ )
+        p2m_invalidate_table(p2m, page_to_mfn(p2m->root + i));
+
+    p2m_write_unlock(p2m);
+}
+
+/*
+ * Resolve any translation fault due to change in the p2m. This
+ * includes break-before-make and valid bit cleared.
+ */
+bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned int level = 0;
+    bool resolved = false;
+    lpae_t entry, *table;
+
+    /* Convenience aliases */
+    DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
+
+    p2m_write_lock(p2m);
+
+    /* This gfn is higher than the highest the p2m map currently holds */
+    if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
+        goto out;
+
+    table = p2m_get_root_pointer(p2m, gfn);
+    /*
+     * The table should always be non-NULL because the gfn is below
+     * p2m->max_mapped_gfn and the root table pages are always present.
+     */
+    if ( !table )
+    {
+        ASSERT_UNREACHABLE();
+        goto out;
+    }
+
+    /*
+     * Go down the page-tables until an entry has the valid bit unset or
+     * a block/page entry has been hit.
+     */
+    for ( level = P2M_ROOT_LEVEL; level <= 3; level++ )
+    {
+        int rc;
+
+        entry = table[offsets[level]];
+
+        if ( level == 3 )
+            break;
+
+        /* Stop as soon as we hit an entry with the valid bit unset. */
+        if ( !lpae_is_valid(entry) )
+            break;
+
+        rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
+        if ( rc == GUEST_TABLE_MAP_FAILED )
+            goto out_unmap;
+        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
+            break;
+    }
+
+    /*
+     * If the valid bit of the entry is set, it means someone was playing with
+     * the Stage-2 page table. Nothing to do and mark the fault as resolved.
+     */
+    if ( lpae_is_valid(entry) )
+    {
+        resolved = true;
+        goto out_unmap;
+    }
+
+    /*
+     * The valid bit is unset. If the entry is still not valid then the fault
+     * cannot be resolved, exit and report it.
+     */
+    if ( !p2m_is_valid(entry) )
+        goto out_unmap;
+
+    /*
+     * Now we have an entry with valid bit unset, but still valid from
+     * the P2M point of view.
+     *
+     * If an entry is pointing to a table, each entry of the table will
+     * have its valid bit cleared. This allows a function to clear the
+     * full p2m with just a couple of writes. The valid bit will then be
+     * propagated on the fault.
+     * If an entry is pointing to a block/page, no work to do for now.
+     */
+    if ( lpae_is_table(entry, level) )
+        p2m_invalidate_table(p2m, lpae_get_mfn(entry));
+
+    /*
+     * Now that the work on the entry is done, set the valid bit to prevent
+     * another fault on that entry.
+     */
+    resolved = true;
+    entry.p2m.valid = 1;
+
+    p2m_write_pte(table + offsets[level], entry, p2m->clean_pte);
+
+    /*
+     * No need to flush the TLBs as the modified entry had the valid bit
+     * unset.
+     */
+
+out_unmap:
+    unmap_domain_page(table);
+
+out:
+    p2m_write_unlock(p2m);
+
+    return resolved;
+}
+
+static struct page_info *p2m_allocate_root(void)
+{
+    struct page_info *page;
+    unsigned int i;
+
+    page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
+    if ( page == NULL )
+        return NULL;
+
+    /* Clear both first level pages */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(page + i);
+
+    return page;
+}
+
+static int p2m_alloc_table(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m->root = p2m_allocate_root();
+    if ( !p2m->root )
+        return -ENOMEM;
+
+    p2m->vttbr = generate_vttbr(p2m->vmid, page_to_mfn(p2m->root));
+
+    /*
+     * Make sure that all TLBs corresponding to the new VMID are flushed
+     * before using it.
+     */
+    p2m_write_lock(p2m);
+    p2m_force_tlb_flush_sync(p2m);
+    p2m_write_unlock(p2m);
+
+    return 0;
+}
+
+int p2m_init(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
+    unsigned int cpu;
+
+    rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
+    INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
+
+    p2m->vmid = INVALID_VMID;
+    p2m->max_mapped_gfn = _gfn(0);
+    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
+
+    p2m->default_access = p2m_access_rwx;
+    p2m->mem_access_enabled = false;
+    radix_tree_init(&p2m->mem_access_settings);
+
+    /*
+     * Some IOMMUs don't support coherent PT walk. When the p2m is
+     * shared with the CPU, Xen has to make sure that the PT changes have
+     * reached the memory.
+     */
+    p2m->clean_pte = is_iommu_enabled(d) &&
+        !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
+
+    /*
+     * Make sure that the type chosen is able to store a vCPU ID
+     * between 0 and the maximum number of virtual CPUs supported, as
+     * well as INVALID_VCPU_ID.
+     */
+    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPUS);
+    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < INVALID_VCPU_ID);
+
+    for_each_possible_cpu(cpu)
+        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
+
+    /*
+     * "Trivial" initialisation is now complete. Set the backpointer so
+     * p2m_teardown() and friends know to do something.
+     */
+    p2m->domain = d;
+
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
+}
+
+/* VTCR value to be configured by all CPUs. Set only once by the boot CPU */
+static register_t __read_mostly vtcr;
+
+void setup_virt_paging_one(void *data)
+{
+    WRITE_SYSREG(vtcr, VTCR_EL2);
+
+    /*
+     * ARM64_WORKAROUND_AT_SPECULATE: We want to keep the TLBs free from
+     * entries related to EL1/EL0 translation regime until a guest vCPU
+     * is running. For that, we need to set-up VTTBR to point to an empty
+     * page-table and turn on stage-2 translation. The TLB entries
+     * associated with EL1/EL0 translation regime will also be flushed in
+     * case an AT instruction was speculated beforehand.
+     */
+    if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) )
+    {
+        WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR_EL2);
+        WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_VM, HCR_EL2);
+        isb();
+
+        flush_all_guests_tlb_local();
+    }
+}
+
+void __init setup_virt_paging(void)
+{
+    /* Setup Stage 2 address translation */
+    register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
+
+    static const struct {
+        unsigned int pabits;     /* Physical Address Size */
+        unsigned int t0sz;       /* Desired T0SZ, minimum in comment */
+        unsigned int root_order; /* Page order of the root of the p2m */
+        unsigned int sl0;        /* Desired SL0, maximum in comment */
+    } pa_range_info[] __initconst = {
+        /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
+        /* PA size, t0sz(min), root-order, sl0(max) */
+#ifdef CONFIG_ARM_64
+        [0] = { 32, 32/*32*/, 0, 1 },
+        [1] = { 36, 28/*28*/, 0, 1 },
+        [2] = { 40, 24/*24*/, 1, 1 },
+        [3] = { 42, 22/*22*/, 3, 1 },
+        [4] = { 44, 20/*20*/, 0, 2 },
+        [5] = { 48, 16/*16*/, 0, 2 },
+        [6] = { 52, 12/*12*/, 4, 2 },
+        [7] = { 0 } /* Invalid */
+#else
+        { 32, 0/*0*/, 0, 1 },
+        { 40, 24/*24*/, 1, 1 }
+#endif
+    };
+
+    unsigned int i;
+    unsigned int pa_range = 0x10; /* Larger than any possible value */
+
+#ifdef CONFIG_ARM_32
+    /*
+     * Typecast pa_range_info[].t0sz into arm32 bit variant.
+     *
+     * VTCR.T0SZ is bits [3:0] and S(sign extension), bit[4] for arm32.
+     * Thus, pa_range_info[].t0sz is translated to its arm32 variant using
+     * struct bitfields.
+     */
+    struct
+    {
+        signed int val:5;
+    } t0sz_32;
+#else
+    /*
+     * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
+     * with IPA bits == PA bits, compare against "pabits".
+     */
+    if ( pa_range_info[system_cpuinfo.mm64.pa_range].pabits < p2m_ipa_bits )
+        p2m_ipa_bits = pa_range_info[system_cpuinfo.mm64.pa_range].pabits;
+
+    /*
+     * cpu info sanitization made sure we support 16bits VMID only if all
+     * cores are supporting it.
+     */
+    if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
+        max_vmid = MAX_VMID_16_BIT;
+#endif
+
+    /* Choose suitable "pa_range" according to the resulting "p2m_ipa_bits". */
+    for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
+    {
+        if ( p2m_ipa_bits == pa_range_info[i].pabits )
+        {
+            pa_range = i;
+            break;
+        }
+    }
+
+    /* Check if we found the associated entry in the array */
+    if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
+        panic("%u-bit P2M is not supported\n", p2m_ipa_bits);
+
+#ifdef CONFIG_ARM_64
+    val |= VTCR_PS(pa_range);
+    val |= VTCR_TG0_4K;
+
+    /* Set the VS bit only if 16 bit VMID is supported. */
+    if ( MAX_VMID == MAX_VMID_16_BIT )
+        val |= VTCR_VS;
+#endif
+
+    val |= VTCR_SL0(pa_range_info[pa_range].sl0);
+    val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
+
+    p2m_root_order = pa_range_info[pa_range].root_order;
+    p2m_root_level = 2 - pa_range_info[pa_range].sl0;
+
+#ifdef CONFIG_ARM_64
+    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
+#else
+    t0sz_32.val = pa_range_info[pa_range].t0sz;
+    p2m_ipa_bits = 32 - t0sz_32.val;
+#endif
+
+    printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n",
+           p2m_ipa_bits,
+           pa_range_info[pa_range].pabits,
+           ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
+
+    printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n",
+           4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
+
+    p2m_vmid_allocator_init();
+
+    /* It is not allowed to concatenate a level zero root */
+    BUG_ON( P2M_ROOT_LEVEL == 0 && P2M_ROOT_ORDER > 0 );
+    vtcr = val;
+
+    /*
+     * ARM64_WORKAROUND_AT_SPECULATE requires to allocate root table
+     * with all entries zeroed.
+ */
+    if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) )
+    {
+        struct page_info *root;
+
+        root = p2m_allocate_root();
+        if ( !root )
+            panic("Unable to allocate root table for ARM64_WORKAROUND_AT_SPECULATE\n");
+
+        empty_root_mfn = page_to_mfn(root);
+    }
+
+    setup_virt_paging_one(NULL);
+    smp_call_function(setup_virt_paging_one, NULL, 1);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index de32a2d638..b2771e0bed 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1,1466 +1,138 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
-#include
 #include
-#include
 #include
 #include
 #include
 
-#include
 #include
 #include
 #include
 #include
 #include
 
-#define MAX_VMID_8_BIT (1UL << 8)
-#define MAX_VMID_16_BIT (1UL << 16)
-
-#define INVALID_VMID 0 /* VMID 0 is reserved */
-
-unsigned int __read_mostly p2m_root_order;
-unsigned int __read_mostly p2m_root_level;
 #ifdef CONFIG_ARM_64
-static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
-/* VMID is by default 8 bit width on AArch64 */
-#define MAX_VMID max_vmid
-#else
-/* VMID is always 8 bit width on AArch32 */
-#define MAX_VMID MAX_VMID_8_BIT
+unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
 #endif
 
-#define P2M_ROOT_PAGES (1<<P2M_ROOT_ORDER)
-        ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
-            d->arch.paging.p2m_total_pages + 1;
-        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
-    }
-    else if ( d->arch.paging.p2m_total_pages > pages )
-    {
-        /* Need to return memory to domheap */
-        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
-        if( pg )
-        {
-            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
-                d->arch.paging.p2m_total_pages - 1;
-            free_domheap_page(pg);
-        }
-        else
-        {
-            printk(XENLOG_ERR
-                   "Failed to free P2M pages, P2M freelist is empty.\n");
-            return -ENOMEM;
-        }
-    }
-    else
-        break;
-
-        /* Check to see if we need to yield and try again */
-        if ( preempted && general_preempt_check() )
-        {
-            *preempted = true;
-            return -ERESTART;
-        }
-    }
-
-    return 0;
-}
-
-int arch_set_paging_mempool_size(struct domain *d, uint64_t size)
-{
-    unsigned long pages = size >> PAGE_SHIFT;
-    bool preempted = false;
-    int rc;
-
-    if ( (size & ~PAGE_MASK) ||          /* Non page-sized request? */
-         pages != (size >> PAGE_SHIFT) ) /* 32-bit overflow? */
-        return -EINVAL;
-
-    spin_lock(&d->arch.paging.lock);
-    rc = p2m_set_allocation(d, pages, &preempted);
-    spin_unlock(&d->arch.paging.lock);
-
-    ASSERT(preempted == (rc == -ERESTART));
-
-    return rc;
-}
-
-int p2m_teardown_allocation(struct domain *d)
-{
-    int ret = 0;
-    bool preempted = false;
-
-    spin_lock(&d->arch.paging.lock);
-    if ( d->arch.paging.p2m_total_pages != 0 )
-    {
-        ret = p2m_set_allocation(d, 0, &preempted);
-        if ( preempted )
-        {
-            spin_unlock(&d->arch.paging.lock);
-            return -ERESTART;
-        }
-        ASSERT(d->arch.paging.p2m_total_pages == 0);
-    }
-    spin_unlock(&d->arch.paging.lock);
-
-    return ret;
-}
-
-/* Unlock the flush and do a P2M TLB flush if necessary */
-void p2m_write_unlock(struct p2m_domain *p2m)
-{
-    /*
-     * The final flush is done with the P2M write lock taken to avoid
-     * someone else modifying the P2M wbefore the TLB invalidation has
-     * completed.
-     */
-    p2m_tlb_flush_sync(p2m);
-
-    write_unlock(&p2m->lock);
-}
-
-void p2m_dump_info(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    p2m_read_lock(p2m);
-    printk("p2m mappings for domain %d (vmid %d):\n",
-           d->domain_id, p2m->vmid);
-    BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]);
-    printk("  1G mappings: %ld (shattered %ld)\n",
-           p2m->stats.mappings[1], p2m->stats.shattered[1]);
-    printk("  2M mappings: %ld (shattered %ld)\n",
-           p2m->stats.mappings[2], p2m->stats.shattered[2]);
-    printk("  4K mappings: %ld\n", p2m->stats.mappings[3]);
-    p2m_read_unlock(p2m);
-}
-
-void memory_type_changed(struct domain *d)
-{
-}
-
-void dump_p2m_lookup(struct domain *d, paddr_t addr)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
-
-    printk("P2M @ %p mfn:%#"PRI_mfn"\n",
-           p2m->root, mfn_x(page_to_mfn(p2m->root)));
-
-    dump_pt_walk(page_to_maddr(p2m->root), addr,
-                 P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
-}
-
-/*
- * p2m_save_state and p2m_restore_state work in pair to workaround
- * ARM64_WORKAROUND_AT_SPECULATE. p2m_save_state will set-up VTTBR to
- * point to the empty page-tables to stop allocating TLB entries.
- */
-void p2m_save_state(struct vcpu *p)
-{
-    p->arch.sctlr = READ_SYSREG(SCTLR_EL1);
-
-    if ( cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) )
-    {
-        WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR_EL2);
-        /*
-         * Ensure VTTBR_EL2 is correctly synchronized so we can restore
-         * the next vCPU context without worrying about AT instruction
-         * speculation.
-         */
-        isb();
-    }
-}
-
-void p2m_restore_state(struct vcpu *n)
-{
-    struct p2m_domain *p2m = p2m_get_hostp2m(n->domain);
-    uint8_t *last_vcpu_ran;
-
-    if ( is_idle_vcpu(n) )
-        return;
-
-    WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1);
-    WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2);
-
-    /*
-     * ARM64_WORKAROUND_AT_SPECULATE: VTTBR_EL2 should be restored after all
-     * registers associated to EL1/EL0 translations regime have been
-     * synchronized.
-     */
-    asm volatile(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_AT_SPECULATE));
-    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
-
-    last_vcpu_ran = &p2m->last_vcpu_ran[smp_processor_id()];
-
-    /*
-     * While we are restoring an out-of-context translation regime
-     * we still need to ensure:
-     *  - VTTBR_EL2 is synchronized before flushing the TLBs
-     *  - All registers for EL1 are synchronized before executing an AT
-     *    instructions targeting S1/S2.
-     */
-    isb();
-
-    /*
-     * Flush local TLB for the domain to prevent wrong TLB translation
-     * when running multiple vCPU of the same domain on a single pCPU.
-     */
-    if ( *last_vcpu_ran != INVALID_VCPU_ID && *last_vcpu_ran != n->vcpu_id )
-        flush_guest_tlb_local();
-
-    *last_vcpu_ran = n->vcpu_id;
-}
-
-/*
- * Force a synchronous P2M TLB flush.
- *
- * Must be called with the p2m lock held.
- */
-static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
-{
-    unsigned long flags = 0;
-    uint64_t ovttbr;
-
-    ASSERT(p2m_is_write_locked(p2m));
-
-    /*
-     * ARM only provides an instruction to flush TLBs for the current
-     * VMID. So switch to the VTTBR of a given P2M if different.
-     */
-    ovttbr = READ_SYSREG64(VTTBR_EL2);
-    if ( ovttbr != p2m->vttbr )
-    {
-        uint64_t vttbr;
-
-        local_irq_save(flags);
-
-        /*
-         * ARM64_WORKAROUND_AT_SPECULATE: We need to stop AT to allocate
-         * TLBs entries because the context is partially modified. We
-         * only need the VMID for flushing the TLBs, so we can generate
-         * a new VTTBR with the VMID to flush and the empty root table.
-         */
-        if ( !cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) )
-            vttbr = p2m->vttbr;
-        else
-            vttbr = generate_vttbr(p2m->vmid, empty_root_mfn);
-
-        WRITE_SYSREG64(vttbr, VTTBR_EL2);
-
-        /* Ensure VTTBR_EL2 is synchronized before flushing the TLBs */
-        isb();
-    }
-
-    flush_guest_tlb();
-
-    if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
-    {
-        WRITE_SYSREG64(ovttbr, VTTBR_EL2);
-        /* Ensure VTTBR_EL2 is back in place before continuing. */
-        isb();
-        local_irq_restore(flags);
-    }
-
-    p2m->need_flush = false;
-}
-
-void p2m_tlb_flush_sync(struct p2m_domain *p2m)
-{
-    if ( p2m->need_flush )
-        p2m_force_tlb_flush_sync(p2m);
-}
-
-/*
- * Find and map the root page table. The caller is responsible for
- * unmapping the table.
- *
- * The function will return NULL if the offset of the root table is
- * invalid.
- */
-static lpae_t *p2m_get_root_pointer(struct p2m_domain *p2m,
-                                    gfn_t gfn)
-{
-    unsigned long root_table;
-
-    /*
-     * While the root table index is the offset from the previous level,
-     * we can't use (P2M_ROOT_LEVEL - 1) because the root level might be
-     * 0. Yet we still want to check if all the unused bits are zeroed.
-     */
-    root_table = gfn_x(gfn) >> (XEN_PT_LEVEL_ORDER(P2M_ROOT_LEVEL) +
-                                XEN_PT_LPAE_SHIFT);
-    if ( root_table >= P2M_ROOT_PAGES )
-        return NULL;
-
-    return __map_domain_page(p2m->root + root_table);
-}
-
-/*
- * Lookup the MFN corresponding to a domain's GFN.
- * Lookup mem access in the ratrix tree.
- * The entries associated to the GFN is considered valid.
- */
-static p2m_access_t p2m_mem_access_radix_get(struct p2m_domain *p2m, gfn_t gfn)
-{
-    void *ptr;
-
-    if ( !p2m->mem_access_enabled )
-        return p2m->default_access;
-
-    ptr = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
-    if ( !ptr )
-        return p2m_access_rwx;
-    else
-        return radix_tree_ptr_to_int(ptr);
-}
-
-/*
- * In the case of the P2M, the valid bit is used for other purpose. Use
- * the type to check whether an entry is valid.
- */
-static inline bool p2m_is_valid(lpae_t pte)
-{
-    return pte.p2m.type != p2m_invalid;
-}
-
-/*
- * lpae_is_* helpers don't check whether the valid bit is set in the
- * PTE. Provide our own overlay to check the valid bit.
- */
-static inline bool p2m_is_mapping(lpae_t pte, unsigned int level)
-{
-    return p2m_is_valid(pte) && lpae_is_mapping(pte, level);
-}
-
-static inline bool p2m_is_superpage(lpae_t pte, unsigned int level)
-{
-    return p2m_is_valid(pte) && lpae_is_superpage(pte, level);
-}
-
-#define GUEST_TABLE_MAP_FAILED 0
-#define GUEST_TABLE_SUPER_PAGE 1
-#define GUEST_TABLE_NORMAL_PAGE 2
-
-static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry);
-
-/*
- * Take the currently mapped table, find the corresponding GFN entry,
- * and map the next table, if available. The previous table will be
- * unmapped if the next level was mapped (e.g GUEST_TABLE_NORMAL_PAGE
- * returned).
- *
- * The read_only parameters indicates whether intermediate tables should
- * be allocated when not present.
- *
- * Return values:
- *  GUEST_TABLE_MAP_FAILED: Either read_only was set and the entry
- *  was empty, or allocating a new page failed.
- *  GUEST_TABLE_NORMAL_PAGE: next level mapped normally
- *  GUEST_TABLE_SUPER_PAGE: The next entry points to a superpage.
- */
-static int p2m_next_level(struct p2m_domain *p2m, bool read_only,
-                          unsigned int level, lpae_t **table,
-                          unsigned int offset)
-{
-    lpae_t *entry;
-    int ret;
-    mfn_t mfn;
-
-    entry = *table + offset;
-
-    if ( !p2m_is_valid(*entry) )
-    {
-        if ( read_only )
-            return GUEST_TABLE_MAP_FAILED;
-
-        ret = p2m_create_table(p2m, entry);
-        if ( ret )
-            return GUEST_TABLE_MAP_FAILED;
-    }
-
-    /* The function p2m_next_level is never called at the 3rd level */
-    ASSERT(level < 3);
-    if ( p2m_is_mapping(*entry, level) )
-        return GUEST_TABLE_SUPER_PAGE;
-
-    mfn = lpae_get_mfn(*entry);
-
-    unmap_domain_page(*table);
-    *table = map_domain_page(mfn);
-
-    return GUEST_TABLE_NORMAL_PAGE;
-}
-
-/*
- * Get the details of a given gfn.
- *
- * If the entry is present, the associated MFN will be returned and the
- * access and type filled up. The page_order will correspond to the
- * order of the mapping in the page table (i.e it could be a superpage).
- *
- * If the entry is not present, INVALID_MFN will be returned and the
- * page_order will be set according to the order of the invalid range.
- *
- * valid will contain the value of bit[0] (e.g valid bit) of the
- * entry.
- */
-mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
-                    p2m_type_t *t, p2m_access_t *a,
-                    unsigned int *page_order,
-                    bool *valid)
-{
-    paddr_t addr = gfn_to_gaddr(gfn);
-    unsigned int level = 0;
-    lpae_t entry, *table;
-    int rc;
-    mfn_t mfn = INVALID_MFN;
-    p2m_type_t _t;
-    DECLARE_OFFSETS(offsets, addr);
-
-    ASSERT(p2m_is_locked(p2m));
-    BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
-
-    /* Allow t to be NULL */
-    t = t ?: &_t;
-
-    *t = p2m_invalid;
-
-    if ( valid )
-        *valid = false;
-
-    /* XXX: Check if the mapping is lower than the mapped gfn */
-
-    /* This gfn is higher than the highest the p2m map currently holds */
-    if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
-    {
-        for ( level = P2M_ROOT_LEVEL; level < 3; level++ )
-            if ( (gfn_x(gfn) & (XEN_PT_LEVEL_MASK(level) >> PAGE_SHIFT)) >
-                 gfn_x(p2m->max_mapped_gfn) )
-                break;
-
-        goto out;
-    }
-
-    table = p2m_get_root_pointer(p2m, gfn);
-
-    /*
-     * the table should always be non-NULL because the gfn is below
-     * p2m->max_mapped_gfn and the root table pages are always present.
-     */
-    if ( !table )
-    {
-        ASSERT_UNREACHABLE();
-        level = P2M_ROOT_LEVEL;
-        goto out;
-    }
-
-    for ( level = P2M_ROOT_LEVEL; level < 3; level++ )
-    {
-        rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
-        if ( rc == GUEST_TABLE_MAP_FAILED )
-            goto out_unmap;
-        else if ( rc != GUEST_TABLE_NORMAL_PAGE )
-            break;
-    }
-
-    entry = table[offsets[level]];
-
-    if ( p2m_is_valid(entry) )
-    {
-        *t = entry.p2m.type;
-
-        if ( a )
-            *a = p2m_mem_access_radix_get(p2m, gfn);
-
-        mfn = lpae_get_mfn(entry);
-        /*
-         * The entry may point to a superpage. Find the MFN associated
-         * to the GFN.
-         */
-        mfn = mfn_add(mfn,
-                      gfn_x(gfn) & ((1UL << XEN_PT_LEVEL_ORDER(level)) - 1));
-
-        if ( valid )
-            *valid = lpae_is_valid(entry);
-    }
-
-out_unmap:
-    unmap_domain_page(table);
-
-out:
-    if ( page_order )
-        *page_order = XEN_PT_LEVEL_ORDER(level);
-
-    return mfn;
-}
-
-mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
-{
-    mfn_t mfn;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    p2m_read_lock(p2m);
-    mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL, NULL);
-    p2m_read_unlock(p2m);
-
-    return mfn;
-}
-
-struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
-                                        p2m_type_t *t)
-{
-    struct page_info *page;
-    p2m_type_t p2mt;
-    mfn_t mfn = p2m_lookup(d, gfn, &p2mt);
-
-    if ( t )
-        *t = p2mt;
-
-    if ( !p2m_is_any_ram(p2mt) )
-        return NULL;
-
-    if ( !mfn_valid(mfn) )
-        return NULL;
-
-    page = mfn_to_page(mfn);
-
-    /*
-     * get_page won't work on foreign mapping because the page doesn't
-     * belong to the current domain.
-     */
-    if ( p2m_is_foreign(p2mt) )
-    {
-        struct domain *fdom = page_get_owner_and_reference(page);
-        ASSERT(fdom != NULL);
-        ASSERT(fdom != d);
-        return page;
-    }
-
-    return get_page(page, d) ? page : NULL;
-}
-
-int guest_physmap_mark_populate_on_demand(struct domain *d,
-                                          unsigned long gfn,
-                                          unsigned int order)
-{
-    return -ENOSYS;
-}
-
-unsigned long p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn,
-                                           unsigned int order)
-{
-    return 0;
-}
-
-static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
-{
-    /* First apply type permissions */
-    switch ( t )
-    {
-    case p2m_ram_rw:
-        e->p2m.xn = 0;
-        e->p2m.write = 1;
-        break;
-
-    case p2m_ram_ro:
-        e->p2m.xn = 0;
-        e->p2m.write = 0;
-        break;
-
-    case p2m_iommu_map_rw:
-    case p2m_map_foreign_rw:
-    case p2m_grant_map_rw:
-    case p2m_mmio_direct_dev:
-    case p2m_mmio_direct_nc:
-    case p2m_mmio_direct_c:
-        e->p2m.xn = 1;
-        e->p2m.write = 1;
-        break;
-
-    case p2m_iommu_map_ro:
-    case p2m_map_foreign_ro:
-    case p2m_grant_map_ro:
-    case p2m_invalid:
-        e->p2m.xn = 1;
-        e->p2m.write = 0;
-        break;
-
-    case p2m_max_real_type:
-        BUG();
-        break;
-    }
-
-    /* Then restrict with access permissions */
-    switch ( a )
-    {
-    case p2m_access_rwx:
-        break;
-    case p2m_access_wx:
-        e->p2m.read = 0;
-        break;
-    case p2m_access_rw:
-        e->p2m.xn = 1;
-        break;
-    case p2m_access_w:
-        e->p2m.read = 0;
-        e->p2m.xn = 1;
-        break;
-    case p2m_access_rx:
-    case p2m_access_rx2rw:
-        e->p2m.write = 0;
-        break;
-    case p2m_access_x:
-        e->p2m.write = 0;
-        e->p2m.read = 0;
-        break;
-    case p2m_access_r:
-        e->p2m.write = 0;
-        e->p2m.xn = 1;
-        break;
-    case p2m_access_n:
-    case p2m_access_n2rwx:
-        e->p2m.read = e->p2m.write = 0;
-        e->p2m.xn = 1;
-        break;
-    }
-}
-
-static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a)
-{
-    /*
-     * sh, xn and write bit will be defined in the following switches
-     * based on mattr and t.
-     */
-    lpae_t e = (lpae_t) {
-        .p2m.af = 1,
-        .p2m.read = 1,
-        .p2m.table = 1,
-        .p2m.valid = 1,
-        .p2m.type = t,
-    };
-
-    BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
-
-    switch ( t )
-    {
-    case p2m_mmio_direct_dev:
-        e.p2m.mattr = MATTR_DEV;
-        e.p2m.sh = LPAE_SH_OUTER;
-        break;
-
-    case p2m_mmio_direct_c:
-        e.p2m.mattr = MATTR_MEM;
-        e.p2m.sh = LPAE_SH_OUTER;
-        break;
-
-    /*
-     * ARM ARM: Overlaying the shareability attribute (DDI
-     * 0406C.b B3-1376 to 1377)
-     *
-     * A memory region with a resultant memory type attribute of Normal,
-     * and a resultant cacheability attribute of Inner Non-cacheable,
-     * Outer Non-cacheable, must have a resultant shareability attribute
-     * of Outer Shareable, otherwise shareability is UNPREDICTABLE.
-     *
-     * On ARMv8 shareability is ignored and explicitly treated as Outer
-     * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable.
-     * See the note for table D4-40, in page 1788 of the ARM DDI 0487A.j.
-     */
-    case p2m_mmio_direct_nc:
-        e.p2m.mattr = MATTR_MEM_NC;
-        e.p2m.sh = LPAE_SH_OUTER;
-        break;
-
-    default:
-        e.p2m.mattr = MATTR_MEM;
-        e.p2m.sh = LPAE_SH_INNER;
-    }
-
-    p2m_set_permission(&e, t, a);
-
-    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK));
-
-    lpae_set_mfn(e, mfn);
-
-    return e;
-}
-
-/* Generate table entry with correct attributes. */
-static lpae_t page_to_p2m_table(struct page_info *page)
-{
-    /*
-     * The access value does not matter because the hardware will ignore
-     * the permission fields for table entry.
-     *
-     * We use p2m_ram_rw so the entry has a valid type. This is important
-     * for p2m_is_valid() to return valid on table entries.
-     */
-    return mfn_to_p2m_entry(page_to_mfn(page), p2m_ram_rw, p2m_access_rwx);
-}
-
-static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool clean_pte)
-{
-    write_pte(p, pte);
-    if ( clean_pte )
-        clean_dcache(*p);
-}
-
-static inline void p2m_remove_pte(lpae_t *p, bool clean_pte)
-{
-    lpae_t pte;
-
-    memset(&pte, 0x00, sizeof(pte));
-    p2m_write_pte(p, pte, clean_pte);
-}
-
-/* Allocate a new page table page and hook it in via the given entry. */
-static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
-{
-    struct page_info *page;
-    lpae_t *p;
-
-    ASSERT(!p2m_is_valid(*entry));
-
-    page = p2m_alloc_page(p2m->domain);
-    if ( page == NULL )
-        return -ENOMEM;
-
-    page_list_add(page, &p2m->pages);
-
-    p = __map_domain_page(page);
-    clear_page(p);
-
-    if ( p2m->clean_pte )
-        clean_dcache_va_range(p, PAGE_SIZE);
-
-    unmap_domain_page(p);
-
-    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte);
-
-    return 0;
-}
-
-static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
-                                    p2m_access_t a)
-{
-    int rc;
-
-    if ( !p2m->mem_access_enabled )
-        return 0;
-
-    if ( p2m_access_rwx == a )
-    {
-        radix_tree_delete(&p2m->mem_access_settings, gfn_x(gfn));
-        return 0;
-    }
-
-    rc = radix_tree_insert(&p2m->mem_access_settings, gfn_x(gfn),
-                           radix_tree_int_to_ptr(a));
-    if ( rc == -EEXIST )
-    {
-        /* If a setting already exists, change it to the new one */
-        radix_tree_replace_slot(
-            radix_tree_lookup_slot(
-                &p2m->mem_access_settings, gfn_x(gfn)),
-            radix_tree_int_to_ptr(a));
-        rc = 0;
-    }
-
-    return rc;
-}
-
-/*
- * Put any references on the single 4K page referenced by pte.
- * TODO: Handle superpages, for now we only take special references for leaf
- * pages (specifically foreign ones, which can't be super mapped today).
- */
-static void p2m_put_l3_page(const lpae_t pte)
-{
-    mfn_t mfn = lpae_get_mfn(pte);
-
-    ASSERT(p2m_is_valid(pte));
-
-    /*
-     * TODO: Handle other p2m types
-     *
-     * It's safe to do the put_page here because page_alloc will
-     * flush the TLBs if the page is reallocated before the end of
-     * this loop.
-     */
-    if ( p2m_is_foreign(pte.p2m.type) )
-    {
-        ASSERT(mfn_valid(mfn));
-        put_page(mfn_to_page(mfn));
-    }
-    /* Detect the xenheap page and mark the stored GFN as invalid. */
-    else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) )
-        page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN);
-}
-
-/* Free lpae sub-tree behind an entry */
-static void p2m_free_entry(struct p2m_domain *p2m,
-                           lpae_t entry, unsigned int level)
-{
-    unsigned int i;
-    lpae_t *table;
-    mfn_t mfn;
-    struct page_info *pg;
-
-    /* Nothing to do if the entry is invalid. */
-    if ( !p2m_is_valid(entry) )
-        return;
-
-    if ( p2m_is_superpage(entry, level) || (level == 3) )
-    {
-#ifdef CONFIG_IOREQ_SERVER
-        /*
-         * If this gets called then either the entry was replaced by an entry
-         * with a different base (valid case) or the shattering of a superpage
-         * has failed (error case).
-         * So, at worst, the spurious mapcache invalidation might be sent.
-         */
-        if ( p2m_is_ram(entry.p2m.type) &&
-             domain_has_ioreq_server(p2m->domain) )
-            ioreq_request_mapcache_invalidate(p2m->domain);
-#endif
-
-        p2m->stats.mappings[level]--;
-        /* Nothing to do if the entry is a super-page. */
-        if ( level == 3 )
-            p2m_put_l3_page(entry);
-        return;
-    }
-
-    table = map_domain_page(lpae_get_mfn(entry));
-    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
-        p2m_free_entry(p2m, *(table + i), level + 1);
-
-    unmap_domain_page(table);
-
-    /*
-     * Make sure all the references in the TLB have been removed before
-     * freing the intermediate page table.
-     * XXX: Should we defer the free of the page table to avoid the
-     * flush?
- */ - p2m_tlb_flush_sync(p2m); - - mfn =3D lpae_get_mfn(entry); - ASSERT(mfn_valid(mfn)); - - pg =3D mfn_to_page(mfn); - - page_list_del(pg, &p2m->pages); - p2m_free_page(p2m->domain, pg); -} - -static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry, - unsigned int level, unsigned int target, - const unsigned int *offsets) -{ - struct page_info *page; - unsigned int i; - lpae_t pte, *table; - bool rv =3D true; - - /* Convenience aliases */ - mfn_t mfn =3D lpae_get_mfn(*entry); - unsigned int next_level =3D level + 1; - unsigned int level_order =3D XEN_PT_LEVEL_ORDER(next_level); - - /* - * This should only be called with target !=3D level and the entry is - * a superpage. - */ - ASSERT(level < target); - ASSERT(p2m_is_superpage(*entry, level)); - - page =3D p2m_alloc_page(p2m->domain); - if ( !page ) - return false; - - page_list_add(page, &p2m->pages); - table =3D __map_domain_page(page); - - /* - * We are either splitting a first level 1G page into 512 second level - * 2M pages, or a second level 2M page into 512 third level 4K pages. - */ - for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) - { - lpae_t *new_entry =3D table + i; - - /* - * Use the content of the superpage entry and override - * the necessary fields. So the correct permission are kept. - */ - pte =3D *entry; - lpae_set_mfn(pte, mfn_add(mfn, i << level_order)); - - /* - * First and second level pages set p2m.table =3D 0, but third - * level entries set p2m.table =3D 1. - */ - pte.p2m.table =3D (next_level =3D=3D 3); - - write_pte(new_entry, pte); - } - - /* Update stats */ - p2m->stats.shattered[level]++; - p2m->stats.mappings[level]--; - p2m->stats.mappings[next_level] +=3D XEN_PT_LPAE_ENTRIES; - - /* - * Shatter superpage in the page to the level we want to make the - * changes. - * This is done outside the loop to avoid checking the offset to - * know whether the entry should be shattered for every entry. 
- */ - if ( next_level !=3D target ) - rv =3D p2m_split_superpage(p2m, table + offsets[next_level], - level + 1, target, offsets); - - if ( p2m->clean_pte ) - clean_dcache_va_range(table, PAGE_SIZE); - - unmap_domain_page(table); - - /* - * Even if we failed, we should install the newly allocated LPAE - * entry. The caller will be in charge to free the sub-tree. - */ - p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte); - - return rv; -} - -/* - * Insert an entry in the p2m. This should be called with a mapping - * equal to a page/superpage (4K, 2M, 1G). - */ -static int __p2m_set_entry(struct p2m_domain *p2m, - gfn_t sgfn, - unsigned int page_order, - mfn_t smfn, - p2m_type_t t, - p2m_access_t a) -{ - unsigned int level =3D 0; - unsigned int target =3D 3 - (page_order / XEN_PT_LPAE_SHIFT); - lpae_t *entry, *table, orig_pte; - int rc; - /* A mapping is removed if the MFN is invalid. */ - bool removing_mapping =3D mfn_eq(smfn, INVALID_MFN); - DECLARE_OFFSETS(offsets, gfn_to_gaddr(sgfn)); - - ASSERT(p2m_is_write_locked(p2m)); - - /* - * Check if the level target is valid: we only support - * 4K - 2M - 1G mapping. - */ - ASSERT(target > 0 && target <=3D 3); - - table =3D p2m_get_root_pointer(p2m, sgfn); - if ( !table ) - return -EINVAL; - - for ( level =3D P2M_ROOT_LEVEL; level < target; level++ ) - { - /* - * Don't try to allocate intermediate page table if the mapping - * is about to be removed. - */ - rc =3D p2m_next_level(p2m, removing_mapping, - level, &table, offsets[level]); - if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) - { - /* - * We are here because p2m_next_level has failed to map - * the intermediate page table (e.g the table does not exist - * and they p2m tree is read-only). It is a valid case - * when removing a mapping as it may not exist in the - * page table. In this case, just ignore it. - */ - rc =3D removing_mapping ? 
0 : -ENOENT; - goto out; - } - else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) - break; - } - - entry =3D table + offsets[level]; - - /* - * If we are here with level < target, we must be at a leaf node, - * and we need to break up the superpage. - */ - if ( level < target ) - { - /* We need to split the original page. */ - lpae_t split_pte =3D *entry; - - ASSERT(p2m_is_superpage(*entry, level)); - - if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets)= ) - { - /* - * The current super-page is still in-place, so re-increment - * the stats. - */ - p2m->stats.mappings[level]++; - - /* Free the allocated sub-tree */ - p2m_free_entry(p2m, split_pte, level); - - rc =3D -ENOMEM; - goto out; - } - - /* - * Follow the break-before-sequence to update the entry. - * For more details see (D4.7.1 in ARM DDI 0487A.j). - */ - p2m_remove_pte(entry, p2m->clean_pte); - p2m_force_tlb_flush_sync(p2m); - - p2m_write_pte(entry, split_pte, p2m->clean_pte); - - /* then move to the level we want to make real changes */ - for ( ; level < target; level++ ) - { - rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]= ); - - /* - * The entry should be found and either be a table - * or a superpage if level 3 is not targeted - */ - ASSERT(rc =3D=3D GUEST_TABLE_NORMAL_PAGE || - (rc =3D=3D GUEST_TABLE_SUPER_PAGE && target < 3)); - } - - entry =3D table + offsets[level]; - } - - /* - * We should always be there with the correct level because - * all the intermediate tables have been installed if necessary. - */ - ASSERT(level =3D=3D target); - - orig_pte =3D *entry; - - /* - * The radix-tree can only work on 4KB. This is only used when - * memaccess is enabled and during shutdown. - */ - ASSERT(!p2m->mem_access_enabled || page_order =3D=3D 0 || - p2m->domain->is_dying); - /* - * The access type should always be p2m_access_rwx when the mapping - * is removed. 
- */ - ASSERT(!mfn_eq(INVALID_MFN, smfn) || (a =3D=3D p2m_access_rwx)); - /* - * Update the mem access permission before update the P2M. So we - * don't have to revert the mapping if it has failed. - */ - rc =3D p2m_mem_access_radix_set(p2m, sgfn, a); - if ( rc ) - goto out; - - /* - * Always remove the entry in order to follow the break-before-make - * sequence when updating the translation table (D4.7.1 in ARM DDI - * 0487A.j). - */ - if ( lpae_is_valid(orig_pte) || removing_mapping ) - p2m_remove_pte(entry, p2m->clean_pte); - - if ( removing_mapping ) - /* Flush can be deferred if the entry is removed */ - p2m->need_flush |=3D !!lpae_is_valid(orig_pte); - else - { - lpae_t pte =3D mfn_to_p2m_entry(smfn, t, a); - - if ( level < 3 ) - pte.p2m.table =3D 0; /* Superpage entry */ - - /* - * It is necessary to flush the TLB before writing the new entry - * to keep coherency when the previous entry was valid. - * - * Although, it could be defered when only the permissions are - * changed (e.g in case of memaccess). 
- */ - if ( lpae_is_valid(orig_pte) ) - { - if ( likely(!p2m->mem_access_enabled) || - P2M_CLEAR_PERM(pte) !=3D P2M_CLEAR_PERM(orig_pte) ) - p2m_force_tlb_flush_sync(p2m); - else - p2m->need_flush =3D true; - } - else if ( !p2m_is_valid(orig_pte) ) /* new mapping */ - p2m->stats.mappings[level]++; - - p2m_write_pte(entry, pte, p2m->clean_pte); - - p2m->max_mapped_gfn =3D gfn_max(p2m->max_mapped_gfn, - gfn_add(sgfn, (1UL << page_order) - = 1)); - p2m->lowest_mapped_gfn =3D gfn_min(p2m->lowest_mapped_gfn, sgfn); - } - - if ( is_iommu_enabled(p2m->domain) && - (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) ) - { - unsigned int flush_flags =3D 0; - - if ( lpae_is_valid(orig_pte) ) - flush_flags |=3D IOMMU_FLUSHF_modified; - if ( lpae_is_valid(*entry) ) - flush_flags |=3D IOMMU_FLUSHF_added; - - rc =3D iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)), - 1UL << page_order, flush_flags); - } - else - rc =3D 0; - - /* - * Free the entry only if the original pte was valid and the base - * is different (to avoid freeing when permission is changed). - */ - if ( p2m_is_valid(orig_pte) && - !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) ) - p2m_free_entry(p2m, orig_pte, level); - -out: - unmap_domain_page(table); - - return rc; -} - -int p2m_set_entry(struct p2m_domain *p2m, - gfn_t sgfn, - unsigned long nr, - mfn_t smfn, - p2m_type_t t, - p2m_access_t a) -{ - int rc =3D 0; - - /* - * Any reference taken by the P2M mappings (e.g. foreign mapping) will - * be dropped in relinquish_p2m_mapping(). As the P2M will still - * be accessible after, we need to prevent mapping to be added when the - * domain is dying. - */ - if ( unlikely(p2m->domain->is_dying) ) - return -ENOMEM; - - while ( nr ) - { - unsigned long mask; - unsigned long order; - - /* - * Don't take into account the MFN when removing mapping (i.e - * MFN_INVALID) to calculate the correct target order. - * - * XXX: Support superpage mappings if nr is not aligned to a - * superpage size. 
- */ - mask =3D !mfn_eq(smfn, INVALID_MFN) ? mfn_x(smfn) : 0; - mask |=3D gfn_x(sgfn) | nr; - - /* Always map 4k by 4k when memaccess is enabled */ - if ( unlikely(p2m->mem_access_enabled) ) - order =3D THIRD_ORDER; - else if ( !(mask & ((1UL << FIRST_ORDER) - 1)) ) - order =3D FIRST_ORDER; - else if ( !(mask & ((1UL << SECOND_ORDER) - 1)) ) - order =3D SECOND_ORDER; - else - order =3D THIRD_ORDER; - - rc =3D __p2m_set_entry(p2m, sgfn, order, smfn, t, a); - if ( rc ) - break; - - sgfn =3D gfn_add(sgfn, (1 << order)); - if ( !mfn_eq(smfn, INVALID_MFN) ) - smfn =3D mfn_add(smfn, (1 << order)); - - nr -=3D (1 << order); - } - - return rc; -} - -/* Invalidate all entries in the table. The p2m should be write locked. */ -static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn) -{ - lpae_t *table; - unsigned int i; - - ASSERT(p2m_is_write_locked(p2m)); - - table =3D map_domain_page(mfn); - - for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) - { - lpae_t pte =3D table[i]; - - /* - * Writing an entry can be expensive because it may involve - * cleaning the cache. So avoid updating the entry if the valid - * bit is already cleared. - */ - if ( !pte.p2m.valid ) - continue; - - pte.p2m.valid =3D 0; - - p2m_write_pte(&table[i], pte, p2m->clean_pte); - } - - unmap_domain_page(table); - - p2m->need_flush =3D true; -} +#define P2M_ROOT_PAGES (1<root + i); +unsigned int __read_mostly p2m_ipa_bits =3D PADDR_BITS; =20 - p2m_force_tlb_flush_sync(p2m); +/* Unlock the flush and do a P2M TLB flush if necessary */ +void p2m_write_unlock(struct p2m_domain *p2m) +{ +#ifdef CONFIG_HAS_MMU + /* + * The final flush is done with the P2M write lock taken to avoid + * someone else modifying the P2M wbefore the TLB invalidation has + * completed. + */ + p2m_tlb_flush_sync(p2m); +#endif =20 - p2m_write_unlock(p2m); + write_unlock(&p2m->lock); } =20 -/* - * Invalidate all entries in the root page-tables. This is - * useful to get fault on entry and do an action. 
- * - * p2m_invalid_root() should not be called when the P2M is shared with - * the IOMMU because it will cause IOMMU fault. - */ -void p2m_invalidate_root(struct p2m_domain *p2m) +void memory_type_changed(struct domain *d) { - unsigned int i; +} =20 - ASSERT(!iommu_use_hap_pt(p2m->domain)); +void dump_p2m_lookup(struct domain *d, paddr_t addr) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); =20 - p2m_write_lock(p2m); + printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr); =20 - for ( i =3D 0; i < P2M_ROOT_LEVEL; i++ ) - p2m_invalidate_table(p2m, page_to_mfn(p2m->root + i)); + printk("P2M @ %p mfn:%#"PRI_mfn"\n", + p2m->root, mfn_x(page_to_mfn(p2m->root))); =20 - p2m_write_unlock(p2m); + dump_pt_walk(page_to_maddr(p2m->root), addr, + P2M_ROOT_LEVEL, P2M_ROOT_PAGES); } =20 -/* - * Resolve any translation fault due to change in the p2m. This - * includes break-before-make and valid bit cleared. - */ -bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn) +mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t) { + mfn_t mfn; struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - unsigned int level =3D 0; - bool resolved =3D false; - lpae_t entry, *table; - - /* Convenience aliases */ - DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn)); - - p2m_write_lock(p2m); =20 - /* This gfn is higher than the highest the p2m map currently holds */ - if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) ) - goto out; + p2m_read_lock(p2m); + mfn =3D p2m_get_entry(p2m, gfn, t, NULL, NULL, NULL); + p2m_read_unlock(p2m); =20 - table =3D p2m_get_root_pointer(p2m, gfn); - /* - * The table should always be non-NULL because the gfn is below - * p2m->max_mapped_gfn and the root table pages are always present. - */ - if ( !table ) - { - ASSERT_UNREACHABLE(); - goto out; - } + return mfn; +} =20 - /* - * Go down the page-tables until an entry has the valid bit unset or - * a block/page entry has been hit. 
- */ - for ( level =3D P2M_ROOT_LEVEL; level <=3D 3; level++ ) - { - int rc; +struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn, + p2m_type_t *t) +{ + struct page_info *page; + p2m_type_t p2mt; + mfn_t mfn =3D p2m_lookup(d, gfn, &p2mt); =20 - entry =3D table[offsets[level]]; + if ( t ) + *t =3D p2mt; =20 - if ( level =3D=3D 3 ) - break; + if ( !p2m_is_any_ram(p2mt) ) + return NULL; =20 - /* Stop as soon as we hit an entry with the valid bit unset. */ - if ( !lpae_is_valid(entry) ) - break; + if ( !mfn_valid(mfn) ) + return NULL; =20 - rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]); - if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) - goto out_unmap; - else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) - break; - } + page =3D mfn_to_page(mfn); =20 /* - * If the valid bit of the entry is set, it means someone was playing = with - * the Stage-2 page table. Nothing to do and mark the fault as resolve= d. + * get_page won't work on foreign mapping because the page doesn't + * belong to the current domain. */ - if ( lpae_is_valid(entry) ) + if ( p2m_is_foreign(p2mt) ) { - resolved =3D true; - goto out_unmap; + struct domain *fdom =3D page_get_owner_and_reference(page); + ASSERT(fdom !=3D NULL); + ASSERT(fdom !=3D d); + return page; } =20 - /* - * The valid bit is unset. If the entry is still not valid then the fa= ult - * cannot be resolved, exit and report it. - */ - if ( !p2m_is_valid(entry) ) - goto out_unmap; + return get_page(page, d) ? page : NULL; +} =20 - /* - * Now we have an entry with valid bit unset, but still valid from - * the P2M point of view. - * - * If an entry is pointing to a table, each entry of the table will - * have there valid bit cleared. This allows a function to clear the - * full p2m with just a couple of write. The valid bit will then be - * propagated on the fault. - * If an entry is pointing to a block/page, no work to do for now. 
- */ - if ( lpae_is_table(entry, level) ) - p2m_invalidate_table(p2m, lpae_get_mfn(entry)); +int guest_physmap_mark_populate_on_demand(struct domain *d, + unsigned long gfn, + unsigned int order) +{ + return -ENOSYS; +} =20 - /* - * Now that the work on the entry is done, set the valid bit to prevent - * another fault on that entry. - */ - resolved =3D true; - entry.p2m.valid =3D 1; +unsigned long p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, + unsigned int order) +{ + return 0; +} =20 - p2m_write_pte(table + offsets[level], entry, p2m->clean_pte); +/* + * The domain will not be scheduled anymore, so in theory we should + * not need to flush the TLBs. Do it for safety purpose. + * Note that all the devices have already been de-assigned. So we don't + * need to flush the IOMMU TLB here. + */ +void p2m_clear_root_pages(struct p2m_domain *p2m) +{ + unsigned int i; =20 - /* - * No need to flush the TLBs as the modified entry had the valid bit - * unset. - */ + p2m_write_lock(p2m); =20 -out_unmap: - unmap_domain_page(table); + for ( i =3D 0; i < P2M_ROOT_PAGES; i++ ) + clear_and_clean_page(p2m->root + i); =20 -out: - p2m_write_unlock(p2m); +#ifdef CONFIG_HAS_MMU + p2m_force_tlb_flush_sync(p2m); +#endif =20 - return resolved; + p2m_write_unlock(p2m); } =20 int p2m_insert_mapping(struct domain *d, gfn_t start_gfn, unsigned long nr, @@ -1612,44 +284,6 @@ int set_foreign_p2m_entry(struct domain *d, const str= uct domain *fd, return rc; } =20 -static struct page_info *p2m_allocate_root(void) -{ - struct page_info *page; - unsigned int i; - - page =3D alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0); - if ( page =3D=3D NULL ) - return NULL; - - /* Clear both first level pages */ - for ( i =3D 0; i < P2M_ROOT_PAGES; i++ ) - clear_and_clean_page(page + i); - - return page; -} - -static int p2m_alloc_table(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - p2m->root =3D p2m_allocate_root(); - if ( !p2m->root ) - return -ENOMEM; - - p2m->vttbr =3D 
generate_vttbr(p2m->vmid, page_to_mfn(p2m->root)); - - /* - * Make sure that all TLBs corresponding to the new VMID are flushed - * before using it - */ - p2m_write_lock(p2m); - p2m_force_tlb_flush_sync(p2m); - p2m_write_unlock(p2m); - - return 0; -} - - static spinlock_t vmid_alloc_lock =3D SPIN_LOCK_UNLOCKED; =20 /* @@ -1660,7 +294,7 @@ static spinlock_t vmid_alloc_lock =3D SPIN_LOCK_UNLOCK= ED; */ static unsigned long *vmid_mask; =20 -static void p2m_vmid_allocator_init(void) +void p2m_vmid_allocator_init(void) { /* * allocate space for vmid_mask based on MAX_VMID @@ -1673,7 +307,7 @@ static void p2m_vmid_allocator_init(void) set_bit(INVALID_VMID, vmid_mask); } =20 -static int p2m_alloc_vmid(struct domain *d) +int p2m_alloc_vmid(struct domain *d) { struct p2m_domain *p2m =3D p2m_get_hostp2m(d); =20 @@ -1713,32 +347,6 @@ static void p2m_free_vmid(struct domain *d) spin_unlock(&vmid_alloc_lock); } =20 -int p2m_teardown(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - unsigned long count =3D 0; - struct page_info *pg; - int rc =3D 0; - - p2m_write_lock(p2m); - - while ( (pg =3D page_list_remove_head(&p2m->pages)) ) - { - p2m_free_page(p2m->domain, pg); - count++; - /* Arbitrarily preempt every 512 iterations */ - if ( !(count % 512) && hypercall_preempt_check() ) - { - rc =3D -ERESTART; - break; - } - } - - p2m_write_unlock(p2m); - - return rc; -} - void p2m_final_teardown(struct domain *d) { struct p2m_domain *p2m =3D p2m_get_hostp2m(d); @@ -1771,61 +379,6 @@ void p2m_final_teardown(struct domain *d) p2m->domain =3D NULL; } =20 -int p2m_init(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - int rc; - unsigned int cpu; - - rwlock_init(&p2m->lock); - spin_lock_init(&d->arch.paging.lock); - INIT_PAGE_LIST_HEAD(&p2m->pages); - INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist); - - p2m->vmid =3D INVALID_VMID; - p2m->max_mapped_gfn =3D _gfn(0); - p2m->lowest_mapped_gfn =3D _gfn(ULONG_MAX); - - p2m->default_access =3D 
p2m_access_rwx; - p2m->mem_access_enabled =3D false; - radix_tree_init(&p2m->mem_access_settings); - - /* - * Some IOMMUs don't support coherent PT walk. When the p2m is - * shared with the CPU, Xen has to make sure that the PT changes have - * reached the memory - */ - p2m->clean_pte =3D is_iommu_enabled(d) && - !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK); - - /* - * Make sure that the type chosen to is able to store the an vCPU ID - * between 0 and the maximum of virtual CPUS supported as long as - * the INVALID_VCPU_ID. - */ - BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPU= S); - BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0])* 8)) < INVALID_VCPU_= ID); - - for_each_possible_cpu(cpu) - p2m->last_vcpu_ran[cpu] =3D INVALID_VCPU_ID; - - /* - * "Trivial" initialisation is now complete. Set the backpointer so - * p2m_teardown() and friends know to do something. - */ - p2m->domain =3D d; - - rc =3D p2m_alloc_vmid(d); - if ( rc ) - return rc; - - rc =3D p2m_alloc_table(d); - if ( rc ) - return rc; - - return 0; -} - /* * The function will go through the p2m and remove page reference when it * is required. The mapping will be removed from the p2m. @@ -2217,159 +770,6 @@ void __init p2m_restrict_ipa_bits(unsigned int ipa_b= its) p2m_ipa_bits =3D ipa_bits; } =20 -/* VTCR value to be configured by all CPUs. Set only once by the boot CPU = */ -static register_t __read_mostly vtcr; - -static void setup_virt_paging_one(void *data) -{ - WRITE_SYSREG(vtcr, VTCR_EL2); - - /* - * ARM64_WORKAROUND_AT_SPECULATE: We want to keep the TLBs free from - * entries related to EL1/EL0 translation regime until a guest vCPU - * is running. For that, we need to set-up VTTBR to point to an empty - * page-table and turn on stage-2 translation. The TLB entries - * associated with EL1/EL0 translation regime will also be flushed in = case - * an AT instruction was speculated before hand. 
- */ - if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) ) - { - WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR= _EL2); - WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_VM, HCR_EL2); - isb(); - - flush_all_guests_tlb_local(); - } -} - -void __init setup_virt_paging(void) -{ - /* Setup Stage 2 address translation */ - register_t val =3D VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WB= WA; - - static const struct { - unsigned int pabits; /* Physical Address Size */ - unsigned int t0sz; /* Desired T0SZ, minimum in comment */ - unsigned int root_order; /* Page order of the root of the p2m */ - unsigned int sl0; /* Desired SL0, maximum in comment */ - } pa_range_info[] __initconst =3D { - /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */ - /* PA size, t0sz(min), root-order, sl0(max) */ -#ifdef CONFIG_ARM_64 - [0] =3D { 32, 32/*32*/, 0, 1 }, - [1] =3D { 36, 28/*28*/, 0, 1 }, - [2] =3D { 40, 24/*24*/, 1, 1 }, - [3] =3D { 42, 22/*22*/, 3, 1 }, - [4] =3D { 44, 20/*20*/, 0, 2 }, - [5] =3D { 48, 16/*16*/, 0, 2 }, - [6] =3D { 52, 12/*12*/, 4, 2 }, - [7] =3D { 0 } /* Invalid */ -#else - { 32, 0/*0*/, 0, 1 }, - { 40, 24/*24*/, 1, 1 } -#endif - }; - - unsigned int i; - unsigned int pa_range =3D 0x10; /* Larger than any possible value */ - -#ifdef CONFIG_ARM_32 - /* - * Typecast pa_range_info[].t0sz into arm32 bit variant. - * - * VTCR.T0SZ is bits [3:0] and S(sign extension), bit[4] for arm322. - * Thus, pa_range_info[].t0sz is translated to its arm32 variant using - * struct bitfields. - */ - struct - { - signed int val:5; - } t0sz_32; -#else - /* - * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured - * with IPA bits =3D=3D PA bits, compare against "pabits". - */ - if ( pa_range_info[system_cpuinfo.mm64.pa_range].pabits < p2m_ipa_bits= ) - p2m_ipa_bits =3D pa_range_info[system_cpuinfo.mm64.pa_range].pabit= s; - - /* - * cpu info sanitization made sure we support 16bits VMID only if all - * cores are supporting it. 
- */ - if ( system_cpuinfo.mm64.vmid_bits =3D=3D MM64_VMID_16_BITS_SUPPORT ) - max_vmid =3D MAX_VMID_16_BIT; -#endif - - /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits"= . */ - for ( i =3D 0; i < ARRAY_SIZE(pa_range_info); i++ ) - { - if ( p2m_ipa_bits =3D=3D pa_range_info[i].pabits ) - { - pa_range =3D i; - break; - } - } - - /* Check if we found the associated entry in the array */ - if ( pa_range >=3D ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_rang= e].pabits ) - panic("%u-bit P2M is not supported\n", p2m_ipa_bits); - -#ifdef CONFIG_ARM_64 - val |=3D VTCR_PS(pa_range); - val |=3D VTCR_TG0_4K; - - /* Set the VS bit only if 16 bit VMID is supported. */ - if ( MAX_VMID =3D=3D MAX_VMID_16_BIT ) - val |=3D VTCR_VS; -#endif - - val |=3D VTCR_SL0(pa_range_info[pa_range].sl0); - val |=3D VTCR_T0SZ(pa_range_info[pa_range].t0sz); - - p2m_root_order =3D pa_range_info[pa_range].root_order; - p2m_root_level =3D 2 - pa_range_info[pa_range].sl0; - -#ifdef CONFIG_ARM_64 - p2m_ipa_bits =3D 64 - pa_range_info[pa_range].t0sz; -#else - t0sz_32.val =3D pa_range_info[pa_range].t0sz; - p2m_ipa_bits =3D 32 - t0sz_32.val; -#endif - - printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n", - p2m_ipa_bits, - pa_range_info[pa_range].pabits, - ( MAX_VMID =3D=3D MAX_VMID_16_BIT ) ? 16 : 8); - - printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n", - 4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val); - - p2m_vmid_allocator_init(); - - /* It is not allowed to concatenate a level zero root */ - BUG_ON( P2M_ROOT_LEVEL =3D=3D 0 && P2M_ROOT_ORDER > 0 ); - vtcr =3D val; - - /* - * ARM64_WORKAROUND_AT_SPECULATE requires to allocate root table - * with all entries zeroed. 
-     */
-    if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) )
-    {
-        struct page_info *root;
-
-        root = p2m_allocate_root();
-        if ( !root )
-            panic("Unable to allocate root table for ARM64_WORKAROUND_AT_SPECULATE\n");
-
-        empty_root_mfn = page_to_mfn(root);
-    }
-
-    setup_virt_paging_one(NULL);
-    smp_call_function(setup_virt_paging_one, NULL, 1);
-}
-
 static int cpu_virt_paging_callback(struct notifier_block *nfb,
                                     unsigned long action,
                                     void *hcpu)
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu,
 Roger Pau Monné, Penny Zheng, Wei Chen
Subject: [PATCH v3 21/52] xen: introduce CONFIG_HAS_PAGING_MEMPOOL
Date: Mon, 26 Jun 2023 11:34:12 +0800
Message-Id: <20230626033443.2943270-22-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

An Arm MPU system doesn't need the paging memory pool: the MPU memory
mapping table (xen_mpumap) takes at most one 4KB page, which is enough
to manage the maximum of 255 MPU memory regions, for both the EL2
stage 1 translation and the EL1 stage 2 translation.

Wrap all paging-memory-pool-related code in common code with the new
Kconfig option CONFIG_HAS_PAGING_MEMPOOL.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new patch
---
 xen/arch/arm/Kconfig        | 1 +
 xen/arch/arm/domain.c       | 2 ++
 xen/arch/arm/domain_build.c | 2 ++
 xen/arch/arm/p2m.c          | 2 ++
 xen/arch/x86/Kconfig        | 1 +
 xen/common/Kconfig          | 3 +++
 xen/common/domctl.c         | 2 ++
 7 files changed, 13 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index a88500fb50..b2710c1c31 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -62,6 +62,7 @@ source "arch/Kconfig"
 config HAS_MMU
 	bool "Memory Management Unit support in a VMSA system"
 	default y
+	select HAS_PAGING_MEMPOOL
 	select HAS_PMAP
 	select HAS_VMAP
 	help
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index add9929b79..7993cefceb 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1072,6 +1072,7 @@ int domain_relinquish_resources(struct domain *d)
          */
         p2m_clear_root_pages(&d->arch.p2m);

+#ifdef CONFIG_HAS_PAGING_MEMPOOL
     PROGRESS(p2m):
         ret = p2m_teardown(d);
         if ( ret )
@@ -1081,6 +1082,7 @@ int domain_relinquish_resources(struct domain *d)
         ret = p2m_teardown_allocation(d);
         if( ret )
             return ret;
+#endif

     PROGRESS(done):
         break;
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d0d6be922d..260ef9ba6f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3830,11 +3830,13 @@ static int __init construct_domU(struct domain *d,
                        p2m_mem_mb << (20 - PAGE_SHIFT) :
                        domain_p2m_pages(mem, d->max_vcpus);

+#ifdef CONFIG_HAS_PAGING_MEMPOOL
     spin_lock(&d->arch.paging.lock);
     rc = p2m_set_allocation(d, p2m_pages, NULL);
     spin_unlock(&d->arch.paging.lock);
     if ( rc != 0 )
         return rc;
+#endif

     printk("*** LOADING DOMU cpus=%u memory=%#"PRIx64"KB ***\n",
            d->max_vcpus, mem);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b2771e0bed..e29b11334e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -361,11 +361,13 @@ void p2m_final_teardown(struct domain *d)
      * where relinquish_p2m_mapping() has been called.
      */

+#ifdef CONFIG_HAS_PAGING_MEMPOOL
     ASSERT(page_list_empty(&p2m->pages));

     while ( p2m_teardown_allocation(d) == -ERESTART )
         continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
+#endif

     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 033cc2332e..082069f1cc 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -21,6 +21,7 @@ config X86
 	select HAS_IOPORTS
 	select HAS_KEXEC
 	select HAS_NS16550
+	select HAS_PAGING_MEMPOOL
 	select HAS_PASSTHROUGH
 	select HAS_PCI
 	select HAS_PCI_MSI
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 2c29e89b75..019a123320 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -54,6 +54,9 @@ config HAS_IOPORTS
 config HAS_KEXEC
 	bool

+config HAS_PAGING_MEMPOOL
+	bool
+
 config HAS_PDX
 	bool

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 505e29c0dc..c5442992b9 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -844,6 +844,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         ret = iommu_do_domctl(op, d, u_domctl);
         break;

+#ifdef CONFIG_HAS_PAGING_MEMPOOL
     case XEN_DOMCTL_get_paging_mempool_size:
         ret = arch_get_paging_mempool_size(d, &op->u.paging_mempool.size);
         if ( !ret )
@@ -857,6 +858,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         ret = hypercall_create_continuation(
             __HYPERVISOR_domctl, "h", u_domctl);
         break;
+#endif

     default:
         ret = arch_do_domctl(op, d, u_domctl);
-- 
2.25.1
9c2175bb-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Jan Beulich , Paul Durrant , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Penny Zheng , Wei Chen Subject: [PATCH v3 22/52] xen/mmu: enable SMMU subsystem only in MMU Date: Mon, 26 Jun 2023 11:34:13 +0800 Message-Id: <20230626033443.2943270-23-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750815907100003 Content-Type: text/plain; charset="utf-8" SMMU subsystem is only supported in MMU system, so we make it dependent on CONFIG_HAS_MMU. Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new patch --- xen/drivers/passthrough/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kcon= fig index 864fcf3b0c..5a8d666829 100644 --- a/xen/drivers/passthrough/Kconfig +++ b/xen/drivers/passthrough/Kconfig @@ -5,6 +5,7 @@ config HAS_PASSTHROUGH if ARM config ARM_SMMU bool "ARM SMMUv1 and v2 driver" + depends on HAS_MMU default y ---help--- Support for implementations of the ARM System MMU architecture @@ -15,7 +16,7 @@ config ARM_SMMU =20 config ARM_SMMU_V3 bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT - depends on ARM_64 && (!ACPI || BROKEN) + depends on ARM_64 && (!ACPI || BROKEN) && HAS_MMU ---help--- Support for implementations of the ARM System MMU architecture version 3. 
Driver is in experimental stage and should not be used in --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750842837887.4545861459133; Sun, 25 Jun 2023 20:40:42 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555053.866722 (Exim 4.92) (envelope-from ) id 1qDd5I-0004XV-Vt; Mon, 26 Jun 2023 03:40:12 +0000 Received: by outflank-mailman (output) from mailman id 555053.866722; Mon, 26 Jun 2023 03:40:12 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5H-0004R4-Vj; Mon, 26 Jun 2023 03:40:11 +0000 Received: by outflank-mailman (input) for mailman id 555053; Mon, 26 Jun 2023 03:40:07 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1Z-0007ej-T6 for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:36:21 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 9dd5d6f8-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:36:20 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7E8EB1FB; Sun, 25 Jun 2023 20:37:04 -0700 (PDT) Received: from a011292.shanghai.arm.com 
(a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 218053F64C; Sun, 25 Jun 2023 20:36:17 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 9dd5d6f8-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Wei Chen , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 23/52] xen/arm: create mpu/layout.h for MPU related address definitions Date: Mon, 26 Jun 2023 11:34:14 +0800 Message-Id: <20230626033443.2943270-24-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750843502100003 Content-Type: text/plain; charset="utf-8" From: Wei Chen As we have done for MMU systems, we instroduce mpu/layout.h for MPU systems to store their address layout definitions. To avoid spreading #ifdef everywhere, we keep the same definition names for MPU systems, like XEN_VIRT_START and HYPERVISOR_VIRT_START, but the definition contents are MPU specific. 
Signed-off-by: Wei Chen Signed-off-by: Penny Zheng --- v3: - new commit --- xen/arch/arm/include/asm/config.h | 2 ++ xen/arch/arm/include/asm/mpu/layout.h | 32 +++++++++++++++++++++++++++ 2 files changed, 34 insertions(+) create mode 100644 xen/arch/arm/include/asm/mpu/layout.h diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/c= onfig.h index 204b3dec13..bd71cc1373 100644 --- a/xen/arch/arm/include/asm/config.h +++ b/xen/arch/arm/include/asm/config.h @@ -73,6 +73,8 @@ =20 #ifndef CONFIG_HAS_MPU #include +#else +#include #endif =20 #define NR_hypercalls 64 diff --git a/xen/arch/arm/include/asm/mpu/layout.h b/xen/arch/arm/include/a= sm/mpu/layout.h new file mode 100644 index 0000000000..84c55cb2bd --- /dev/null +++ b/xen/arch/arm/include/asm/mpu/layout.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __ARM_MPU_LAYOUT_H__ +#define __ARM_MPU_LAYOUT_H__ + +#define FRAMETABLE_SIZE GB(32) +#define FRAMETABLE_NR (FRAMETABLE_SIZE / sizeof(*frame_table)) + +#define XEN_START_ADDRESS CONFIG_XEN_START_ADDRESS + +/* + * All MPU platforms need to provide a XEN_START_ADDRESS for linker. This + * address indicates where Xen image will be loaded and run from. This + * address must be aligned to a PAGE_SIZE. 
+ */ +#if (XEN_START_ADDRESS % PAGE_SIZE) !=3D 0 +#error "XEN_START_ADDRESS must be aligned to PAGE_SIZE" +#endif + +#define XEN_VIRT_START _AT(paddr_t, XEN_START_ADDRESS) + +#define HYPERVISOR_VIRT_START XEN_VIRT_START + +#endif /* __ARM_MPU_LAYOUT_H__ */ +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750878160483.9562757216331; Sun, 25 Jun 2023 20:41:18 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555107.866910 (Exim 4.92) (envelope-from ) id 1qDd5x-0006Ny-Ez; Mon, 26 Jun 2023 03:40:53 +0000 Received: by outflank-mailman (output) from mailman id 555107.866910; Mon, 26 Jun 2023 03:40:53 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5w-0006Ep-5B; Mon, 26 Jun 2023 03:40:52 +0000 Received: by outflank-mailman (input) for mailman id 555107; Mon, 26 Jun 2023 03:40:49 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1e-0000HH-A0 for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:36:26 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com 
(Halon) with ESMTP id 9fb1b15c-13d2-11ee-8611-37d641c3527e; Mon, 26 Jun 2023 05:36:24 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 96FA21FB; Sun, 25 Jun 2023 20:37:07 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id EF6F33F64C; Sun, 25 Jun 2023 20:36:20 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 9fb1b15c-13d2-11ee-8611-37d641c3527e From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 24/52] xen/mpu: build up start-of-day Xen MPU memory region map Date: Mon, 26 Jun 2023 11:34:15 +0800 Message-Id: <20230626033443.2943270-25-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750880193100010 Content-Type: text/plain; charset="utf-8" The start-of-day Xen MPU memory region layout shall be like as follows: xen_mpumap[0] : Xen text xen_mpumap[1] : Xen read-only data xen_mpumap[2] : Xen read-only after init data xen_mpumap[3] : Xen read-write data xen_mpumap[4] : Xen BSS xen_mpumap[5] : Xen init text xen_mpumap[6] : Xen init data The layout shall be compliant with what we describe in xen.lds.S, or the codes need adjustment. 
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - cache maintanence for safety when modifying MPU memory mapping table - Hardcode index for all data/text sections - To make sure that alternative instructions are included, use "_einitext" as the start of the "Init data" section. --- xen/arch/arm/Makefile | 2 + xen/arch/arm/arm64/Makefile | 2 + xen/arch/arm/arm64/mpu/head.S | 178 +++++++++++++++++++++++ xen/arch/arm/include/asm/arm64/mpu.h | 59 ++++++++ xen/arch/arm/include/asm/arm64/sysregs.h | 14 ++ xen/arch/arm/mpu/mm.c | 37 +++++ 6 files changed, 292 insertions(+) create mode 100644 xen/arch/arm/arm64/mpu/head.S create mode 100644 xen/arch/arm/include/asm/arm64/mpu.h create mode 100644 xen/arch/arm/mpu/mm.c diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index a83a535cd7..3bd193ee32 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -40,6 +40,8 @@ ifeq ($(CONFIG_HAS_MMU), y) obj-y +=3D mmu/mm.o obj-y +=3D mmu/setup.o obj-y +=3D mmu/p2m.o +else +obj-y +=3D mpu/mm.o endif obj-y +=3D mm.o obj-y +=3D monitor.o diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile index 55895ecb53..2641fb13ba 100644 --- a/xen/arch/arm/arm64/Makefile +++ b/xen/arch/arm/arm64/Makefile @@ -11,6 +11,8 @@ obj-y +=3D head.o ifeq ($(CONFIG_HAS_MMU),y) obj-y +=3D mmu/head.o obj-y +=3D mmu/mm.o +else +obj-y +=3D mpu/head.o endif obj-y +=3D insn.o obj-$(CONFIG_LIVEPATCH) +=3D livepatch.o diff --git a/xen/arch/arm/arm64/mpu/head.S b/xen/arch/arm/arm64/mpu/head.S new file mode 100644 index 0000000000..93a7a75029 --- /dev/null +++ b/xen/arch/arm/arm64/mpu/head.S @@ -0,0 +1,178 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Start-of-day code for an Armv8-R AArch64 MPU system. + * + * Copyright (C) 2023 Arm Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see . + */ + +#include +#include + +/* + * One entry in Xen MPU memory region mapping table(xen_mpumap) is a struc= ture + * of pr_t, which is 16-bytes size, so the entry offset is the order of 4. + */ +#define MPU_ENTRY_SHIFT 0x4 + +#define REGION_TEXT_PRBAR 0x38 /* SH=3D11 AP=3D10 XN=3D00 */ +#define REGION_RO_PRBAR 0x3A /* SH=3D11 AP=3D10 XN=3D10 */ +#define REGION_DATA_PRBAR 0x32 /* SH=3D11 AP=3D00 XN=3D10 */ + +#define REGION_NORMAL_PRLAR 0x0f /* NS=3D0 ATTR=3D111 EN=3D1 */ + +/* + * Macro to round up the section address to be PAGE_SIZE aligned + * Each section(e.g. .text, .data, etc) in xen.lds.S is page-aligned, + * which is usually guarded with ". =3D ALIGN(PAGE_SIZE)" in the head, + * or in the end + */ +.macro roundup_section, xb + add \xb, \xb, #(PAGE_SIZE-1) + and \xb, \xb, #PAGE_MASK +.endm + +/* + * Macro to prepare and configure a particular EL2 MPU memory region with + * base address as \base and limit address as \limit. + * We will also create an according MPU memory region entry, which + * is a structure of pr_t, in Xen EL2 mpu memory region mapping table + * xen_mpumap. + * + * Inputs: + * base: reg storing base address (should be page-aligned) + * limit: reg storing limit address + * sel: region selector + * prbar: store computed PRBAR_EL2 value + * prlar: store computed PRLAR_EL2 value + * attr_prbar: PRBAR_EL2-related memory attributes. If not specified it w= ill be REGION_DATA_PRBAR + * attr_prlar: PRLAR_EL2-related memory attributes. 
If not specified it w= ill be REGION_NORMAL_PRLAR + * + * Clobber \tmp1, \tmp2 + * + */ +.macro prepare_xen_region, sel, base, limit, prbar, prlar, tmp1, tmp2, att= r_prbar=3DREGION_DATA_PRBAR, attr_prlar=3DREGION_NORMAL_PRLAR + /* Prepare value for PRBAR_EL2 reg and preserve it in \prbar.*/ + and \base, \base, #MPU_REGION_MASK + mov \prbar, #\attr_prbar + orr \prbar, \prbar, \base + + /* Prepare value for PRLAR_EL2 reg and preserve it in \prlar.*/ + /* Round up limit address to be PAGE_SIZE aligned */ + roundup_section \limit + /* Limit address should be inclusive */ + sub \limit, \limit, #1 + and \limit, \limit, #MPU_REGION_MASK + mov \prlar, #\attr_prlar + orr \prlar, \prlar, \limit + + /* + * Before accessing EL2 MPU region register PRBAR_EL2/PRLAR_EL2, + * PRSELR_EL2.REGION determines which MPU region is selected. + */ + msr PRSELR_EL2, \sel + isb + msr PRBAR_EL2, \prbar + msr PRLAR_EL2, \prlar + isb + + mov \tmp1, \sel + lsl \tmp1, \tmp1, #MPU_ENTRY_SHIFT + load_paddr \tmp2, xen_mpumap + add \tmp2, \tmp2, \tmp1 + stp \prbar, \prlar, [\tmp2] + /* Invalidate data cache for safety */ + dc ivac, \tmp2 + isb +.endm + +.section .text.idmap, "ax", %progbits + +/* + * Static start-of-day Xen EL2 MPU memory region layout: + * + * xen_mpumap[0] : Xen text + * xen_mpumap[1] : Xen read-only data + * xen_mpumap[2] : Xen read-only after init data + * xen_mpumap[3] : Xen read-write data + * xen_mpumap[4] : Xen BSS + * xen_mpumap[5] : Xen init text + * xen_mpumap[6] : Xen init data + * + * Clobbers x0 - x6 + * + * It shall be compliant with what describes in xen.lds.S, or the below + * codes need adjustment. + */ +ENTRY(prepare_early_mappings) + /* x0: region sel */ + mov x0, xzr + /* Xen text section. */ + load_paddr x1, _stext + load_paddr x2, _etext + prepare_xen_region x0, x1, x2, x3, x4, x5, x6, attr_prbar=3DREGION_TEX= T_PRBAR + + add x0, x0, #1 + /* Xen read-only data section. 
*/ + load_paddr x1, _srodata + load_paddr x2, _erodata + prepare_xen_region x0, x1, x2, x3, x4, x5, x6, attr_prbar=3DREGION_RO_= PRBAR + + add x0, x0, #1 + /* Xen read-only after init data section. */ + load_paddr x1, __ro_after_init_start + load_paddr x2, __ro_after_init_end + prepare_xen_region x0, x1, x2, x3, x4, x5, x6 + + add x0, x0, #1 + /* Xen read-write data section. */ + load_paddr x1, __ro_after_init_end + load_paddr x2, __init_begin + prepare_xen_region x0, x1, x2, x3, x4, x5, x6 + + add x0, x0, #1 + /* Xen BSS section. */ + load_paddr x1, __bss_start + load_paddr x2, __bss_end + prepare_xen_region x0, x1, x2, x3, x4, x5, x6 + + add x0, x0, #1 + /* Xen init text section. */ + load_paddr x1, _sinittext + load_paddr x2, _einittext + prepare_xen_region x0, x1, x2, x3, x4, x5, x6, attr_prbar=3DREGION_TEX= T_PRBAR + + add x0, x0, #1 + /* Xen init data section. */ + /* + * Even though we are not using alternative instructions in MPU yet, + * we want to use "_einitext" for the start of the "Init data" section + * to make sure they are included. + */ + load_paddr x1, _einittext + roundup_section x1 + load_paddr x2, __init_end + prepare_xen_region x0, x1, x2, x3, x4, x5, x6 + + /* Ensure any MPU memory mapping table updates made above have occurre= d. */ + dsb nshst + ret +ENDPROC(prepare_early_mappings) + +/* + * Local variables: + * mode: ASM + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/as= m/arm64/mpu.h new file mode 100644 index 0000000000..0c479086f4 --- /dev/null +++ b/xen/arch/arm/include/asm/arm64/mpu.h @@ -0,0 +1,59 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * mpu.h: Arm Memory Protection Region definitions. + */ + +#ifndef __ARM64_MPU_H__ +#define __ARM64_MPU_H__ + +/* + * MPUIR_EL2.Region identifies the number of regions supported by the EL2 = MPU. + * It is a 8-bit field, so 255 MPU memory regions at most. 
+ */ +#define ARM_MAX_MPU_MEMORY_REGIONS 255 + +#ifndef __ASSEMBLY__ + +/* Protection Region Base Address Register */ +typedef union { + struct __packed { + unsigned long xn:2; /* Execute-Never */ + unsigned long ap:2; /* Acess Permission */ + unsigned long sh:2; /* Sharebility */ + unsigned long base:42; /* Base Address */ + unsigned long pad:16; + } reg; + uint64_t bits; +} prbar_t; + +/* Protection Region Limit Address Register */ +typedef union { + struct __packed { + unsigned long en:1; /* Region enable */ + unsigned long ai:3; /* Memory Attribute Index */ + unsigned long ns:1; /* Not-Secure */ + unsigned long res:1; /* Reserved 0 by hardware */ + unsigned long limit:42; /* Limit Address */ + unsigned long pad:16; + } reg; + uint64_t bits; +} prlar_t; + +/* MPU Protection Region */ +typedef struct { + prbar_t prbar; + prlar_t prlar; +} pr_t; + +#endif /* __ASSEMBLY__ */ + +#endif /* __ARM64_MPU_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/includ= e/asm/arm64/sysregs.h index 3fdeb9d8cd..c41d805fde 100644 --- a/xen/arch/arm/include/asm/arm64/sysregs.h +++ b/xen/arch/arm/include/asm/arm64/sysregs.h @@ -462,6 +462,20 @@ #define ZCR_ELx_LEN_SIZE 9 #define ZCR_ELx_LEN_MASK 0x1ff =20 +/* System registers for Armv8-R AArch64 */ +#ifdef CONFIG_HAS_MPU + +/* EL2 MPU Protection Region Base Address Register encode */ +#define PRBAR_EL2 S3_4_C6_C8_0 + +/* EL2 MPU Protection Region Limit Address Register encode */ +#define PRLAR_EL2 S3_4_C6_C8_1 + +/* MPU Protection Region Selection Register encode */ +#define PRSELR_EL2 S3_4_C6_C2_1 + +#endif + /* Access to system registers */ =20 #define WRITE_SYSREG64(v, name) do { \ diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c new file mode 100644 index 0000000000..fb6bb721b1 --- /dev/null +++ b/xen/arch/arm/mpu/mm.c @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0 
*/ +/* + * xen/arch/arm/mpu/mm.c + * + * MPU-based memory managment code for Armv8-R AArch64. + * + * Copyright (C) 2023 Arm Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see . + */ + +#include +#include +#include + +/* EL2 Xen MPU memory region mapping table. */ +pr_t __aligned(PAGE_SIZE) __section(".data.page_aligned") + xen_mpumap[ARM_MAX_MPU_MEMORY_REGIONS]; + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 16877508393001010.0135254348156; Sun, 25 Jun 2023 20:40:39 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555050.866715 (Exim 4.92) (envelope-from ) id 1qDd5H-00049z-HX; Mon, 26 Jun 2023 03:40:11 +0000 Received: by outflank-mailman (output) from mailman id 555050.866715; Mon, 26 Jun 
2023 03:40:11 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5G-00044K-Ib; Mon, 26 Jun 2023 03:40:10 +0000 Received: by outflank-mailman (input) for mailman id 555050; Mon, 26 Jun 2023 03:40:05 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1g-0007ej-37 for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:36:28 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id a18573a0-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:36:27 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AF7C01FB; Sun, 25 Jun 2023 20:37:10 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 13BC73F64C; Sun, 25 Jun 2023 20:36:23 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: a18573a0-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 25/52] xen/mpu: introduce helpers for MPU enablement Date: Mon, 26 Jun 2023 11:34:16 +0800 Message-Id: <20230626033443.2943270-26-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: 
quoted-printable X-ZM-MESSAGEID: 1687750841497100001 Content-Type: text/plain; charset="utf-8" We introduce new helpers for Xen to enable MPU in boot-time. enable_boot_mm() is implemented to be semantically consistent with the MMU version. If the Background region is enabled, then the MPU uses the default memory map as the Background region for generating the memory attributes when MPU is disabled. Since the default memory map of the Armv8-R AArch64 architecture is IMPLEMENTATION DEFINED, we always turn off the Background region. Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - introduce code clearing SCTLR_EL2.BR - document the reason of isb --- xen/arch/arm/arm64/mpu/head.S | 46 ++++++++++++++++++++++++++++ xen/arch/arm/include/asm/processor.h | 1 + 2 files changed, 47 insertions(+) diff --git a/xen/arch/arm/arm64/mpu/head.S b/xen/arch/arm/arm64/mpu/head.S index 93a7a75029..3cfce126d5 100644 --- a/xen/arch/arm/arm64/mpu/head.S +++ b/xen/arch/arm/arm64/mpu/head.S @@ -170,6 +170,52 @@ ENTRY(prepare_early_mappings) ret ENDPROC(prepare_early_mappings) =20 +/* + * Enable EL2 MPU and data cache + * If the Background region is enabled, then the MPU uses the default memo= ry + * map as the Background region for generating the memory + * attributes when MPU is disabled. + * Since the default memory map of the Armv8-R AArch64 architecture is + * IMPLEMENTATION DEFINED, we intend to turn off the Background region her= e. + * + * Clobbers x0 + * + */ +ENTRY(enable_mpu) + mrs x0, SCTLR_EL2 + orr x0, x0, #SCTLR_Axx_ELx_M /* Enable MPU */ + orr x0, x0, #SCTLR_Axx_ELx_C /* Enable D-cache */ + orr x0, x0, #SCTLR_Axx_ELx_WXN /* Enable WXN */ + and x0, x0, #SCTLR_Axx_ELx_BR /* Disable Background region */ + msr SCTLR_EL2, x0 /* now mpu memory mapping is enabled= */ + isb /* Now, flush the icache */ + ret +ENDPROC(enable_mpu) + +/* + * Turn on the Data Cache and the MPU. The function will return + * to the virtual address provided in LR (e.g. the runtime mapping). 
+ * + * Inputs: + * lr : Virtual address to return to. + * + * Clobbers x0 - x7 + */ +ENTRY(enable_boot_mm) + /* save return address */ + mov x7, lr + + bl prepare_early_mappings + bl enable_mpu + + mov lr, x7 + /* + * The "ret" here will use the return address in LR to + * return to primary_switched + */ + ret +ENDPROC(enable_boot_mm) + /* * Local variables: * mode: ASM diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/as= m/processor.h index 7e42ff8811..685f9b18fd 100644 --- a/xen/arch/arm/include/asm/processor.h +++ b/xen/arch/arm/include/asm/processor.h @@ -167,6 +167,7 @@ /* Common bits for SCTLR_ELx on all architectures */ #define SCTLR_Axx_ELx_EE BIT(25, UL) #define SCTLR_Axx_ELx_WXN BIT(19, UL) +#define SCTLR_Axx_ELx_BR (~BIT(17, UL)) #define SCTLR_Axx_ELx_I BIT(12, UL) #define SCTLR_Axx_ELx_C BIT(2, UL) #define SCTLR_Axx_ELx_A BIT(1, UL) --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750828006176.80137984482963; Sun, 25 Jun 2023 20:40:28 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555024.866622 (Exim 4.92) (envelope-from ) id 1qDd4y-0007WH-5C; Mon, 26 Jun 2023 03:39:52 +0000 Received: by outflank-mailman (output) from mailman id 555024.866622; Mon, 26 Jun 2023 03:39:51 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 
4.92) (envelope-from ) id 1qDd4w-0007Qy-Rc; Mon, 26 Jun 2023 03:39:50 +0000 Received: by outflank-mailman (input) for mailman id 555024; Mon, 26 Jun 2023 03:39:49 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1k-0000HH-NE for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:36:32 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id a36624b4-13d2-11ee-8611-37d641c3527e; Mon, 26 Jun 2023 05:36:30 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C80F51FB; Sun, 25 Jun 2023 20:37:13 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 2C3F73F64C; Sun, 25 Jun 2023 20:36:26 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: a36624b4-13d2-11ee-8611-37d641c3527e From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 26/52] xen/mpu: map early uart when earlyprintk on Date: Mon, 26 Jun 2023 11:34:17 +0800 Message-Id: <20230626033443.2943270-27-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750829960100001 Content-Type: text/plain; charset="utf-8" We map the early UART with a hardcoded MPU 
memory region at slot #REGION_UART_SEL, right after the Xen image binary.

CONFIG_EARLY_UART_SIZE is introduced to let the user provide the physical
size of the early UART; it is necessary in an MPU system. We also check
that the user-defined EARLY_UART_SIZE is aligned to PAGE_SIZE, or we may
map more than necessary in an MPU system.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- map the early UART with a hardcoded MPU memory region
- error out when the UART size is not aligned to PAGE_SIZE
---
 xen/arch/arm/Kconfig.debug              |  7 +++++++
 xen/arch/arm/arm64/mpu/head.S           | 25 +++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/mpu.h    |  8 ++++++++
 xen/arch/arm/include/asm/early_printk.h |  8 ++++++++
 4 files changed, 48 insertions(+)

diff --git a/xen/arch/arm/Kconfig.debug b/xen/arch/arm/Kconfig.debug
index eec860e88e..a3b0cb9daa 100644
--- a/xen/arch/arm/Kconfig.debug
+++ b/xen/arch/arm/Kconfig.debug
@@ -239,6 +239,13 @@ config EARLY_UART_BASE_ADDRESS
 	default 0x1c020000 if EARLY_PRINTK_XGENE_STORM
 	default 0xff000000 if EARLY_PRINTK_ZYNQMP
 
+config EARLY_UART_SIZE
+	depends on EARLY_PRINTK
+	depends on HAS_MPU
+	hex "Early printk, physical size of debug UART"
+	range 0x0 0xffffffff if ARM_32
+	default 0x10000 if EARLY_PRINTK_FASTMODEL
+
 config EARLY_UART_PL011_BAUD_RATE
 	depends on EARLY_UART_PL011
 	int "Early printk UART baud rate for pl011"
diff --git a/xen/arch/arm/arm64/mpu/head.S b/xen/arch/arm/arm64/mpu/head.S
index 3cfce126d5..147a01e977 100644
--- a/xen/arch/arm/arm64/mpu/head.S
+++ b/xen/arch/arm/arm64/mpu/head.S
@@ -18,6 +18,7 @@
  */
 
 #include
+#include
 #include
 
 /*
@@ -29,8 +30,10 @@
 #define REGION_TEXT_PRBAR   0x38 /* SH=11 AP=10 XN=00 */
 #define REGION_RO_PRBAR     0x3A /* SH=11 AP=10 XN=10 */
 #define REGION_DATA_PRBAR   0x32 /* SH=11 AP=00 XN=10 */
+#define REGION_DEVICE_PRBAR 0x22 /* SH=10 AP=00 XN=10 */
 
 #define REGION_NORMAL_PRLAR 0x0f /* NS=0 ATTR=111 EN=1 */
+#define REGION_DEVICE_PRLAR 0x09 /* NS=0 ATTR=100 EN=1 */
 
 /*
 * Macro to round up the section address to be PAGE_SIZE aligned
@@ -216,6 +219,28 @@ ENTRY(enable_boot_mm)
     ret
 ENDPROC(enable_boot_mm)
 
+/*
+ * Map the early UART with a dedicated MPU memory region at
+ * slot #REGION_UART_SEL, right after the Xen image binary.
+ *
+ * Clobbers x0 - x6
+ */
+ENTRY(setup_early_uart)
+#ifdef CONFIG_EARLY_PRINTK
+    mov   x0, #REGION_UART_SEL
+
+    /* Xen early UART section. */
+    ldr   x1, =CONFIG_EARLY_UART_BASE_ADDRESS
+    ldr   x2, =(CONFIG_EARLY_UART_BASE_ADDRESS + CONFIG_EARLY_UART_SIZE)
+    prepare_xen_region x0, x1, x2, x3, x4, x5, x6, attr_prbar=REGION_DEVICE_PRBAR, attr_prlar=REGION_DEVICE_PRLAR
+
+    /* Ensure any MPU memory mapping table updates made above have occurred. */
+    dsb   nshst
+#endif
+    ret
+ENDPROC(setup_early_uart)
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index 0c479086f4..6ec2c10b14 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -12,6 +12,14 @@
  */
 #define ARM_MAX_MPU_MEMORY_REGIONS 255
 
+/*
+ * REGION_UART_SEL defines the MPU region selector value for the early
+ * UART, when early printk is enabled.
+ * #REGION_UART_SEL shall be consistent with what is described in
+ * xen.lds.S, or it needs adjustment.
+ */
+#define REGION_UART_SEL 0x07
+
 #ifndef __ASSEMBLY__
 
 /* Protection Region Base Address Register */
diff --git a/xen/arch/arm/include/asm/early_printk.h b/xen/arch/arm/include/asm/early_printk.h
index ec5bcc343c..445a3fb7de 100644
--- a/xen/arch/arm/include/asm/early_printk.h
+++ b/xen/arch/arm/include/asm/early_printk.h
@@ -23,6 +23,14 @@
  */
 #define EARLY_UART_VIRTUAL_ADDRESS CONFIG_EARLY_UART_BASE_ADDRESS
 
+/*
+ * The user-defined EARLY_UART_SIZE must be aligned to PAGE_SIZE, or
+ * we may map more than necessary in an MPU system.
+ */
+#if (EARLY_UART_SIZE % PAGE_SIZE) != 0
+#error "EARLY_UART_SIZE must be aligned to PAGE_SIZE"
+#endif
+
 #else
 
 /* need to add the uart address offset in page to the fixmap address */
--
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 27/52] xen/mpu: introduce setup_mm_mappings
Date: Mon, 26 Jun 2023 11:34:18 +0800
Message-Id: <20230626033443.2943270-28-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

The function setup_pagetables is responsible for boot-time pagetable setup
in the MMU system, on the C side. In the MPU system, as we have already
built up the start-of-day Xen MPU memory region mapping in boot-time
assembly, here we only need to do a little memory management data
initialization: reading the number of MPU regions supported by the EL2 MPU,
and setting the corresponding bits, for the regions enabled in boot-time
assembly, in the bitmap xen_mpumap_mask. This bitmap records the usage of
EL2 MPU memory regions.

In order to keep a single code flow in arm/setup.c, setup_mm_mappings, with
a more generic name, is introduced to replace setup_pagetables.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- introduce bitmap xen_mpumap_mask for dynamic allocation of MPU regions
---
 xen/arch/arm/include/asm/arm64/mpu.h     |  1 +
 xen/arch/arm/include/asm/arm64/sysregs.h |  3 +++
 xen/arch/arm/include/asm/mm.h            |  4 ++--
 xen/arch/arm/mmu/mm.c                    |  7 +++++-
 xen/arch/arm/mpu/mm.c                    | 30 ++++++++++++++++++++++++
 xen/arch/arm/setup.c                     |  2 +-
 6 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index 6ec2c10b14..407fec66c9 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -19,6 +19,7 @@
  * or it needs adjustment.
  */
 #define REGION_UART_SEL    0x07
+#define MPUIR_REGION_MASK  ((_AC(1, UL) << 8) - 1)
 
 #ifndef __ASSEMBLY__
 
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index c41d805fde..a249a660a8 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -474,6 +474,9 @@
 /* MPU Protection Region Selection Register encode */
 #define PRSELR_EL2 S3_4_C6_C2_1
 
+/* MPU Type register encode */
+#define MPUIR_EL2 S3_4_C0_C0_4
+
 #endif
 
 /* Access to system registers */
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 5d890a6a45..eb520b49e3 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -201,8 +201,8 @@ extern unsigned long total_pages;
 
 extern uint64_t init_mm;
 
-/* Boot-time pagetable setup */
-extern void setup_pagetables(unsigned long boot_phys_offset);
+/* Boot-time memory mapping setup */
+extern void setup_mm_mappings(unsigned long boot_phys_offset);
 /* Map FDT in boot pagetable */
 extern void *early_fdt_map(paddr_t fdt_paddr);
 /* Remove early mappings */
diff --git a/xen/arch/arm/mmu/mm.c b/xen/arch/arm/mmu/mm.c
index 43c19fa914..d7d5bf7287 100644
--- a/xen/arch/arm/mmu/mm.c
+++ b/xen/arch/arm/mmu/mm.c
@@ -398,7 +398,7 @@
static void clear_table(void *table)
 
 /* Boot-time pagetable setup.
  * Changes here may need matching changes in head.S */
-void __init setup_pagetables(unsigned long boot_phys_offset)
+static void __init setup_pagetables(unsigned long boot_phys_offset)
 {
     uint64_t ttbr;
     lpae_t pte, *p;
@@ -470,6 +470,11 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 #endif
 }
 
+void __init setup_mm_mappings(unsigned long boot_phys_offset)
+{
+    setup_pagetables(boot_phys_offset);
+}
+
 static void clear_boot_pagetables(void)
 {
     /*
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index fb6bb721b1..e06a6e5810 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -20,6 +20,7 @@
  */
 
 #include
+#include
 #include
 #include
 
@@ -27,6 +28,35 @@
 pr_t __aligned(PAGE_SIZE) __section(".data.page_aligned")
     xen_mpumap[ARM_MAX_MPU_MEMORY_REGIONS];
 
+/* Maximum number of MPU memory regions supported by the EL2 MPU. */
+uint8_t __ro_after_init max_xen_mpumap;
+
+/*
+ * Bitmap xen_mpumap_mask records the usage of EL2 MPU memory regions.
+ * Bit 0 represents MPU memory region 0, bit 1 represents MPU memory
+ * region 1, ..., and so on.
+ * If an MPU memory region gets enabled, the corresponding bit is set to 1.
+ */
+static DECLARE_BITMAP(xen_mpumap_mask, ARM_MAX_MPU_MEMORY_REGIONS);
+
+void __init setup_mm_mappings(unsigned long boot_phys_offset)
+{
+    unsigned int nr_regions = REGION_UART_SEL, i = 0;
+
+    /*
+     * MPUIR_EL2.Region[0:7] identifies the number of regions supported by
+     * the EL2 MPU.
+     */
+    max_xen_mpumap = (uint8_t)(READ_SYSREG(MPUIR_EL2) & MPUIR_REGION_MASK);
+
+    /* Set the bits for the regions enabled in boot-time assembly.
     */
+#ifdef CONFIG_EARLY_PRINTK
+    nr_regions = REGION_UART_SEL + 1;
+#endif
+    for ( ; i < nr_regions; i++ )
+        set_bit(i, xen_mpumap_mask);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 6f8dd98d6b..f42b53d17b 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -781,7 +781,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Initialize traps early allow us to get backtrace when an error occurred */
     init_traps();
 
-    setup_pagetables(boot_phys_offset);
+    setup_mm_mappings(boot_phys_offset);
 
     smp_clear_cpu_maps();
 
--
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 28/52] xen/mpu: plumb virt/maddr conversion in MPU system
Date: Mon, 26 Jun 2023 11:34:19 +0800
Message-Id: <20230626033443.2943270-29-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

virt_to_maddr and maddr_to_virt are used widely in Xen code, so even though
there is no VMSA in the MPU system, we keep the same interfaces in MPU to
preserve a common code flow. The MPU version of the virt/maddr conversion
is simple: we just return the input address as the output, with a type
conversion.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- Fix typos
- Move the implementation from mm/mpu.h to mm.h, to share as much as
  possible with the MMU system.
---
 xen/arch/arm/include/asm/mm.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index eb520b49e3..ea4847c12b 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -267,13 +267,22 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 /* Page-align address and convert to frame number format */
 #define paddr_to_pfn_aligned(paddr) paddr_to_pfn(PAGE_ALIGN(paddr))
 
+#ifndef CONFIG_HAS_MPU
 static inline paddr_t __virt_to_maddr(vaddr_t va)
 {
     uint64_t par = va_to_par(va);
     return (par & PADDR_MASK & PAGE_MASK) | (va & ~PAGE_MASK);
 }
+#else
+static inline paddr_t __virt_to_maddr(vaddr_t va)
+{
+    return (paddr_t)va;
+}
+#endif /* CONFIG_HAS_MPU */
+
 #define virt_to_maddr(va)   __virt_to_maddr((vaddr_t)(va))
 
+#ifndef CONFIG_HAS_MPU
 #ifdef CONFIG_ARM_32
 static inline void *maddr_to_virt(paddr_t ma)
 {
@@ -292,6 +301,12 @@ static inline void *maddr_to_virt(paddr_t ma)
                                ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
 }
 #endif
+#else /* CONFIG_HAS_MPU */
+static inline void *maddr_to_virt(paddr_t ma)
+{
+    return (void *)(unsigned long)ma;
+}
+#endif
 
 /*
  * Translate a guest virtual address to a machine address.
--
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 29/52] xen/mpu: introduce a pair of helpers read_protection_region()/write_protection_region()
Date: Mon, 26 Jun 2023 11:34:20 +0800
Message-Id: <20230626033443.2943270-30-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

Each EL2 MPU protection region can be configured using PRBAR_EL2 and
PRLAR_EL2. This commit introduces a pair of helpers, read_protection_region()
and write_protection_region(), to read/write an EL2 MPU protection region.

As explained in section G1.3.18 of the reference manual for Armv8-R AArch64,
the system registers PRBAR<n>_EL2 and PRLAR<n>_EL2 provide access to the EL2
MPU region determined by the value of 'n' and PRSELR_EL2.REGION, as
PRSELR_EL2.REGION<7:4>:n (n = 0, 1, 2, ..., 15).
For example, to access regions 16 to 31:
- Set PRSELR_EL2 to 0b1xxxx
- Region 16 configuration is accessible through PRBAR0_EL2 and PRLAR0_EL2
- Region 17 configuration is accessible through PRBAR1_EL2 and PRLAR1_EL2
- Region 18 configuration is accessible through PRBAR2_EL2 and PRLAR2_EL2
- ...
- Region 31 configuration is accessible through PRBAR15_EL2 and PRLAR15_EL2

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- use WRITE_SYSREG()/READ_SYSREG() to avoid open-coding
- move the selection part outside of the macro, so it can sit outside of
  the switch and reduce the code generation
- introduce two helpers (one for the read operation, the other for the
  write operation); this makes the code a bit easier to read
- error out when the caller passes a number higher than 15
---
 xen/arch/arm/include/asm/arm64/sysregs.h |  32 +++++
 xen/arch/arm/mpu/mm.c                    | 173 +++++++++++++++++++++++
 2 files changed, 205 insertions(+)

diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index a249a660a8..c8a679afdd 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -467,9 +467,41 @@
 
 /* EL2 MPU Protection Region Base Address Register encode */
 #define PRBAR_EL2   S3_4_C6_C8_0
+#define PRBAR0_EL2  S3_4_C6_C8_0
+#define PRBAR1_EL2  S3_4_C6_C8_4
+#define PRBAR2_EL2  S3_4_C6_C9_0
+#define PRBAR3_EL2  S3_4_C6_C9_4
+#define PRBAR4_EL2  S3_4_C6_C10_0
+#define PRBAR5_EL2  S3_4_C6_C10_4
+#define PRBAR6_EL2  S3_4_C6_C11_0
+#define PRBAR7_EL2  S3_4_C6_C11_4
+#define PRBAR8_EL2  S3_4_C6_C12_0
+#define PRBAR9_EL2  S3_4_C6_C12_4
+#define PRBAR10_EL2 S3_4_C6_C13_0
+#define PRBAR11_EL2 S3_4_C6_C13_4
+#define PRBAR12_EL2 S3_4_C6_C14_0
+#define PRBAR13_EL2 S3_4_C6_C14_4
+#define PRBAR14_EL2 S3_4_C6_C15_0
+#define PRBAR15_EL2 S3_4_C6_C15_4
 
 /* EL2 MPU Protection Region Limit Address Register encode */
 #define PRLAR_EL2   S3_4_C6_C8_1
+#define PRLAR0_EL2  S3_4_C6_C8_1
+#define PRLAR1_EL2  S3_4_C6_C8_5
+#define PRLAR2_EL2  S3_4_C6_C9_1
+#define PRLAR3_EL2  S3_4_C6_C9_5
+#define PRLAR4_EL2  S3_4_C6_C10_1
+#define PRLAR5_EL2  S3_4_C6_C10_5
+#define PRLAR6_EL2  S3_4_C6_C11_1
+#define PRLAR7_EL2  S3_4_C6_C11_5
+#define PRLAR8_EL2  S3_4_C6_C12_1
+#define PRLAR9_EL2  S3_4_C6_C12_5
+#define PRLAR10_EL2 S3_4_C6_C13_1
+#define PRLAR11_EL2 S3_4_C6_C13_5
+#define PRLAR12_EL2 S3_4_C6_C14_1
+#define PRLAR13_EL2 S3_4_C6_C14_5
+#define PRLAR14_EL2 S3_4_C6_C15_1
+#define PRLAR15_EL2 S3_4_C6_C15_5
 
 /* MPU Protection Region Selection Register encode */
 #define PRSELR_EL2 S3_4_C6_C2_1
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index e06a6e5810..7b1b5d6e27 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -39,6 +39,23 @@ uint8_t __ro_after_init max_xen_mpumap;
  */
 static DECLARE_BITMAP(xen_mpumap_mask, ARM_MAX_MPU_MEMORY_REGIONS);
 
+/* Write an MPU protection region */
+#define WRITE_PROTECTION_REGION(pr, prbar_el2, prlar_el2) ({ \
+    const pr_t *_pr = pr;                                    \
+                                                             \
+    WRITE_SYSREG(_pr->prbar.bits, prbar_el2);                \
+    WRITE_SYSREG(_pr->prlar.bits, prlar_el2);                \
+})
+
+/* Read an MPU protection region */
+#define READ_PROTECTION_REGION(prbar_el2, prlar_el2) ({      \
+    pr_t _pr;                                                \
+                                                             \
+    _pr.prbar.bits = READ_SYSREG(prbar_el2);                 \
+    _pr.prlar.bits = READ_SYSREG(prlar_el2);                 \
+    _pr;                                                     \
+})
+
 void __init setup_mm_mappings(unsigned long boot_phys_offset)
 {
     unsigned int nr_regions = REGION_UART_SEL, i = 0;
@@ -57,6 +74,162 @@ void __init setup_mm_mappings(unsigned long boot_phys_offset)
         set_bit(i, xen_mpumap_mask);
 }
 
+/*
+ * Armv8-R AArch64 supports at most 255 MPU protection regions.
+ * See section G1.3.18 of the reference manual for Armv8-R AArch64:
+ * PRBAR<n>_EL2 and PRLAR<n>_EL2 provide access to the EL2 MPU region
+ * determined by the value of 'n' and PRSELR_EL2.REGION, as
+ * PRSELR_EL2.REGION<7:4>:n (n = 0, 1, 2, ..., 15).
+ * For example, to access regions 16 to 31 (0b10000 to 0b11111):
+ * - Set PRSELR_EL2 to 0b1xxxx
+ * - Region 16 configuration is accessible through PRBAR0_EL2 and PRLAR0_EL2
+ * - Region 17 configuration is accessible through PRBAR1_EL2 and PRLAR1_EL2
+ * - Region 18 configuration is accessible through PRBAR2_EL2 and PRLAR2_EL2
+ * - ...
+ * - Region 31 configuration is accessible through PRBAR15_EL2 and PRLAR15_EL2
+ */
+/*
+ * Read EL2 MPU Protection Region.
+ *
+ * @pr_read: MPU protection region returned by the read op.
+ * @sel: MPU protection region selector
+ */
+static void read_protection_region(pr_t *pr_read, uint8_t sel)
+{
+    /*
+     * Before accessing the EL2 MPU region registers PRBAR_EL2/PRLAR_EL2,
+     * make sure PRSELR_EL2 is set, as it determines which MPU region
+     * is selected.
+     */
+    WRITE_SYSREG(sel, PRSELR_EL2);
+    isb();
+
+    switch ( sel & 0x0f )
+    {
+    case 0:
+        *pr_read = READ_PROTECTION_REGION(PRBAR0_EL2, PRLAR0_EL2);
+        break;
+    case 1:
+        *pr_read = READ_PROTECTION_REGION(PRBAR1_EL2, PRLAR1_EL2);
+        break;
+    case 2:
+        *pr_read = READ_PROTECTION_REGION(PRBAR2_EL2, PRLAR2_EL2);
+        break;
+    case 3:
+        *pr_read = READ_PROTECTION_REGION(PRBAR3_EL2, PRLAR3_EL2);
+        break;
+    case 4:
+        *pr_read = READ_PROTECTION_REGION(PRBAR4_EL2, PRLAR4_EL2);
+        break;
+    case 5:
+        *pr_read = READ_PROTECTION_REGION(PRBAR5_EL2, PRLAR5_EL2);
+        break;
+    case 6:
+        *pr_read = READ_PROTECTION_REGION(PRBAR6_EL2, PRLAR6_EL2);
+        break;
+    case 7:
+        *pr_read = READ_PROTECTION_REGION(PRBAR7_EL2, PRLAR7_EL2);
+        break;
+    case 8:
+        *pr_read = READ_PROTECTION_REGION(PRBAR8_EL2, PRLAR8_EL2);
+        break;
+    case 9:
+        *pr_read = READ_PROTECTION_REGION(PRBAR9_EL2, PRLAR9_EL2);
+        break;
+    case 10:
+        *pr_read = READ_PROTECTION_REGION(PRBAR10_EL2, PRLAR10_EL2);
+        break;
+    case 11:
+        *pr_read = READ_PROTECTION_REGION(PRBAR11_EL2, PRLAR11_EL2);
+        break;
+    case 12:
+        *pr_read = READ_PROTECTION_REGION(PRBAR12_EL2, PRLAR12_EL2);
+        break;
+    case 13:
+        *pr_read = READ_PROTECTION_REGION(PRBAR13_EL2, PRLAR13_EL2);
+        break;
+    case 14:
+        *pr_read = READ_PROTECTION_REGION(PRBAR14_EL2, PRLAR14_EL2);
+        break;
+    case 15:
+        *pr_read = READ_PROTECTION_REGION(PRBAR15_EL2, PRLAR15_EL2);
+        break;
+    default:
+        panic("Unsupported selector %u\n", sel);
+    }
+}
+
+/*
+ * Write EL2 MPU Protection Region.
+ *
+ * @pr_write: const MPU protection region passed through the write op.
+ * @sel: MPU protection region selector
+ */
+static void write_protection_region(const pr_t *pr_write, uint8_t sel)
+{
+    /*
+     * Before accessing the EL2 MPU region registers PRBAR_EL2/PRLAR_EL2,
+     * make sure PRSELR_EL2 is set, as it determines which MPU region
+     * is selected.
+     */
+    WRITE_SYSREG(sel, PRSELR_EL2);
+    isb();
+
+    switch ( sel & 0x0f )
+    {
+    case 0:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR0_EL2, PRLAR0_EL2);
+        break;
+    case 1:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR1_EL2, PRLAR1_EL2);
+        break;
+    case 2:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR2_EL2, PRLAR2_EL2);
+        break;
+    case 3:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR3_EL2, PRLAR3_EL2);
+        break;
+    case 4:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR4_EL2, PRLAR4_EL2);
+        break;
+    case 5:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR5_EL2, PRLAR5_EL2);
+        break;
+    case 6:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR6_EL2, PRLAR6_EL2);
+        break;
+    case 7:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR7_EL2, PRLAR7_EL2);
+        break;
+    case 8:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR8_EL2, PRLAR8_EL2);
+        break;
+    case 9:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR9_EL2, PRLAR9_EL2);
+        break;
+    case 10:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR10_EL2, PRLAR10_EL2);
+        break;
+    case 11:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR11_EL2, PRLAR11_EL2);
+        break;
+    case 12:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR12_EL2, PRLAR12_EL2);
+        break;
+    case 13:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR13_EL2, PRLAR13_EL2);
+        break;
+    case 14:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR14_EL2, PRLAR14_EL2);
+        break;
+    case 15:
+        WRITE_PROTECTION_REGION(pr_write, PRBAR15_EL2, PRLAR15_EL2);
+        break;
+    default:
+        panic("Unsupported selector %u\n", sel);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
--
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 30/52] xen/mpu: populate a new region in Xen MPU mapping table
Date: Mon, 26 Jun 2023 11:34:21 +0800
Message-Id: <20230626033443.2943270-31-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

The new helper xen_mpumap_update() is responsible for updating the Xen MPU
memory mapping table (xen_mpumap): creating a new entry, and updating or
destroying an existing one. It is equivalent to xen_pt_update() in the MMU
system.

This commit only covers populating a new entry in the Xen MPU memory
mapping table (xen_mpumap). The other operations will be introduced in the
following commits.

When populating a new entry in the Xen MPU memory mapping table
(xen_mpumap), we first check whether the requested address range
[base, limit) is already mapped. If not, we find a free slot in xen_mpumap
to insert it, based on the bitmap xen_mpumap_mask, and use the standard
entry pr_of_xenaddr() to build up the MPU memory region structure (pr_t).

Finally, we set the memory attributes and permissions based on the
variable @flags.
To summarize all region attributes in one variable @flags, layout of the flags is elaborated as follows: [0:2] Memory attribute Index [3:4] Execute Never [5:6] Access Permission [7] Region Present Also, we provide a set of definitions(REGION_HYPERVISOR_RW, etc) that combi= ne the memory attribute and permission for common combinations. Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - implement pr_set_base/pr_set_limit/region_is_valid using static inline. - define index uint8_t to limit its size - stay the same major entry map_pages_to_xen, then go different path in different context(xen_pt_update in MMU, and xen_mpumap_update in MPU) --- xen/arch/arm/include/asm/arm64/mpu.h | 64 +++++++ xen/arch/arm/include/asm/mm.h | 3 + xen/arch/arm/include/asm/mpu/mm.h | 16 ++ xen/arch/arm/include/asm/page.h | 22 +++ xen/arch/arm/mm.c | 20 +++ xen/arch/arm/mmu/mm.c | 9 +- xen/arch/arm/mpu/mm.c | 255 +++++++++++++++++++++++++++ 7 files changed, 381 insertions(+), 8 deletions(-) create mode 100644 xen/arch/arm/include/asm/mpu/mm.h diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/as= m/arm64/mpu.h index 407fec66c9..a6b07bab02 100644 --- a/xen/arch/arm/include/asm/arm64/mpu.h +++ b/xen/arch/arm/include/asm/arm64/mpu.h @@ -6,6 +6,10 @@ #ifndef __ARM64_MPU_H__ #define __ARM64_MPU_H__ =20 +#define MPU_REGION_SHIFT 6 +#define MPU_REGION_ALIGN (_AC(1, UL) << MPU_REGION_SHIFT) +#define MPU_REGION_MASK (~(MPU_REGION_ALIGN - 1)) + /* * MPUIR_EL2.Region identifies the number of regions supported by the EL2 = MPU. * It is a 8-bit field, so 255 MPU memory regions at most. @@ -21,8 +25,33 @@ #define REGION_UART_SEL 0x07 #define MPUIR_REGION_MASK ((_AC(1, UL) << 8) - 1) =20 +/* Access permission attributes. */ +/* Read/Write at EL2, No Access at EL1/EL0. */ +#define AP_RW_EL2 0x0 +/* Read/Write at EL2/EL1/EL0 all levels. */ +#define AP_RW_ALL 0x1 +/* Read-only at EL2, No Access at EL1/EL0. */ +#define AP_RO_EL2 0x2 +/* Read-only at EL2/EL1/EL0 all levels. 
*/ +#define AP_RO_ALL 0x3 + +/* + * Execute never. + * Stage 1 EL2 translation regime. + * XN[1] determines whether execution of the instruction fetched from the = MPU + * memory region is permitted. + * Stage 2 EL1/EL0 translation regime. + * XN[0] determines whether execution of the instruction fetched from the = MPU + * memory region is permitted. + */ +#define XN_DISABLED 0x0 +#define XN_P2M_ENABLED 0x1 +#define XN_ENABLED 0x2 + #ifndef __ASSEMBLY__ =20 +#define INVALID_REGION_IDX 0xff + /* Protection Region Base Address Register */ typedef union { struct __packed { @@ -54,6 +83,41 @@ typedef struct { prlar_t prlar; } pr_t; =20 +/* Access to set base address of MPU protection region(pr_t). */ +static inline void pr_set_base(pr_t *pr, paddr_t base) +{ + pr->prbar.reg.base =3D (base >> MPU_REGION_SHIFT); +} + +/* Access to set limit address of MPU protection region(pr_t). */ +static inline void pr_set_limit(pr_t *pr, paddr_t limit) +{ + pr->prlar.reg.limit =3D (limit >> MPU_REGION_SHIFT); +} + +/* + * Access to get base address of MPU protection region(pr_t). + * The base address shall be zero extended. + */ +static inline paddr_t pr_get_base(pr_t *pr) +{ + return (paddr_t)(pr->prbar.reg.base << MPU_REGION_SHIFT); +} + +/* + * Access to get limit address of MPU protection region(pr_t). + * The limit address shall be concatenated with 0x3f. + */ +static inline paddr_t pr_get_limit(pr_t *pr) +{ + return (paddr_t)((pr->prlar.reg.limit << MPU_REGION_SHIFT) | ~MPU_REGI= ON_MASK); +} + +static inline bool region_is_valid(pr_t *pr) +{ + return pr->prlar.reg.en; +} + #endif /* __ASSEMBLY__ */ =20 #endif /* __ARM64_MPU_H__ */ diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h index ea4847c12b..daa6329505 100644 --- a/xen/arch/arm/include/asm/mm.h +++ b/xen/arch/arm/include/asm/mm.h @@ -16,6 +16,8 @@ =20 #ifdef CONFIG_HAS_MMU #include +#else +#include #endif =20 /* Align Xen to a 2 MiB boundary. 
*/ @@ -203,6 +205,7 @@ extern uint64_t init_mm; =20 /* Boot-time memory mapping setup */ extern void setup_mm_mappings(unsigned long boot_phys_offset); +extern bool flags_has_rwx(unsigned int flags); /* Map FDT in boot pagetable */ extern void *early_fdt_map(paddr_t fdt_paddr); /* Remove early mappings */ diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/m= pu/mm.h new file mode 100644 index 0000000000..eec572ecfc --- /dev/null +++ b/xen/arch/arm/include/asm/mpu/mm.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef __ARCH_ARM_MM_MPU__ +#define __ARCH_ARM_MM_MPU__ + +extern int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int fla= gs); + +#endif /* __ARCH_ARM_MM_MPU__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/pag= e.h index 3893303c8f..85ecd5e4de 100644 --- a/xen/arch/arm/include/asm/page.h +++ b/xen/arch/arm/include/asm/page.h @@ -62,6 +62,7 @@ =20 #define MAIRVAL (MAIR1VAL << 32 | MAIR0VAL) =20 +#ifndef CONFIG_HAS_MPU /* * Layout of the flags used for updating the hypervisor page tables * @@ -89,6 +90,27 @@ =20 #define _PAGE_CONTIG_BIT 8 #define _PAGE_CONTIG (1U << _PAGE_CONTIG_BIT) +#else +/* + * Layout of the flags used for updating MPU memory region attributes + * [0:2] Memory attribute Index + * [3:4] Execute Never + * [5:6] Access Permission + * [7] Region Present + */ +#define _PAGE_AI_BIT 0 +#define _PAGE_XN_BIT 3 +#define _PAGE_AP_BIT 5 +#define _PAGE_PRESENT_BIT 7 +#define _PAGE_AI (7U << _PAGE_AI_BIT) +#define _PAGE_XN (2U << _PAGE_XN_BIT) +#define _PAGE_RO (2U << _PAGE_AP_BIT) +#define _PAGE_PRESENT (1U << _PAGE_PRESENT_BIT) +#define PAGE_AI_MASK(x) (((x) >> _PAGE_AI_BIT) & 0x7U) +#define PAGE_XN_MASK(x) (((x) >> _PAGE_XN_BIT) & 0x3U) +#define PAGE_AP_MASK(x) (((x) >> _PAGE_AP_BIT) & 0x3U) +#define PAGE_RO_MASK(x) (((x) >> _PAGE_AP_BIT) & 
0x2U) +#endif /* CONFIG_HAS_MPU */ =20 /* * _PAGE_DEVICE and _PAGE_NORMAL are convenience defines. They are not diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index 97642f35d3..d35e7e280f 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -117,12 +117,32 @@ void *ioremap(paddr_t pa, size_t len) return ioremap_attr(pa, len, PAGE_HYPERVISOR_NOCACHE); } =20 +bool flags_has_rwx(unsigned int flags) +{ + /* + * The hardware was configured to forbid mapping both writeable and + * executable. + * When modifying/creating mapping (i.e _PAGE_PRESENT is set), + * prevent any update if this happen. + */ + if ( (flags & _PAGE_PRESENT) && !PAGE_RO_MASK(flags) && + !PAGE_XN_MASK(flags) ) + return true; + else + return false; +} + int map_pages_to_xen(unsigned long virt, mfn_t mfn, unsigned long nr_mfns, unsigned int flags) { +#ifndef CONFIG_HAS_MPU return xen_pt_update(virt, mfn, nr_mfns, flags); +#else + return xen_mpumap_update(mfn_to_maddr(mfn), + mfn_to_maddr(mfn_add(mfn, nr_mfns)), flags); +#endif } =20 int destroy_xen_mappings(unsigned long s, unsigned long e) diff --git a/xen/arch/arm/mmu/mm.c b/xen/arch/arm/mmu/mm.c index d7d5bf7287..2f29cb53fe 100644 --- a/xen/arch/arm/mmu/mm.c +++ b/xen/arch/arm/mmu/mm.c @@ -1037,14 +1037,7 @@ int xen_pt_update(unsigned long virt, mfn_t mfn, */ const mfn_t root =3D maddr_to_mfn(READ_SYSREG64(TTBR0_EL2)); =20 - /* - * The hardware was configured to forbid mapping both writeable and - * executable. - * When modifying/creating mapping (i.e _PAGE_PRESENT is set), - * prevent any update if this happen. 
- */ - if ( (flags & _PAGE_PRESENT) && !PAGE_RO_MASK(flags) && - !PAGE_XN_MASK(flags) ) + if ( flags_has_rwx(flags) ) { mm_printk("Mappings should not be both Writeable and Executable.\n= "); return -EINVAL; diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c index 7b1b5d6e27..14a1309ca1 100644 --- a/xen/arch/arm/mpu/mm.c +++ b/xen/arch/arm/mpu/mm.c @@ -23,6 +23,19 @@ #include #include #include +#include + +#ifdef NDEBUG +static inline void __attribute__ ((__format__ (__printf__, 1, 2))) +region_printk(const char *fmt, ...) {} +#else +#define region_printk(fmt, args...) \ + do \ + { \ + dprintk(XENLOG_ERR, fmt, ## args); \ + WARN(); \ + } while (0) +#endif =20 /* EL2 Xen MPU memory region mapping table. */ pr_t __aligned(PAGE_SIZE) __section(".data.page_aligned") @@ -39,6 +52,10 @@ uint8_t __ro_after_init max_xen_mpumap; */ static DECLARE_BITMAP(xen_mpumap_mask, ARM_MAX_MPU_MEMORY_REGIONS); =20 +static DEFINE_SPINLOCK(xen_mpumap_lock); + +static DEFINE_SPINLOCK(xen_mpumap_alloc_lock); + /* Write a MPU protection region */ #define WRITE_PROTECTION_REGION(pr, prbar_el2, prlar_el2) ({ \ const pr_t *_pr =3D pr; \ @@ -230,6 +247,244 @@ static void write_protection_region(const pr_t *pr_wr= ite, uint8_t sel) } } =20 +/* + * Standard entry for building up the structure of MPU memory region(pr_t). + * It is equivalent to mfn_to_xen_entry in MMU system. + * base and limit both refer to inclusive address. + */ +static inline pr_t pr_of_xenaddr(paddr_t base, paddr_t limit, unsigned att= r) +{ + prbar_t prbar; + prlar_t prlar; + pr_t region; + + /* Build up value for PRBAR_EL2. */ + prbar =3D (prbar_t) { + .reg =3D { + .ap =3D AP_RW_EL2, /* Read/Write at EL2, no access at EL1/EL0= . 
*/ + .xn =3D XN_ENABLED, /* No need to execute outside .text */ + }}; + + switch ( attr ) + { + case MT_NORMAL_NC: + /* + * ARM ARM: Overlaying the shareability attribute (DDI + * 0406C.b B3-1376 to 1377) + * + * A memory region with a resultant memory type attribute of norma= l, + * and a resultant cacheability attribute of Inner non-cacheable, + * outer non-cacheable, must have a resultant shareability attribu= te + * of outer shareable, otherwise shareability is UNPREDICTABLE. + * + * On ARMv8 shareability is ignored and explicitly treated as outer + * shareable for normal inner non-cacheable, outer non-cacheable. + */ + prbar.reg.sh =3D LPAE_SH_OUTER; + break; + case MT_DEVICE_nGnRnE: + case MT_DEVICE_nGnRE: + /* + * Shareability is ignored for non-normal memory, Outer is as + * good as anything. + * + * On ARMv8 shareability is ignored and explicitly treated as outer + * shareable for any device memory type. + */ + prbar.reg.sh =3D LPAE_SH_OUTER; + break; + default: + /* Xen mappings are SMP coherent */ + prbar.reg.sh =3D LPAE_SH_INNER; + break; + } + + /* Build up value for PRLAR_EL2. */ + prlar =3D (prlar_t) { + .reg =3D { + .ns =3D 0, /* Hyp mode is in secure world */ + .ai =3D attr, + .en =3D 1, /* Region enabled */ + }}; + + /* Build up MPU memory region. */ + region =3D (pr_t) { + .prbar =3D prbar, + .prlar =3D prlar, + }; + + /* Set base address and limit address. */ + pr_set_base(&region, base); + pr_set_limit(&region, limit); + + return region; +} + +/* + * Allocate a new free EL2 MPU memory region, based on bitmap xen_mpumap_m= ask. + * On success, the associated index will be filled in. + * On failure, the non-zero value -ENOENT will be returned. 
+ */ +static int xen_mpumap_alloc_entry(uint8_t *idx) +{ + int rc =3D 0; + + spin_lock(&xen_mpumap_alloc_lock); + + *idx =3D find_first_zero_bit(xen_mpumap_mask, max_xen_mpumap); + if ( *idx =3D=3D max_xen_mpumap ) + { + rc =3D -ENOENT; + printk(XENLOG_ERR "mpu: EL2 MPU memory region mapping pool exhaust= ed\n"); + goto out; + } + + set_bit(*idx, xen_mpumap_mask); + +out: + spin_unlock(&xen_mpumap_alloc_lock); + return rc; +} + +#define MPUMAP_REGION_FAILED 0 +#define MPUMAP_REGION_FOUND 1 +#define MPUMAP_REGION_INCLUSIVE 2 +#define MPUMAP_REGION_OVERLAP 3 +/* + * Check whether memory range [base, limit] is mapped in MPU memory region + * mapping table #table. Only the address range is checked; memory attributes + * and permissions are not considered here. + * If a match is found, the associated index will be filled in. + * If the entry is not present, INVALID_REGION_IDX will be set in #index and + * a specific non-zero error code will be returned. + * + * Make sure that parameters #base and #limit both refer to + * inclusive addresses. + * + * Return values: + * MPUMAP_REGION_FAILED: no mapping and no overlapping + * MPUMAP_REGION_FOUND: find an exact match in #table + * MPUMAP_REGION_INCLUSIVE: find an inclusive match in #table + * MPUMAP_REGION_OVERLAP: overlap with the existing mapping + */ +static int mpumap_contain_region(pr_t *table, uint8_t nr_regions, + paddr_t base, paddr_t limit, uint8_t *ind= ex) +{ + uint8_t i =3D 0, _index =3D INVALID_REGION_IDX; + + /* Allow index to be NULL */ + index =3D index ? 
: &_index; + + if ( limit < base ) + { + region_printk("Base address 0x%"PRIpaddr" must be smaller than lim= it address 0x%"PRIpaddr"\n", + base, limit); + return -EINVAL; + } + + for ( ; i < nr_regions; i++ ) + { + paddr_t iter_base =3D pr_get_base(&table[i]); + paddr_t iter_limit =3D pr_get_limit(&table[i]); + + /* Found an exact valid match */ + if ( (iter_base =3D=3D base) && (iter_limit =3D=3D limit) && + region_is_valid(&table[i]) ) + { + *index =3D i; + return MPUMAP_REGION_FOUND; + } + + /* No overlapping */ + if ( (iter_limit < base) || (iter_base > limit) ) + continue; + /* Inclusive and valid */ + else if ( (base >=3D iter_base) && (limit <=3D iter_limit) && + region_is_valid(&table[i]) ) + { + *index =3D i; + return MPUMAP_REGION_INCLUSIVE; + } + else + { + region_printk("Range 0x%"PRIpaddr" - 0x%"PRIpaddr" overlaps wi= th the existing region 0x%"PRIpaddr" - 0x%"PRIpaddr"\n", + base, limit + 1, iter_base, iter_limit + 1); + return MPUMAP_REGION_OVERLAP; + } + } + + return MPUMAP_REGION_FAILED; +} + +/* + * Update an entry in Xen MPU memory region mapping table(xen_mpumap) at + * the index @idx. + * @base: base address(inclusive) + * @limit: limit address(exclusive) + * @flags: region attributes, should be the combination of PAGE_HYPERVISOR= _xx + */ +static int xen_mpumap_update_entry(paddr_t base, paddr_t limit, + unsigned int flags) +{ + uint8_t idx; + int rc; + + rc =3D mpumap_contain_region(xen_mpumap, max_xen_mpumap, base, limit -= 1, + &idx); + if ( (rc < 0) || (rc =3D=3D MPUMAP_REGION_OVERLAP) ) + return -EINVAL; + + /* We are inserting a mapping =3D> Create new region. 
*/ + if ( flags & _PAGE_PRESENT ) + { + if ( rc !=3D MPUMAP_REGION_FAILED ) + return -EINVAL; + + rc =3D xen_mpumap_alloc_entry(&idx); + if ( rc ) + return -ENOENT; + + xen_mpumap[idx] =3D pr_of_xenaddr(base, limit - 1, PAGE_AI_MASK(fl= ags)); + /* Set permission */ + xen_mpumap[idx].prbar.reg.ap =3D PAGE_AP_MASK(flags); + xen_mpumap[idx].prbar.reg.xn =3D PAGE_XN_MASK(flags); + + write_protection_region((const pr_t*)(&xen_mpumap[idx]), idx); + } + + return 0; +} + +/* + * It is equivalent to xen_pt_update in MMU system. + * base refers to inclusive address and limit refers to exclusive address. + */ +int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags) +{ + int rc; + + if ( flags_has_rwx(flags) ) + { + region_printk("Mappings should not be both Writeable and Executabl= e\n"); + return -EINVAL; + } + + if ( !IS_ALIGNED(base, PAGE_SIZE) || !IS_ALIGNED(limit, PAGE_SIZE) ) + { + region_printk("base address 0x%"PRIpaddr", or limit address 0x%"PR= Ipaddr" is not page aligned\n", + base, limit); + return -EINVAL; + } + + spin_lock(&xen_mpumap_lock); + + rc =3D xen_mpumap_update_entry(base, limit, flags); + + spin_unlock(&xen_mpumap_lock); + + return rc; +} + /* * Local variables: * mode: C --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750885254273.4033804827062; Sun, 25 Jun 2023 20:41:25 -0700 (PDT) Received: from list by 
lists.xenproject.org with outflank-mailman.555101.866885 (Exim 4.92) (envelope-from ) id 1qDd5s-0005J1-EM; Mon, 26 Jun 2023 03:40:48 +0000 Received: by outflank-mailman (output) from mailman id 555101.866885; Mon, 26 Jun 2023 03:40:48 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5r-0005Cr-OS; Mon, 26 Jun 2023 03:40:47 +0000 Received: by outflank-mailman (input) for mailman id 555101; Mon, 26 Jun 2023 03:40:44 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd1z-0000HH-Su for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:36:47 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id acb2b9d8-13d2-11ee-8611-37d641c3527e; Mon, 26 Jun 2023 05:36:45 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5FEB22F4; Sun, 25 Jun 2023 20:37:29 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B83B73F64C; Sun, 25 Jun 2023 20:36:42 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: acb2b9d8-13d2-11ee-8611-37d641c3527e From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 31/52] xen/mpu: make early_fdt_map support in MPU systems Date: Mon, 26 Jun 2023 11:34:22 +0800 Message-Id: 
<20230626033443.2943270-32-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750886212100001 Content-Type: text/plain; charset="utf-8" In an MPU system, MPU memory regions are always mapped page-aligned, so in order not to access unexpected memory areas, the dtb section in xen.lds.S should be made page-aligned too. We add ". =3D ALIGN(PAGE_SIZE);" at the head of the dtb section to make this happen. In this commit, we map the early FDT with a transient MPU memory region, as it will be relocated into the heap and unmapped at the end of boot. Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - map the first 2MB. Check the size and then re-map with an extra 2MB if needed --- xen/arch/arm/include/asm/arm64/mpu.h | 3 ++- xen/arch/arm/include/asm/page.h | 5 +++++ xen/arch/arm/mm.c | 26 ++++++++++++++++++++------ xen/arch/arm/mpu/mm.c | 1 + xen/arch/arm/xen.lds.S | 5 ++++- 5 files changed, 32 insertions(+), 8 deletions(-) diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/as= m/arm64/mpu.h index a6b07bab02..715ea69884 100644 --- a/xen/arch/arm/include/asm/arm64/mpu.h +++ b/xen/arch/arm/include/asm/arm64/mpu.h @@ -72,7 +72,8 @@ typedef union { unsigned long ns:1; /* Not-Secure */ unsigned long res:1; /* Reserved 0 by hardware */ unsigned long limit:42; /* Limit Address */ - unsigned long pad:16; + unsigned long pad:15; + unsigned long tran:1; /* Transient region */ } reg; uint64_t bits; } prlar_t; diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/pag= e.h index 85ecd5e4de..a434e2205a 100644 --- a/xen/arch/arm/include/asm/page.h +++ b/xen/arch/arm/include/asm/page.h @@ -97,19 +97,24 @@ * [3:4] Execute Never * [5:6] Access Permission * [7] Region Present + * [8] Transient Region, e.g. 
MPU memory region is temporarily + mapped for a short time */ #define _PAGE_AI_BIT 0 #define _PAGE_XN_BIT 3 #define _PAGE_AP_BIT 5 #define _PAGE_PRESENT_BIT 7 +#define _PAGE_TRANSIENT_BIT 8 #define _PAGE_AI (7U << _PAGE_AI_BIT) #define _PAGE_XN (2U << _PAGE_XN_BIT) #define _PAGE_RO (2U << _PAGE_AP_BIT) #define _PAGE_PRESENT (1U << _PAGE_PRESENT_BIT) +#define _PAGE_TRANSIENT (1U << _PAGE_TRANSIENT_BIT) #define PAGE_AI_MASK(x) (((x) >> _PAGE_AI_BIT) & 0x7U) #define PAGE_XN_MASK(x) (((x) >> _PAGE_XN_BIT) & 0x3U) #define PAGE_AP_MASK(x) (((x) >> _PAGE_AP_BIT) & 0x3U) #define PAGE_RO_MASK(x) (((x) >> _PAGE_AP_BIT) & 0x2U) +#define PAGE_TRANSIENT_MASK(x) (((x) >> _PAGE_TRANSIENT_BIT) & 0x1U) #endif /* CONFIG_HAS_MPU */ =20 /* diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index d35e7e280f..8625066256 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -61,8 +61,17 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icac= he) =20 void * __init early_fdt_map(paddr_t fdt_paddr) { +#ifndef CONFIG_HAS_MPU /* We are using 2MB superpage for mapping the FDT */ paddr_t base_paddr =3D fdt_paddr & SECOND_MASK; + unsigned int flags =3D PAGE_HYPERVISOR_RO | _PAGE_BLOCK; + unsigned long base_virt =3D BOOT_FDT_VIRT_START; +#else + /* MPU region must be PAGE aligned */ + paddr_t base_paddr =3D fdt_paddr & PAGE_MASK; + unsigned int flags =3D PAGE_HYPERVISOR_RO | _PAGE_TRANSIENT; + unsigned long base_virt =3D ~0UL; +#endif paddr_t offset; void *fdt_virt; uint32_t size; @@ -79,18 +88,24 @@ void * __init early_fdt_map(paddr_t fdt_paddr) if ( !fdt_paddr || fdt_paddr % MIN_FDT_ALIGN ) return NULL; =20 +#ifndef CONFIG_HAS_MPU /* The FDT is mapped using 2MB superpage */ BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M); +#endif =20 - rc =3D map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr), - SZ_2M >> PAGE_SHIFT, - PAGE_HYPERVISOR_RO | _PAGE_BLOCK); + rc =3D map_pages_to_xen(base_virt, maddr_to_mfn(base_paddr), + SZ_2M >> PAGE_SHIFT, flags); if ( rc ) panic("Unable to map 
the device-tree.\n"); =20 =20 +#ifndef CONFIG_HAS_MPU offset =3D fdt_paddr % SECOND_SIZE; fdt_virt =3D (void *)BOOT_FDT_VIRT_START + offset; +#else + offset =3D fdt_paddr % PAGE_SIZE; + fdt_virt =3D (void *)fdt_paddr; +#endif =20 if ( fdt_magic(fdt_virt) !=3D FDT_MAGIC ) return NULL; @@ -101,10 +116,9 @@ void * __init early_fdt_map(paddr_t fdt_paddr) =20 if ( (offset + size) > SZ_2M ) { - rc =3D map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M, + rc =3D map_pages_to_xen(base_virt + SZ_2M, maddr_to_mfn(base_paddr + SZ_2M), - SZ_2M >> PAGE_SHIFT, - PAGE_HYPERVISOR_RO | _PAGE_BLOCK); + SZ_2M >> PAGE_SHIFT, flags); if ( rc ) panic("Unable to map the device-tree\n"); } diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c index 14a1309ca1..f4ce19d36a 100644 --- a/xen/arch/arm/mpu/mm.c +++ b/xen/arch/arm/mpu/mm.c @@ -448,6 +448,7 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_= t limit, /* Set permission */ xen_mpumap[idx].prbar.reg.ap =3D PAGE_AP_MASK(flags); xen_mpumap[idx].prbar.reg.xn =3D PAGE_XN_MASK(flags); + xen_mpumap[idx].prlar.reg.tran =3D PAGE_TRANSIENT_MASK(flags); =20 write_protection_region((const pr_t*)(&xen_mpumap[idx]), idx); } diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S index 4f7daa7dca..f2715d7cb7 100644 --- a/xen/arch/arm/xen.lds.S +++ b/xen/arch/arm/xen.lds.S @@ -216,7 +216,10 @@ SECTIONS _end =3D . ; =20 /* Section for the device tree blob (if any). */ - .dtb : { *(.dtb) } :text + .dtb : { + . 
=3D ALIGN(PAGE_SIZE); + *(.dtb) + } :text =20 DWARF2_DEBUG_SECTIONS =20 --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750888548382.12956133821865; Sun, 25 Jun 2023 20:41:28 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555110.866923 (Exim 4.92) (envelope-from ) id 1qDd5z-0007B1-Vv; Mon, 26 Jun 2023 03:40:55 +0000 Received: by outflank-mailman (output) from mailman id 555110.866923; Mon, 26 Jun 2023 03:40:55 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5z-00074p-3s; Mon, 26 Jun 2023 03:40:55 +0000 Received: by outflank-mailman (input) for mailman id 555110; Mon, 26 Jun 2023 03:40:51 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd21-0007ej-Q9 for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:36:49 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id ae818a65-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:36:48 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7A7131FB; Sun, 25 Jun 2023 20:37:32 -0700 (PDT) Received: from 
a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id D0CD03F64C; Sun, 25 Jun 2023 20:36:45 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: ae818a65-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 32/52] xen/mpu: implement MPU version of setup_mm in mpu/setup.c Date: Mon, 26 Jun 2023 11:34:23 +0800 Message-Id: <20230626033443.2943270-33-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750890212100001 Content-Type: text/plain; charset="utf-8" In an MPU system, resources such as the Xen heap must be statically configured to meet the requirements of a static system with expected behavior. Therefore, in the MPU version of setup_mm, we introduce setup_staticheap_mappings to map a fixed MPU memory region for the static Xen heap. 
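The static-heap banks mapped by setup_staticheap_mappings are first rounded to page granularity (bank start rounded up, bank size rounded down) before being handed to xen_mpumap_update(). A minimal sketch of that rounding, assuming a 4 KiB PAGE_SIZE; the helper names mirror Xen's round_pgup()/round_pgdown() but this is an illustrative re-implementation, not the Xen code:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed 4 KiB page granularity for illustration. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (UINT64_C(1) << PAGE_SHIFT)

/* Round an address up to the next page boundary (no-op if aligned). */
static inline uint64_t round_pgup(uint64_t addr)
{
    return (addr + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/* Round an address or size down to a page boundary. */
static inline uint64_t round_pgdown(uint64_t addr)
{
    return addr & ~(PAGE_SIZE - 1);
}
```

With these helpers, a bank is mapped as [round_pgup(start), round_pgup(start) + round_pgdown(size)), so a partially aligned bank shrinks to its page-aligned interior rather than spilling outside its declared range.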
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - move the changes to mpu/setup.c --- xen/arch/arm/Makefile | 1 + xen/arch/arm/include/asm/mpu/mm.h | 1 + xen/arch/arm/mpu/mm.c | 27 ++++++++++++ xen/arch/arm/mpu/setup.c | 70 +++++++++++++++++++++++++++++++ 4 files changed, 99 insertions(+) create mode 100644 xen/arch/arm/mpu/setup.c diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index 3bd193ee32..5f6ee817ad 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -42,6 +42,7 @@ obj-y +=3D mmu/setup.o obj-y +=3D mmu/p2m.o else obj-y +=3D mpu/mm.o +obj-y +=3D mpu/setup.o endif obj-y +=3D mm.o obj-y +=3D monitor.o diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/m= pu/mm.h index eec572ecfc..e26bd4f975 100644 --- a/xen/arch/arm/include/asm/mpu/mm.h +++ b/xen/arch/arm/include/asm/mpu/mm.h @@ -3,6 +3,7 @@ #define __ARCH_ARM_MM_MPU__ =20 extern int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int fla= gs); +extern void setup_staticheap_mappings(void); =20 #endif /* __ARCH_ARM_MM_MPU__ */ =20 diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c index f4ce19d36a..7bd5609102 100644 --- a/xen/arch/arm/mpu/mm.c +++ b/xen/arch/arm/mpu/mm.c @@ -22,8 +22,10 @@ #include #include #include +#include #include #include +#include =20 #ifdef NDEBUG static inline void __attribute__ ((__format__ (__printf__, 1, 2))) @@ -486,6 +488,31 @@ int xen_mpumap_update(paddr_t base, paddr_t limit, uns= igned int flags) return rc; } =20 +/* + * Heap must be statically configured in Device Tree through + * "xen,static-heap" in MPU system. 
+ */ +void __init setup_staticheap_mappings(void) +{ + unsigned int bank =3D 0; + + for ( ; bank < bootinfo.reserved_mem.nr_banks; bank++ ) + { + if ( bootinfo.reserved_mem.bank[bank].type =3D=3D MEMBANK_STATIC_H= EAP ) + { + paddr_t bank_start =3D round_pgup( + bootinfo.reserved_mem.bank[bank].start); + paddr_t bank_size =3D round_pgdown( + bootinfo.reserved_mem.bank[bank].size); + paddr_t bank_end =3D bank_start + bank_size; + + /* Map static heap with fixed MPU memory region */ + if ( xen_mpumap_update(bank_start, bank_end, PAGE_HYPERVISOR) ) + panic("mpu: failed to map static heap\n"); + } + } +} + /* * Local variables: * mode: C diff --git a/xen/arch/arm/mpu/setup.c b/xen/arch/arm/mpu/setup.c new file mode 100644 index 0000000000..31f412957c --- /dev/null +++ b/xen/arch/arm/mpu/setup.c @@ -0,0 +1,70 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * xen/arch/arm/mpu/setup.c + * + * Early bringup code for an Armv8-R with virt extensions. + * + * Copyright (C) 2023 Arm Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see . 
+ */ + +#include +#include +#include +#include +#include +#include + +void __init setup_mm(void) +{ + paddr_t ram_start =3D ~0, ram_end =3D 0, ram_size =3D 0; + unsigned int bank; + + if ( !bootinfo.mem.nr_banks ) + panic("No memory bank\n"); + + init_pdx(); + + populate_boot_allocator(); + + total_pages =3D 0; + for ( bank =3D 0 ; bank < bootinfo.mem.nr_banks; bank++ ) + { + paddr_t bank_start =3D round_pgup(bootinfo.mem.bank[bank].start); + paddr_t bank_size =3D bootinfo.mem.bank[bank].size; + paddr_t bank_end =3D round_pgdown(bank_start + bank_size); + + ram_size =3D ram_size + bank_size; + ram_start =3D min(ram_start, bank_start); + ram_end =3D max(ram_end, bank_end); + } + + setup_staticheap_mappings(); + + total_pages +=3D ram_size >> PAGE_SHIFT; + max_page =3D PFN_DOWN(ram_end); + + setup_frametable_mappings(ram_start, ram_end); + + init_staticmem_pages(); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750800317613.3359893496976; Sun, 25 Jun 2023 20:40:00 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554990.866521 (Exim 4.92) (envelope-from ) id 1qDd4k-0004gi-3e; Mon, 26 Jun 2023 03:39:38 +0000 Received: by outflank-mailman (output) from mailman id 554990.866521; Mon, 26 Jun 2023 
03:39:38 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd4k-0004gb-0h; Mon, 26 Jun 2023 03:39:38 +0000 Received: by outflank-mailman (input) for mailman id 554990; Mon, 26 Jun 2023 03:39:36 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd24-0007ej-Vx for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:36:52 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id b05c4b7f-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:36:52 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 960CC1FB; Sun, 25 Jun 2023 20:37:35 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id EEBF23F64C; Sun, 25 Jun 2023 20:36:48 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: b05c4b7f-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 33/52] xen/mpu: initialize frametable in MPU system Date: Mon, 26 Jun 2023 11:34:24 +0800 Message-Id: <20230626033443.2943270-34-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable 
Xen uses the page as the smallest granularity for memory management, and we
want to follow the same concept in the MPU system. That is, the page_info
structure and the frametable, which stores and manages the smallest memory
management units, are required in the MPU system as well.

In the MPU system, we cannot map the frametable at a fixed virtual address
(FRAMETABLE_VIRT_START) as the MMU system does, since everything is mapped
1:1. Instead, we define a variable "struct page_info *frame_table" as the
frametable pointer, and ask the boot allocator to allocate appropriate
memory for the frametable.

Once the frametable is initialized, the conversions between machine frame
number/machine address/"virtual address" and the page-info structure are
ready too, e.g. mfn_to_page/maddr_to_page/virt_to_page.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- add ASSERT() to confirm the MFN you pass is covered by the frametable.
---
 xen/arch/arm/include/asm/mm.h     | 14 ++++++++++++++
 xen/arch/arm/include/asm/mpu/mm.h |  3 +++
 xen/arch/arm/mpu/mm.c             | 27 +++++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index daa6329505..66d98b9a29 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -341,6 +341,19 @@ static inline uint64_t gvirt_to_maddr(vaddr_t va, paddr_t *pa,
 #define virt_to_mfn(va)     __virt_to_mfn(va)
 #define mfn_to_virt(mfn)    __mfn_to_virt(mfn)
 
+#ifdef CONFIG_HAS_MPU
+/* Convert a virtual address to a page-info structure. */
+static inline struct page_info *virt_to_page(const void *v)
+{
+    unsigned long pdx;
+
+    pdx = paddr_to_pdx(virt_to_maddr(v));
+    ASSERT(pdx >= frametable_base_pdx);
+    ASSERT(pdx < frametable_pdx_end);
+
+    return frame_table + pdx - frametable_base_pdx;
+}
+#else
 /* Convert between Xen-heap virtual addresses and page-info structures. */
 static inline struct page_info *virt_to_page(const void *v)
 {
@@ -354,6 +367,7 @@ static inline struct page_info *virt_to_page(const void *v)
     pdx += mfn_to_pdx(directmap_mfn_start);
     return frame_table + pdx - frametable_base_pdx;
 }
+#endif
 
 static inline void *page_to_virt(const struct page_info *pg)
 {
diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
index e26bd4f975..98f6df65b8 100644
--- a/xen/arch/arm/include/asm/mpu/mm.h
+++ b/xen/arch/arm/include/asm/mpu/mm.h
@@ -2,6 +2,9 @@
 #ifndef __ARCH_ARM_MM_MPU__
 #define __ARCH_ARM_MM_MPU__
 
+extern struct page_info *frame_table;
+extern unsigned long frametable_pdx_end;
+
 extern int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags);
 extern void setup_staticheap_mappings(void);
 
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 7bd5609102..0a65b58dc4 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -27,6 +27,10 @@
 #include
 #include
 
+/* Override macros from asm/mm.h to make them work with mfn_t */
+#undef mfn_to_virt
+#define mfn_to_virt(mfn)    __mfn_to_virt(mfn_x(mfn))
+
 #ifdef NDEBUG
 static inline void __attribute__ ((__format__ (__printf__, 1, 2)))
 region_printk(const char *fmt, ...)
 {}
@@ -58,6 +62,9 @@ static DEFINE_SPINLOCK(xen_mpumap_lock);
 
 static DEFINE_SPINLOCK(xen_mpumap_alloc_lock);
 
+struct page_info *frame_table;
+unsigned long frametable_pdx_end __read_mostly;
+
 /* Write a MPU protection region */
 #define WRITE_PROTECTION_REGION(pr, prbar_el2, prlar_el2) ({        \
     const pr_t *_pr = pr;                                           \
@@ -513,6 +520,26 @@ void __init setup_staticheap_mappings(void)
     }
 }
 
+/* Map a frame table to cover physical addresses ps through pe */
+void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
+{
+    mfn_t base_mfn;
+    unsigned long nr_pdxs = mfn_to_pdx(mfn_add(maddr_to_mfn(pe), -1)) -
+                            mfn_to_pdx(maddr_to_mfn(ps)) + 1;
+    unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
+
+    frametable_base_pdx = paddr_to_pdx(ps);
+    frametable_size = ROUNDUP(frametable_size, PAGE_SIZE);
+    frametable_pdx_end = frametable_base_pdx + nr_pdxs;
+
+    base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 1);
+    frame_table = (struct page_info *)mfn_to_virt(base_mfn);
+
+    memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
+    memset(&frame_table[nr_pdxs], -1,
+           frametable_size - (nr_pdxs * sizeof(struct page_info)));
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1
From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 34/52] xen/mpu: destroy an existing entry in Xen MPU
 memory mapping table
Date: Mon, 26 Jun 2023 11:34:25 +0800
Message-Id: <20230626033443.2943270-35-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

This commit expands xen_mpumap_update/xen_mpumap_update_entry to include
destroying an existing entry.

We define a new helper, control_mpu_region_from_index(), to enable/disable
an MPU region based on its index. If the index is within [0, 31], we can
quickly disable the MPU region through PRENR_EL2, which provides direct
access to the PRLAR_EL2.EN bits of EL2 MPU regions.

Right now, we only support destroying a *WHOLE* MPU memory region;
removing part of a region is not supported, as in the worst case it would
leave two fragments behind.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- make pr_get_base()/pr_get_limit() static inline
- need an isb to ensure register write visible before zeroing the entry
---
 xen/arch/arm/include/asm/arm64/mpu.h     |  2 +
 xen/arch/arm/include/asm/arm64/sysregs.h |  3 +
 xen/arch/arm/mm.c                        |  5 ++
 xen/arch/arm/mpu/mm.c                    | 74 ++++++++++++++++++++++++
 4 files changed, 84 insertions(+)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index 715ea69884..aee7947223 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -25,6 +25,8 @@
 #define REGION_UART_SEL    0x07
 #define MPUIR_REGION_MASK  ((_AC(1, UL) << 8) - 1)
 
+#define MPU_PRENR_BITS    32
+
 /* Access permission attributes. */
 /* Read/Write at EL2, No Access at EL1/EL0. */
 #define AP_RW_EL2 0x0
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index c8a679afdd..96c025053b 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -509,6 +509,9 @@
 /* MPU Type registers encode */
 #define MPUIR_EL2 S3_4_C0_C0_4
 
+/* MPU Protection Region Enable Register encode */
+#define PRENR_EL2 S3_4_C6_C1_1
+
 #endif
 
 /* Access to system registers */
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 8625066256..247d17cfa1 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -164,7 +164,12 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
     ASSERT(IS_ALIGNED(s, PAGE_SIZE));
     ASSERT(IS_ALIGNED(e, PAGE_SIZE));
     ASSERT(s <= e);
+#ifndef CONFIG_HAS_MPU
     return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, 0);
+#else
+    return xen_mpumap_update(virt_to_maddr((void *)s),
+                             virt_to_maddr((void *)e), 0);
+#endif
 }
 
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 0a65b58dc4..a40055ae5e 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -425,6 +425,59 @@ static int mpumap_contain_region(pr_t *table, uint8_t nr_regions,
     return MPUMAP_REGION_FAILED;
 }
 
+/* Disable or enable EL2 MPU memory region at index #index */
+static void control_mpu_region_from_index(uint8_t index, bool enable)
+{
+    pr_t region;
+
+    read_protection_region(&region, index);
+    if ( !region_is_valid(&region) ^ enable )
+    {
+        printk(XENLOG_WARNING
+               "mpu: MPU memory region[%u] is already %s\n", index,
+               enable ? "enabled" : "disabled");
+        return;
+    }
+
+    /*
+     * ARM64v8R provides PRENR_EL2 to have direct access to the
+     * PRLAR_EL2.EN bits of EL2 MPU regions from 0 to 31.
+     */
+    if ( index < MPU_PRENR_BITS )
+    {
+        uint64_t orig, after;
+
+        orig = READ_SYSREG(PRENR_EL2);
+        if ( enable )
+            /* Set respective bit */
+            after = orig | (1UL << index);
+        else
+            /* Clear respective bit */
+            after = orig & (~(1UL << index));
+        WRITE_SYSREG(after, PRENR_EL2);
+    }
+    else
+    {
+        region.prlar.reg.en = enable ? 1 : 0;
+        write_protection_region((const pr_t*)&region, index);
+    }
+    /* Ensure the write is visible before zeroing the entry */
+    isb();
+
+    /* Update the corresponding bitfield in xen_mpumap_mask */
+    spin_lock(&xen_mpumap_alloc_lock);
+
+    if ( enable )
+        set_bit(index, xen_mpumap_mask);
+    else
+    {
+        clear_bit(index, xen_mpumap_mask);
+        memset(&xen_mpumap[index], 0, sizeof(pr_t));
+    }
+
+    spin_unlock(&xen_mpumap_alloc_lock);
+}
+
 /*
  * Update an entry in Xen MPU memory region mapping table(xen_mpumap) at
  * the index @idx.
@@ -461,6 +514,27 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
 
         write_protection_region((const pr_t*)(&xen_mpumap[idx]), idx);
     }
+    else
+    {
+        /*
+         * Currently, we only support destroying a *WHOLE* MPU memory region;
+         * removing part of a region is not supported, as in the worst case
+         * it would leave two fragments behind. Part-region removal will be
+         * introduced only when an actual use case arises.
+         */
+        if ( rc == MPUMAP_REGION_INCLUSIVE )
+        {
+            region_printk("mpu: part-region removing is not supported\n");
+            return -EINVAL;
+        }
+
+        /* We are removing the region */
+        if ( rc != MPUMAP_REGION_FOUND )
+            return -EINVAL;
+
+        control_mpu_region_from_index(idx, false);
+    }
 
     return 0;
 }
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 35/52] xen/arm: map static memory on demand
Date: Mon, 26 Jun 2023 11:34:26 +0800
Message-Id: <20230626033443.2943270-36-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

In the function init_staticmem_pages, we need access to static memory for
proper initialization. This is not a problem on an MMU system, as Xen maps
the whole RAM in setup_mm(). However, with a limited number of MPU memory
regions, mapping the whole RAM is too extravagant. As a result, we follow
the rule of "map on demand": map static memory temporarily before its
initialization, and unmap it immediately afterwards.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new commit
---
 xen/arch/arm/include/asm/mm.h |  2 ++
 xen/arch/arm/mmu/mm.c         | 10 ++++++++++
 xen/arch/arm/mpu/mm.c         | 10 ++++++++++
 xen/arch/arm/setup.c          | 21 +++++++++++++++++++++
 4 files changed, 43 insertions(+)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 66d98b9a29..cffbf8a595 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -224,6 +224,8 @@ extern void mm_init_secondary_cpu(void);
 extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
 /* map a physical range in virtual memory */
 void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned int attributes);
+extern int map_staticmem_pages_to_xen(paddr_t start, paddr_t end);
+extern int unmap_staticmem_pages_to_xen(paddr_t start, paddr_t end);
 
 static inline void __iomem *ioremap_nocache(paddr_t start, size_t len)
 {
diff --git a/xen/arch/arm/mmu/mm.c b/xen/arch/arm/mmu/mm.c
index 2f29cb53fe..4196a55c32 100644
--- a/xen/arch/arm/mmu/mm.c
+++ b/xen/arch/arm/mmu/mm.c
@@ -1113,6 +1113,16 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
     return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
 }
 
+int __init map_staticmem_pages_to_xen(paddr_t start, paddr_t end)
+{
+    return 0;
+}
+
+int __init unmap_staticmem_pages_to_xen(paddr_t start, paddr_t end)
+{
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index a40055ae5e..9d5c1da39c 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -614,6 +614,16 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
            frametable_size - (nr_pdxs * sizeof(struct page_info)));
 }
 
+int __init map_staticmem_pages_to_xen(paddr_t start, paddr_t end)
+{
+    return xen_mpumap_update(start, end, PAGE_HYPERVISOR | _PAGE_TRANSIENT);
+}
+
+int __init unmap_staticmem_pages_to_xen(paddr_t start, paddr_t end)
+{
+    return xen_mpumap_update(start, end, 0);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index f42b53d17b..c21d1db763 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -637,12 +637,33 @@ void __init init_staticmem_pages(void)
         mfn_t bank_start = _mfn(PFN_UP(bootinfo.reserved_mem.bank[bank].start));
         unsigned long bank_pages = PFN_DOWN(bootinfo.reserved_mem.bank[bank].size);
         mfn_t bank_end = mfn_add(bank_start, bank_pages);
+        int res;
 
         if ( mfn_x(bank_end) <= mfn_x(bank_start) )
             return;
 
+        /* Map temporarily before initialization */
+        res = map_staticmem_pages_to_xen(mfn_to_maddr(bank_start),
+                                         mfn_to_maddr(bank_end));
+        if ( res )
+        {
+            printk(XENLOG_ERR "Failed to map static memory to Xen: %d\n",
+                   res);
+            return;
+        }
+
         unprepare_staticmem_pages(mfn_to_page(bank_start), bank_pages, false);
+
+        /* Unmap immediately after initialization */
+        res = unmap_staticmem_pages_to_xen(mfn_to_maddr(bank_start),
+                                           mfn_to_maddr(bank_end));
+        if ( res )
+        {
+            printk(XENLOG_ERR "Failed to unmap static memory from Xen: %d\n",
+                   res);
+            return;
+        }
     }
 }
 #endif
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 36/52] xen/mpu: implement ioremap_xxx in MPU
Date: Mon, 26 Jun 2023 11:34:27 +0800
Message-Id: <20230626033443.2943270-37-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

The ioremap_xxx functions are designed to map device memory, or to remap
part of memory temporarily for a short-lived special purpose, e.g. using
ioremap_wc to temporarily remap the guest kernel non-cacheable while
copying it to guest memory.

As virtual address translation is not supported in the MPU, and we always
follow the rule of "map on demand" there, we implement the MPU version of
ioremap_xxx by mapping the memory with a transient MPU memory region.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- adapt to the new rule of "map on demand"
---
 xen/arch/arm/include/asm/arm64/mpu.h |   4 +
 xen/arch/arm/include/asm/mm.h        |   6 +
 xen/arch/arm/mpu/mm.c                | 185 +++++++++++++++++++++++++++
 3 files changed, 195 insertions(+)

diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
index aee7947223..c5e69f239a 100644
--- a/xen/arch/arm/include/asm/arm64/mpu.h
+++ b/xen/arch/arm/include/asm/arm64/mpu.h
@@ -121,6 +121,10 @@ static inline bool region_is_valid(pr_t *pr)
     return pr->prlar.reg.en;
 }
 
+static inline bool region_is_transient(pr_t *pr)
+{
+    return pr->prlar.reg.tran;
+}
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ARM64_MPU_H__ */
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index cffbf8a595..0352182d99 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -227,6 +227,7 @@ void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned int attributes);
 extern int map_staticmem_pages_to_xen(paddr_t start, paddr_t end);
 extern int unmap_staticmem_pages_to_xen(paddr_t start, paddr_t end);
 
+#ifndef CONFIG_HAS_MPU
 static inline void __iomem *ioremap_nocache(paddr_t start, size_t len)
 {
     return ioremap_attr(start, len, PAGE_HYPERVISOR_NOCACHE);
@@ -241,6 +242,11 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 {
     return ioremap_attr(start, len, PAGE_HYPERVISOR_WC);
 }
+#else
+extern void __iomem *ioremap_nocache(paddr_t start, size_t len);
+extern void __iomem *ioremap_cache(paddr_t start, size_t len);
+extern void __iomem *ioremap_wc(paddr_t start, size_t len);
+#endif
 
 /* XXX -- account for base */
 #define mfn_valid(mfn) ({                                                 \
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 9d5c1da39c..3bb1a5c7c4 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -624,6 +624,191 @@ int __init unmap_staticmem_pages_to_xen(paddr_t start, paddr_t end)
     return xen_mpumap_update(start, end, 0);
 }
 
+/*
+ * Check whether the memory range [pa, pa + len) is mapped in the Xen MPU
+ * memory mapping table xen_mpumap.
+ *
+ * If it is mapped, the associated index will be returned.
+ * If it is not mapped, INVALID_REGION_IDX will be returned.
+ */
+static uint8_t is_mm_range_mapped(paddr_t pa, paddr_t len)
+{
+    int rc;
+    uint8_t idx;
+
+    rc = mpumap_contain_region(xen_mpumap, max_xen_mpumap, pa, pa + len - 1,
+                               &idx);
+    if ( (rc == MPUMAP_REGION_FOUND) || (rc == MPUMAP_REGION_INCLUSIVE) )
+        return idx;
+
+    if ( rc == MPUMAP_REGION_OVERLAP )
+        panic("mpu: can not deal with overlapped MPU memory region\n");
+    /* Not mapped */
+    return INVALID_REGION_IDX;
+}
+
+static bool is_mm_attr_match(pr_t *region, unsigned int attributes)
+{
+    if ( region->prbar.reg.ap != PAGE_AP_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING "region permission does not match (0x%x -> 0x%x)\n",
+               region->prbar.reg.ap, PAGE_AP_MASK(attributes));
+        return false;
+    }
+
+    if ( region->prbar.reg.xn != PAGE_XN_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING "region execution permission does not match (0x%x -> 0x%x)\n",
+               region->prbar.reg.xn, PAGE_XN_MASK(attributes));
+        return false;
+    }
+
+    if ( region->prlar.reg.ai != PAGE_AI_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING "region memory attributes do not match (0x%x -> 0x%x)\n",
+               region->prlar.reg.ai, PAGE_AI_MASK(attributes));
+        return false;
+    }
+
+    return true;
+}
+
+/*
+ * Check whether the memory range [pa, pa + len) is mapped with memory
+ * attributes #attr in the Xen MPU memory mapping table xen_mpumap.
+ *
+ * If it is mapped but with different memory attributes, -EINVAL will be
+ * returned.
+ * If it is not mapped at all, -ENOENT will be returned.
+ */
+static int is_mm_range_mapped_with_attr(paddr_t pa, paddr_t len,
+                                        unsigned int attr)
+{
+    uint8_t idx;
+
+    idx = is_mm_range_mapped(pa, len);
+    if ( idx != INVALID_REGION_IDX )
+    {
+        pr_t *region;
+
+        region = &xen_mpumap[idx];
+        if ( !is_mm_attr_match(region, attr) )
+            return -EINVAL;
+
+        return 0;
+    }
+
+    return -ENOENT;
+}
+
+/*
+ * map_mm_range shall work with unmap_mm_range to map a chunk
+ * of memory with a transient MPU memory region for a short period of time.
+ */
+static void *map_mm_range(paddr_t pa, size_t len, unsigned int attributes)
+{
+    if ( xen_mpumap_update(pa, pa + len, attributes | _PAGE_TRANSIENT) )
+        printk(XENLOG_ERR "Failed to map_mm_range 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+               pa, pa + len);
+
+    return maddr_to_virt(pa);
+}
+
+static void unmap_mm_range(paddr_t pa)
+{
+    uint8_t idx;
+
+    /*
+     * The mapping size in map_mm_range is at least PAGE_SIZE.
+     * Find the MPU memory region mapped through map_mm_range; the
+     * associated index will be returned.
+     */
+    idx = is_mm_range_mapped(pa, PAGE_SIZE);
+    if ( idx == INVALID_REGION_IDX )
+    {
+        printk(XENLOG_ERR "Failed to unmap_mm_range MPU memory region at 0x%"PRIpaddr"\n",
+               pa);
+        return;
+    }
+
+    if ( !region_is_transient(&xen_mpumap[idx]) )
+    {
+        printk(XENLOG_WARNING "Failed to unmap MPU memory region at 0x%"PRIpaddr", as it is not transient\n",
+               pa);
+        return;
+    }
+
+    /* Disable the MPU memory region and clear its entry in xen_mpumap */
+    control_mpu_region_from_index(idx, false);
+}
+
+/*
+ * It works with "iounmap" as a pair to temporarily map a chunk of memory
+ * with a transient MPU memory region, for short-time special accessing.
+ */
+void *ioremap_attr(paddr_t pa, size_t len, unsigned int attributes)
+{
+    return map_mm_range(round_pgdown(pa), round_pgup(len), attributes);
+}
+
+/* ioremap_nocache is normally used to map device memory */
+void __iomem *ioremap_nocache(paddr_t start, size_t len)
+{
+    int rc;
+
+    /* Check whether it is already mapped as device memory */
+    rc = is_mm_range_mapped_with_attr(start, len, PAGE_HYPERVISOR_NOCACHE);
+    if ( rc == -ENOENT )
+        return ioremap_attr(start, len, PAGE_HYPERVISOR_NOCACHE);
+    else if ( rc != 0 )
+        return NULL;
+
+    /* Already mapped */
+    return maddr_to_virt(start);
+}
+
+/*
+ * ioremap_cache, which works with iounmap as a pair, is normally used to
+ * map a chunk of cacheable memory temporarily for a short-time special
+ * purpose.
+ */
+void __iomem *ioremap_cache(paddr_t start, size_t len)
+{
+    int rc;
+
+    rc = is_mm_range_mapped_with_attr(start, len, PAGE_HYPERVISOR);
+    if ( rc == -ENOENT )
+        return ioremap_attr(start, len, PAGE_HYPERVISOR);
+    else if ( rc != 0 )
+        return NULL;
+
+    /* Already mapped */
+    return maddr_to_virt(start);
+}
+
+/*
+ * ioremap_wc, which works with iounmap as a pair, is normally used to
+ * map a chunk of non-cacheable memory temporarily for a short-time
+ * special purpose.
+ */
+void __iomem *ioremap_wc(paddr_t start, size_t len)
+{
+    int rc;
+
+    rc = is_mm_range_mapped_with_attr(start, len, PAGE_HYPERVISOR_WC);
+    if ( rc == -ENOENT )
+        return ioremap_attr(start, len, PAGE_HYPERVISOR_WC);
+    else if ( rc != 0 )
+        return NULL;
+
+    /* Already mapped */
+    return maddr_to_virt(start);
+}
+
+void iounmap(void __iomem *va)
+{
+    unmap_mm_range(virt_to_maddr(va));
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 37/52] xen/mpu: implement MPU version of copy_from_paddr
Date: Mon, 26 Jun 2023 11:34:28 +0800
Message-Id: <20230626033443.2943270-38-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>

When implementing the MPU version of copy_from_paddr, if the source
physical address is not accessible, we map it temporarily with a
transient MPU memory region for the duration of the copy.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new patch
---
 xen/arch/arm/include/asm/mpu/mm.h |  3 +++
 xen/arch/arm/mpu/mm.c             |  6 +++---
 xen/arch/arm/mpu/setup.c          | 32 +++++++++++++++++++++++++++++++
 3 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
index 98f6df65b8..452fe20c5f 100644
--- a/xen/arch/arm/include/asm/mpu/mm.h
+++ b/xen/arch/arm/include/asm/mpu/mm.h
@@ -7,6 +7,9 @@ extern unsigned long frametable_pdx_end;
 
 extern int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags);
 extern void setup_staticheap_mappings(void);
+extern uint8_t is_mm_range_mapped(paddr_t pa, paddr_t len);
+extern void *map_mm_range(paddr_t pa, size_t len, unsigned int attributes);
+extern void unmap_mm_range(paddr_t pa);
 
 #endif /* __ARCH_ARM_MM_MPU__ */
 
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 3bb1a5c7c4..21276d6de9 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -631,7 +631,7 @@ int __init unmap_staticmem_pages_to_xen(paddr_t start, paddr_t end)
  * If it is mapped, the associated index will be returned.
  * If it is not mapped, INVALID_REGION_IDX will be returned.
  */
-static uint8_t is_mm_range_mapped(paddr_t pa, paddr_t len)
+uint8_t is_mm_range_mapped(paddr_t pa, paddr_t len)
 {
     int rc;
     uint8_t idx;
@@ -705,7 +705,7 @@ static int is_mm_range_mapped_with_attr(paddr_t pa, paddr_t len,
  * map_mm_range shall work with unmap_mm_range to map a chunk
  * of memory with a transient MPU memory region for a period of short time.
  */
-static void *map_mm_range(paddr_t pa, size_t len, unsigned int attributes)
+void *map_mm_range(paddr_t pa, size_t len, unsigned int attributes)
 {
     if ( xen_mpumap_update(pa, pa + len, attributes | _PAGE_TRANSIENT) )
         printk(XENLOG_ERR "Failed to map_mm_range 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
@@ -714,7 +714,7 @@ static void *map_mm_range(paddr_t pa, size_t len, unsigned int attributes)
     return maddr_to_virt(pa);
 }
 
-static void unmap_mm_range(paddr_t pa)
+void unmap_mm_range(paddr_t pa)
 {
     uint8_t idx;
 
diff --git a/xen/arch/arm/mpu/setup.c b/xen/arch/arm/mpu/setup.c
index 31f412957c..9963975b4e 100644
--- a/xen/arch/arm/mpu/setup.c
+++ b/xen/arch/arm/mpu/setup.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -60,6 +61,37 @@ void __init setup_mm(void)
     init_staticmem_pages();
 }
 
+/*
+ * copy_from_paddr - copy data from a physical address
+ * @dst: destination virtual address
+ * @paddr: source physical address
+ * @len: length to copy
+ */
+void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len)
+{
+    void *src, *rc = NULL;
+    uint8_t idx;
+
+    idx = is_mm_range_mapped(round_pgdown(paddr), round_pgup(len));
+    if ( idx == INVALID_REGION_IDX )
+    {
+        /*
+         * If the source physical address is not accessible, map it
+         * temporarily for the duration of the copy.
+         */
+        rc = map_mm_range(round_pgdown(paddr), round_pgup(len),
+                          PAGE_HYPERVISOR_WC);
+        if ( !rc )
+            return;
+    }
+
+    src = maddr_to_virt(paddr);
+    memcpy(dst, src, len);
+
+    if ( rc )
+        unmap_mm_range(round_pgdown(paddr));
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 38/52] xen/mpu: map domain page in MPU system
Date: Mon, 26 Jun 2023 11:34:29 +0800
Message-Id: <20230626033443.2943270-39-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

In an MPU system, we implement map_domain_page()/unmap_domain_page() by
mapping the domain page with a transient MPU region on demand.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new patch
---
 xen/arch/arm/Makefile             |  4 ++
 xen/arch/arm/include/asm/mpu/mm.h |  1 +
 xen/arch/arm/mpu/domain_page.c    | 68 +++++++++++++++++++++++++++++++
 xen/arch/arm/mpu/mm.c             | 17 ++++++++
 4 files changed, 90 insertions(+)
 create mode 100644 xen/arch/arm/mpu/domain_page.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 5f6ee817ad..feb49640a0 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -17,7 +17,11 @@ obj-y += device.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += domain_build.init.o
+ifneq ($(CONFIG_HAS_MPU),y)
 obj-$(CONFIG_ARCH_MAP_DOMAIN_PAGE) += domain_page.o
+else
+obj-$(CONFIG_ARCH_MAP_DOMAIN_PAGE) += mpu/domain_page.o
+endif
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += efi/
diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
index 452fe20c5f..a83519ad13 100644
--- a/xen/arch/arm/include/asm/mpu/mm.h
+++ b/xen/arch/arm/include/asm/mpu/mm.h
@@ -10,6 +10,7 @@ extern void setup_staticheap_mappings(void);
 extern uint8_t is_mm_range_mapped(paddr_t pa, paddr_t len);
 extern void *map_mm_range(paddr_t pa, size_t len, unsigned int attributes);
 extern void unmap_mm_range(paddr_t pa);
+extern bool is_mm_range_mapped_transient(paddr_t pa, paddr_t len);
 
 #endif /* __ARCH_ARM_MM_MPU__ */
 
diff --git a/xen/arch/arm/mpu/domain_page.c b/xen/arch/arm/mpu/domain_page.c
new file mode 100644
index 0000000000..da408bb9e0
--- /dev/null
+++ b/xen/arch/arm/mpu/domain_page.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#include
+#include
+
+/* Override macros from asm/mm.h to make them work with mfn_t */
+#undef mfn_to_virt
+#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn))
+
+void *map_domain_page_global(mfn_t mfn)
+{
+    /* TODO: map shared domain page globally */
+    printk(XENLOG_ERR
+           "mpu: mapping shared domain page not SUPPORTED right now!\n");
+    return NULL;
+}
+
+void unmap_domain_page_global(const void *va)
+{
+    /* TODO: map shared domain page globally */
+    printk(XENLOG_ERR
+           "mpu: mapping shared domain page not SUPPORTED right now!\n");
+    return;
+}
+
+/* Map a page of domain memory */
+void *map_domain_page(mfn_t mfn)
+{
+    uint8_t idx;
+    paddr_t pa = mfn_to_maddr(mfn);
+
+    idx = is_mm_range_mapped(pa, PAGE_SIZE);
+    if ( idx != INVALID_REGION_IDX )
+        /* Already mapped */
+        return mfn_to_virt(mfn);
+    else
+        /*
+         * Map it temporarily with a transient MPU region.
+         * It is the caller's responsibility to unmap it
+         * through unmap_domain_page().
+         */
+        return map_mm_range(pa, PAGE_SIZE, PAGE_HYPERVISOR_RW);
+}
+
+/* Release a mapping taken with map_domain_page() */
+void unmap_domain_page(const void *va)
+{
+    paddr_t pa = (paddr_t)(unsigned long)(va);
+
+    /* Only unmap transient page */
+    if ( is_mm_range_mapped_transient(pa, PAGE_SIZE) )
+        unmap_mm_range(pa);
+}
+
+mfn_t domain_page_map_to_mfn(const void *ptr)
+{
+    printk(XENLOG_ERR
+           "mpu: domain_page_map_to_mfn() not SUPPORTED right now!\n");
+    return INVALID_MFN;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 21276d6de9..b2419f0603 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -809,6 +809,23 @@ void iounmap(void __iomem *va)
     unmap_mm_range(virt_to_maddr(va));
 }
 
+bool is_mm_range_mapped_transient(paddr_t pa, paddr_t len)
+{
+    uint8_t idx;
+
+    idx = is_mm_range_mapped(pa, len);
+    if ( idx != INVALID_REGION_IDX )
+    {
+        pr_t *region;
+
+        region = &xen_mpumap[idx];
+        if ( region_is_transient(region) )
+            return true;
+    }
+
+    return false;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 39/52] xen/mpu: support free_init_memory in MPU system
Date: Mon, 26 Jun 2023 11:34:30 +0800
Message-Id: <20230626033443.2943270-40-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

This commit refines free_init_memory() to support the MPU system.

We also support modify_xen_mappings() in the MPU system; it is responsible
for modifying the memory permission of an existing MPU memory region.
Currently, we only support modifying a *WHOLE* MPU memory region;
part-region modification is not supported, as in the worst case it would
leave three fragments behind.

In the MPU system, we map the init text and init data sections each with an
MPU memory region, so we destroy them separately in free_init_memory().

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- As MMU and MPU could share a lot of code, we made the changes in the
  original function free_init_memory() in mm.c
---
 xen/arch/arm/mm.c     | 14 +++++++++++++-
 xen/arch/arm/mpu/mm.c | 33 ++++++++++++++++++++++++++++++++-
 2 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 247d17cfa1..ba4ae74e18 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -177,7 +177,12 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
     ASSERT(IS_ALIGNED(s, PAGE_SIZE));
     ASSERT(IS_ALIGNED(e, PAGE_SIZE));
     ASSERT(s <= e);
+#ifndef CONFIG_HAS_MPU
     return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, flags);
+#else
+    return xen_mpumap_update(virt_to_maddr((void *)s),
+                             virt_to_maddr((void *)e), flags);
+#endif
 }
 
 /* Release all __init and __initdata ranges to be reused */
@@ -212,10 +217,17 @@ void free_init_memory(void)
     for ( i = 0; i < nr; i++ )
         *(p + i) = insn;
 
+    /* Remove init text section */
     rc = destroy_xen_mappings((unsigned long)__init_begin,
+                              (unsigned long)inittext_end);
+    if ( rc )
+        panic("Unable to remove the init text section (rc = %d)\n", rc);
+
+    /* Remove init data section */
+    rc = destroy_xen_mappings((unsigned long)inittext_end,
                               (unsigned long)__init_end);
     if ( rc )
-        panic("Unable to remove the init section (rc = %d)\n", rc);
+        panic("Unable to remove the init data section (rc = %d)\n", rc);
 
     if ( !xen_is_using_staticheap() )
     {
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index b2419f0603..79d1c10d05 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -496,8 +496,39 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
     if ( (rc < 0) || (rc == MPUMAP_REGION_OVERLAP) )
         return -EINVAL;
 
+    /* We are updating the permission. */
+    if ( (flags & _PAGE_PRESENT) && (rc == MPUMAP_REGION_FOUND ||
+                                     rc == MPUMAP_REGION_INCLUSIVE) )
+    {
+        /*
+         * Currently, we only support modifying a *WHOLE* MPU memory region;
+         * part-region modification is not supported, as in the worst case it
+         * will leave three fragments behind.
+         * Part-region modification will be introduced only when an actual
+         * usage comes up.
+         */
+        if ( rc == MPUMAP_REGION_INCLUSIVE )
+        {
+            region_printk("mpu: part-region modification is not supported\n");
+            return -EINVAL;
+        }
+
+        /* We don't allow changing memory attributes. */
+        if ( xen_mpumap[idx].prlar.reg.ai != PAGE_AI_MASK(flags) )
+        {
+            region_printk("Modifying memory attributes is not allowed (0x%x -> 0x%x).\n",
+                          xen_mpumap[idx].prlar.reg.ai, PAGE_AI_MASK(flags));
+            return -EINVAL;
+        }
+
+        /* Set new permission */
+        xen_mpumap[idx].prbar.reg.ap = PAGE_AP_MASK(flags);
+        xen_mpumap[idx].prbar.reg.xn = PAGE_XN_MASK(flags);
+
+        write_protection_region((const pr_t *)(&xen_mpumap[idx]), idx);
+    }
     /* We are inserting a mapping => Create new region.
      */
-    if ( flags & _PAGE_PRESENT )
+    else if ( flags & _PAGE_PRESENT )
     {
         if ( rc != MPUMAP_REGION_FAILED )
             return -EINVAL;
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 40/52] xen/mpu: implement remove_early_mappings in MPU system
Date: Mon, 26 Jun 2023 11:34:31 +0800
Message-Id: <20230626033443.2943270-41-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

We implement remove_early_mappings() to remove the early mappings of the
FDT in the MPU system.

When mapping the FDT in early_fdt_map(), we map the first 2MB, check the
size, and then map an extra 2MB if needed. The unmapping follows the same
strategy.

In the MMU system, we could use a fixed virtual address to remove the
mapping. As that is not workable for the MPU, we pass the FDT physical
address to remove_early_mappings() so the MPU code can destroy the mapping.
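The two-chunk decision described above can be distilled into a small predicate. This is only an illustrative sketch: `SZ_2M` and `PAGE_SIZE` are redefined locally, and `fdt_totalsize()` is replaced by a plain `size` parameter, so none of this is Xen code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

#define SZ_2M     ((paddr_t)2 << 20)
#define PAGE_SIZE ((paddr_t)1 << 12)

/*
 * Mirror of the unmapping decision in the MPU remove_early_mappings():
 * the FDT was mapped as one 2MB chunk, plus a second 2MB chunk when the
 * FDT's page offset plus its total size spills past the first chunk.
 */
static bool fdt_needs_second_chunk(paddr_t dtb_paddr, uint32_t size)
{
    paddr_t offset = dtb_paddr % PAGE_SIZE;

    return (offset + size) > SZ_2M;
}
```

Note the offset used is the offset within a 4KB page (matching the patch), so an FDT of exactly 2MB starting on a page boundary still fits in a single chunk.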
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- adapt to the change of mapping the FDT in two 2MB chunks if needed
---
 xen/arch/arm/include/asm/mm.h |  2 +-
 xen/arch/arm/mmu/mm.c         |  2 +-
 xen/arch/arm/mpu/mm.c         | 15 +++++++++++++++
 xen/arch/arm/setup.c          |  3 +--
 4 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 0352182d99..2b119a87da 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -209,7 +209,7 @@ extern bool flags_has_rwx(unsigned int flags);
 /* Map FDT in boot pagetable */
 extern void *early_fdt_map(paddr_t fdt_paddr);
 /* Remove early mappings */
-extern void remove_early_mappings(void);
+extern void remove_early_mappings(paddr_t dtb_paddr);
 /*
  * Allocate and initialise memory mapping for a secondary CPU.
  * Sets init_mm to the new memory mapping table
diff --git a/xen/arch/arm/mmu/mm.c b/xen/arch/arm/mmu/mm.c
index 4196a55c32..f37912d066 100644
--- a/xen/arch/arm/mmu/mm.c
+++ b/xen/arch/arm/mmu/mm.c
@@ -361,7 +361,7 @@ lpae_t pte_of_xenaddr(vaddr_t va)
     return mfn_to_xen_entry(maddr_to_mfn(ma), MT_NORMAL);
 }
 
-void __init remove_early_mappings(void)
+void __init remove_early_mappings(paddr_t dtb_paddr)
 {
     int rc;
 
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 79d1c10d05..27d924e449 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -20,9 +20,11 @@
  */
 
 #include
+#include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -857,6 +859,19 @@ bool is_mm_range_mapped_transient(paddr_t pa, paddr_t len)
     return false;
 }
 
+void __init remove_early_mappings(paddr_t dtb_paddr)
+{
+    paddr_t pa = dtb_paddr & PAGE_MASK, offset = dtb_paddr % PAGE_SIZE;
+    uint32_t size = fdt_totalsize(maddr_to_virt(dtb_paddr));
+
+    if ( xen_mpumap_update(pa, pa + SZ_2M, 0) )
+        panic("Unable to destroy early Device-Tree mapping.\n");
+
+    if ( (offset + size) > SZ_2M )
+        if ( xen_mpumap_update(pa + SZ_2M, pa + SZ_2M + SZ_2M, 0) )
+            panic("Unable to destroy early Device-Tree mapping.\n");
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index c21d1db763..200fa6eb53 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -529,8 +529,6 @@ void __init discard_initial_modules(void)
 
         mi->nr_mods = 0;
     }
-
-    remove_early_mappings();
 }
 
 /* Relocate the FDT in Xen heap */
@@ -973,6 +971,7 @@ void __init start_xen(unsigned long boot_phys_offset,
      * will be scrubbed (unless suppressed).
      */
     discard_initial_modules();
+    remove_early_mappings(fdt_paddr);
 
     heap_init_late();
 
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 41/52] xen/mpu: Use secure hypervisor timer in MPU system
Date: Mon, 26 Jun 2023 11:34:32 +0800
Message-Id: <20230626033443.2943270-42-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

As the MPU system only has the Secure state, we have to use the secure EL2
hypervisor timer for Xen in secure EL2. In this patch, we introduce a new
Kconfig option ARM_SECURE_STATE and a set of secure hypervisor timer
registers, CNTHPS_*_EL2.
We alias CNTHP_*_EL2 to CNTHPS_*_EL2 to keep the timer code flow unchanged.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- alias CNTHP_*_EL2 to CNTHPS_*_EL2 to avoid renaming
---
 xen/arch/arm/Kconfig                     |  4 ++++
 xen/arch/arm/include/asm/arm64/sysregs.h | 15 +++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index b2710c1c31..3f67aacbbf 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -92,6 +92,10 @@ config ARM_EFI
 	  UEFI firmware. A UEFI stub is provided to allow Xen to be booted
 	  as an EFI application.
 
+config ARM_SECURE_STATE
+	bool "Xen will run in Arm Secure State"
+	default n
+
 config GICV3
 	bool "GICv3 driver"
 	depends on !NEW_VGIC
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 96c025053b..ab0e6a97d3 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -514,6 +514,21 @@
 
 #endif
 
+#ifdef CONFIG_ARM_SECURE_STATE
+/*
+ * The Armv8-R AArch64 architecture always executes code in Secure
+ * state with EL2 as the highest Exception level.
+ *
+ * Hypervisor timer registers for Secure EL2.
+ */
+#define CNTHPS_TVAL_EL2 S3_4_C14_C5_0
+#define CNTHPS_CTL_EL2  S3_4_C14_C5_1
+#define CNTHPS_CVAL_EL2 S3_4_C14_C5_2
+#define CNTHP_TVAL_EL2  CNTHPS_TVAL_EL2
+#define CNTHP_CTL_EL2   CNTHPS_CTL_EL2
+#define CNTHP_CVAL_EL2  CNTHPS_CVAL_EL2
+#endif
+
 /* Access to system registers */
 
 #define WRITE_SYSREG64(v, name) do {                    \
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 42/52] xen/mpu: implement setup_virt_paging for MPU system
Date: Mon, 26 Jun 2023 11:34:33 +0800
Message-Id: <20230626033443.2943270-43-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

In the MMU system, setup_virt_paging() is used to configure the stage 2
address translation regime: IPA bits, VMID allocator set-up, etc. Some of
this, like the VMID allocator set-up, can be inherited by the MPU system.

An MPU system can have the following memory translation regimes:
- PMSAv8-64 at both EL1/EL0 and EL2
- VMSAv8-64 at EL1/EL0 and PMSAv8-64 at EL2

The default option is the second, unless the platform cannot support it,
which is checked against the MSA_frac field in the Memory Model Feature
Register 0 (ID_AA64MMFR0_EL1).
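The MSA/MSA_frac check described above can be distilled into a small predicate. The `MM64_*` values match the ones this patch adds to processor.h; the function name and the out-parameter are illustrative assumptions, not Xen code.

```c
#include <assert.h>
#include <stdbool.h>

/* ID_AA64MMFR0_EL1 field values, as defined by the patch. */
#define MM64_MSA_PMSA_SUPPORT      0xf
#define MM64_MSA_FRAC_NONE_SUPPORT 0x0
#define MM64_MSA_FRAC_PMSA_SUPPORT 0x1
#define MM64_MSA_FRAC_VMSA_SUPPORT 0x2

/*
 * Returns true when stage 1 of the Secure EL1&0 regime may use VMSAv8-64.
 * *supported is cleared on the configurations that setup_virt_paging()
 * rejects via its "goto fault" paths.
 */
static bool stage1_can_use_vmsa(unsigned int msa, unsigned int msa_frac,
                                bool *supported)
{
    *supported = false;

    /* Armv8-R AArch64 only permits MSA == 0b1111. */
    if ( msa != MM64_MSA_PMSA_SUPPORT )
        return false;

    /* MSA_frac == 0b0000 means no supported stage 1 configuration. */
    if ( msa_frac == MM64_MSA_FRAC_NONE_SUPPORT )
        return false;

    *supported = true;

    /* Only MSA_frac == 0b0010 allows VMSAv8-64 at stage 1. */
    return msa_frac == MM64_MSA_FRAC_VMSA_SUPPORT;
}
```

This mirrors the control flow of the patch: a platform with MSA_frac of 0b0001 is still usable, but only with PMSAv8-64 at EL1, which is why the patch emits a warning rather than faulting in that case.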
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- no change
---
 xen/arch/arm/Makefile                 |  1 +
 xen/arch/arm/include/asm/cpufeature.h |  7 ++
 xen/arch/arm/include/asm/p2m.h        |  8 +++
 xen/arch/arm/include/asm/processor.h  | 13 ++++
 xen/arch/arm/mpu/p2m.c                | 92 +++++++++++++++++++++++++++
 5 files changed, 121 insertions(+)
 create mode 100644 xen/arch/arm/mpu/p2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index feb49640a0..9f4b11b069 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -47,6 +47,7 @@ obj-y += mmu/p2m.o
 else
 obj-y += mpu/mm.o
 obj-y += mpu/setup.o
+obj-y += mpu/p2m.o
 endif
 obj-y += mm.o
 obj-y += monitor.o
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index 894f278a4a..cbaf41881b 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -250,6 +250,12 @@ struct cpuinfo_arm {
         unsigned long tgranule_16K:4;
         unsigned long tgranule_64K:4;
         unsigned long tgranule_4K:4;
+#ifdef CONFIG_HAS_MPU
+        unsigned long __res:16;
+        unsigned long msa:4;
+        unsigned long msa_frac:4;
+        unsigned long __res0:8;
+#else
         unsigned long tgranule_16k_2:4;
         unsigned long tgranule_64k_2:4;
         unsigned long tgranule_4k_2:4;
@@ -257,6 +263,7 @@ struct cpuinfo_arm {
         unsigned long __res0:8;
         unsigned long fgt:4;
         unsigned long ecv:4;
+#endif
 
         /* MMFR1 */
         unsigned long hafdbs:4;
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index f62d632830..d9c91d4a98 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -16,8 +16,16 @@ extern unsigned int p2m_ipa_bits;
 
 extern unsigned int p2m_root_order;
 extern unsigned int p2m_root_level;
+#ifdef CONFIG_HAS_MPU
+/*
+ * A 4KB page is enough for stage 2 translation in an MPU system; it can
+ * store at most 255 EL2 MPU memory regions.
+ */ +#define P2M_ROOT_ORDER 0 +#else #define P2M_ROOT_ORDER p2m_root_order #define P2M_ROOT_LEVEL p2m_root_level +#endif =20 #define MAX_VMID_8_BIT (1UL << 8) #define MAX_VMID_16_BIT (1UL << 16) diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/as= m/processor.h index 685f9b18fd..fe761ce50f 100644 --- a/xen/arch/arm/include/asm/processor.h +++ b/xen/arch/arm/include/asm/processor.h @@ -389,6 +389,12 @@ =20 #define VTCR_RES1 (_AC(1,UL)<<31) =20 +#ifdef CONFIG_HAS_MPU +#define VTCR_MSA_VMSA (_AC(0x1,UL)<<31) +#define VTCR_MSA_PMSA ~(_AC(0x1,UL)<<31) +#define NSA_SEL2 ~(_AC(0x1,UL)<<30) +#endif + /* HCPTR Hyp. Coprocessor Trap Register */ #define HCPTR_TAM ((_AC(1,U)<<30)) #define HCPTR_TTA ((_AC(1,U)<<20)) /* Trap trace registers */ @@ -449,6 +455,13 @@ #define MM64_VMID_16_BITS_SUPPORT 0x2 #endif =20 +#ifdef CONFIG_HAS_MPU +#define MM64_MSA_PMSA_SUPPORT 0xf +#define MM64_MSA_FRAC_NONE_SUPPORT 0x0 +#define MM64_MSA_FRAC_PMSA_SUPPORT 0x1 +#define MM64_MSA_FRAC_VMSA_SUPPORT 0x2 +#endif + #ifndef __ASSEMBLY__ =20 extern register_t __cpu_logical_map[]; diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c new file mode 100644 index 0000000000..04c44825cb --- /dev/null +++ b/xen/arch/arm/mpu/p2m.c @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#include +#include +#include +#include + +#include +#include +#include + +void __init setup_virt_paging(void) +{ + uint64_t val =3D 0; + bool p2m_vmsa =3D true; + + /* PA size */ + const unsigned int pa_range_info[] =3D { 32, 36, 40, 42, 44, 48, 52, 0= , /* Invalid */ }; + + /* + * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured + * with IPA bits =3D=3D PA bits, compare against "pabits". + */ + if ( pa_range_info[system_cpuinfo.mm64.pa_range] < p2m_ipa_bits ) + p2m_ipa_bits =3D pa_range_info[system_cpuinfo.mm64.pa_range]; + + /* In ARMV8R, hypervisor in secure EL2. 
*/ + val &=3D NSA_SEL2; + + /* + * The MSA and MSA_frac fields in the ID_AA64MMFR0_EL1 register + * identify the memory system configurations supported at EL1. + * In Armv8-R AArch64, the only permitted value for ID_AA64MMFR0_EL1.M= SA is + * 0b1111. When ID_AA64MMFR0_EL1.MSA_frac is 0b0010, the stage 1 of the + * Secure EL1&0 translation regime can enable PMSAv8-64 or VMSAv8-64 + * architecture. + */ + if ( system_cpuinfo.mm64.msa =3D=3D MM64_MSA_PMSA_SUPPORT ) + { + if ( system_cpuinfo.mm64.msa_frac =3D=3D MM64_MSA_FRAC_NONE_SUPPOR= T ) + goto fault; + + if ( system_cpuinfo.mm64.msa_frac !=3D MM64_MSA_FRAC_VMSA_SUPPORT ) + { + p2m_vmsa =3D false; + warning_add("Be aware of that there is no support for VMSAv8-6= 4 at EL1 on this platform.\n"); + } + } + else + goto fault; + + /* + * If PE supports both PMSAv8-64 and VMSAv8-64 at EL1, then VTCR_EL2.M= SA + * determines the memory system architecture enabled at stage 1 of the + * Secure EL1&0 translation regime. + * + * Normally, we set the initial VTCR_EL2.MSA value VMSAv8-64 support, + * unless this platform only supports PMSAv8-64. + */ + if ( !p2m_vmsa ) + val &=3D VTCR_MSA_PMSA; + else + val |=3D VTCR_MSA_VMSA; + + /* + * cpuinfo sanitization makes sure we support 16bits VMID only if + * all cores are supporting it. + */ + if ( system_cpuinfo.mm64.vmid_bits =3D=3D MM64_VMID_16_BITS_SUPPORT ) + max_vmid =3D MAX_VMID_16_BIT; + + /* Set the VS bit only if 16 bit VMID is supported. 
*/ + if ( MAX_VMID =3D=3D MAX_VMID_16_BIT ) + val |=3D VTCR_VS; + + p2m_vmid_allocator_init(); + + WRITE_SYSREG(val, VTCR_EL2); + + return; + +fault: + panic("Hardware with no PMSAv8-64 support in any translation regime.\n= "); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750857725249.89555471223935; Sun, 25 Jun 2023 20:40:57 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555074.866781 (Exim 4.92) (envelope-from ) id 1qDd5U-0007af-Bl; Mon, 26 Jun 2023 03:40:24 +0000 Received: by outflank-mailman (output) from mailman id 555074.866781; Mon, 26 Jun 2023 03:40:24 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5T-0007Wt-LQ; Mon, 26 Jun 2023 03:40:23 +0000 Received: by outflank-mailman (input) for mailman id 555074; Mon, 26 Jun 2023 03:40:21 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd2Z-0007ej-Ve for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:37:23 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 
c2f443a7-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:37:23 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C57022F4; Sun, 25 Jun 2023 20:38:06 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 29A203F64C; Sun, 25 Jun 2023 20:37:19 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c2f443a7-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 43/52] xen/mpu: configure VSTCR_EL2 in MPU system Date: Mon, 26 Jun 2023 11:34:34 +0800 Message-Id: <20230626033443.2943270-44-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750858162100003 VSTCR_EL2, the Virtualization Secure Translation Control Register, is the control register for stage 2 of the Secure EL1&0 translation regime. VSTCR_EL2.SA selects the output address space of secure stage 2 translations. To make sure that all stage 2 translations for the Secure PA space access the Secure PA space, we keep the SA bit 0. VSTCR_EL2.SC is the NS check enable bit. To make sure that the stage 2 NS configuration is checked against the stage 1 NS configuration in the EL1&0 translation regime for the given address, and a fault is generated if they differ, we set the SC bit to 1. 
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new commit --- xen/arch/arm/include/asm/arm64/sysregs.h | 6 ++++++ xen/arch/arm/mpu/p2m.c | 17 ++++++++++++++++- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/includ= e/asm/arm64/sysregs.h index ab0e6a97d3..35d7da411d 100644 --- a/xen/arch/arm/include/asm/arm64/sysregs.h +++ b/xen/arch/arm/include/asm/arm64/sysregs.h @@ -512,6 +512,12 @@ /* MPU Protection Region Enable Register encode */ #define PRENR_EL2 S3_4_C6_C1_1 =20 +/* Virtualization Secure Translation Control Register */ +#define VSTCR_EL2 S3_4_C2_C6_2 +#define VSTCR_EL2_RES1_SHIFT 31 +#define VSTCR_EL2_SA ~(_AC(0x1,UL)<<30) +#define VSTCR_EL2_SC (_AC(0x1,UL)<<20) + #endif =20 #ifdef CONFIG_ARM_SECURE_STATE diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c index 04c44825cb..a7a3912a9a 100644 --- a/xen/arch/arm/mpu/p2m.c +++ b/xen/arch/arm/mpu/p2m.c @@ -10,7 +10,7 @@ =20 void __init setup_virt_paging(void) { - uint64_t val =3D 0; + uint64_t val =3D 0, val2 =3D 0; bool p2m_vmsa =3D true; =20 /* PA size */ @@ -76,6 +76,21 @@ void __init setup_virt_paging(void) =20 WRITE_SYSREG(val, VTCR_EL2); =20 + /* + * VSTCR_EL2.SA defines secure stage 2 translation output address spac= e. + * To make sure that all stage 2 translations for the Secure PA space + * access the Secure PA space, we keep SA bit as 0. + * + * VSTCR_EL2.SC is NS check enable bit. + * To make sure that Stage 2 NS configuration is checked against stage= 1 + * NS configuration in EL1&0 translation regime for the given address,= and + * generates a fault if they are different, we set SC bit 1. 
+ */ + val2 =3D 1 << VSTCR_EL2_RES1_SHIFT; + val2 &=3D VSTCR_EL2_SA; + val2 |=3D VSTCR_EL2_SC; + WRITE_SYSREG(val2, VSTCR_EL2); + return; =20 fault: --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750819275930.3593387552783; Sun, 25 Jun 2023 20:40:19 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555025.866627 (Exim 4.92) (envelope-from ) id 1qDd4y-0007oJ-U2; Mon, 26 Jun 2023 03:39:52 +0000 Received: by outflank-mailman (output) from mailman id 555025.866627; Mon, 26 Jun 2023 03:39:52 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd4x-0007j3-Ut; Mon, 26 Jun 2023 03:39:51 +0000 Received: by outflank-mailman (input) for mailman id 555025; Mon, 26 Jun 2023 03:39:50 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd2d-0007ej-7C for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:37:27 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id c4cd8afc-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:37:26 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 
DE4681FB; Sun, 25 Jun 2023 20:38:09 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 425AF3F64C; Sun, 25 Jun 2023 20:37:23 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c4cd8afc-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 44/52] xen/mpu: P2M initialization in MPU system Date: Mon, 26 Jun 2023 11:34:35 +0800 Message-Id: <20230626033443.2943270-45-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750820040100001 Content-Type: text/plain; charset="utf-8" We inherit p2m_init() to do P2M initialization in the MPU system, including VMID assignment, setting up the P2M MPU region mapping table, etc. p2m_alloc_table() is responsible for allocating the per-domain P2M MPU memory region mapping table. As an MPU memory region structure (pr_t) takes 16 bytes, the mapping table occupies less than 4KB even with the maximum of 255 supported MPU memory regions. VSCTLR_EL2, the Virtualization System Control Register, provides configuration information for VMSAv8-64 and PMSAv8-64 virtualization using stage 2 of the EL1&0 translation regime; its bits [63:48] hold the VMID for the EL1 guest OS. 
Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new commit --- xen/arch/arm/include/asm/mpu/mm.h | 3 ++ xen/arch/arm/include/asm/p2m.h | 5 +++ xen/arch/arm/mpu/mm.c | 22 ++++++++++ xen/arch/arm/mpu/p2m.c | 69 +++++++++++++++++++++++++++++++ 4 files changed, 99 insertions(+) diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/m= pu/mm.h index a83519ad13..4df69245c6 100644 --- a/xen/arch/arm/include/asm/mpu/mm.h +++ b/xen/arch/arm/include/asm/mpu/mm.h @@ -2,6 +2,8 @@ #ifndef __ARCH_ARM_MM_MPU__ #define __ARCH_ARM_MM_MPU__ =20 +#include + extern struct page_info *frame_table; extern unsigned long frametable_pdx_end; =20 @@ -11,6 +13,7 @@ extern uint8_t is_mm_range_mapped(paddr_t pa, paddr_t len= ); extern void *map_mm_range(paddr_t pa, size_t len, unsigned int attributes); extern void unmap_mm_range(paddr_t pa); extern bool is_mm_range_mapped_transient(paddr_t pa, paddr_t len); +extern pr_t *alloc_mpumap(void); =20 #endif /* __ARCH_ARM_MM_MPU__ */ =20 diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h index d9c91d4a98..c3598d514e 100644 --- a/xen/arch/arm/include/asm/p2m.h +++ b/xen/arch/arm/include/asm/p2m.h @@ -61,8 +61,13 @@ struct p2m_domain { /* Current VMID in use */ uint16_t vmid; =20 +#ifndef CONFIG_HAS_MPU /* Current Translation Table Base Register for the p2m */ uint64_t vttbr; +#else + /* Current Virtualization System Control Register for the p2m */ + uint64_t vsctlr; +#endif =20 /* Highest guest frame that's ever been mapped in the p2m */ gfn_t max_mapped_gfn; diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c index 27d924e449..de5da96b80 100644 --- a/xen/arch/arm/mpu/mm.c +++ b/xen/arch/arm/mpu/mm.c @@ -872,6 +872,28 @@ void __init remove_early_mappings(paddr_t dtb_paddr) panic("Unable to destroy early Device-Tree mapping.\n"); } =20 +/* + * Standard entry to dynamically allocate MPU memory region mapping table. 
+ * A 4KB page is enough for holding the maximum supported MPU memory + * regions. + */ +pr_t *alloc_mpumap(void) +{ + pr_t *map; + + /* + * A MPU memory region structure(pr_t) takes 16 bytes, even with maxim= um + * supported MPU memory regions, 255, MPU memory mapping table at most + * takes up less than 4KB. + */ + map =3D alloc_xenheap_pages(0, 0); + if ( map =3D=3D NULL ) + return NULL; + + clear_page(map); + return map; +} + /* * Local variables: * mode: C diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c index a7a3912a9a..8f728f8957 100644 --- a/xen/arch/arm/mpu/p2m.c +++ b/xen/arch/arm/mpu/p2m.c @@ -4,6 +4,7 @@ #include #include =20 +#include #include #include #include @@ -97,6 +98,74 @@ fault: panic("Hardware with no PMSAv8-64 support in any translation regime.\n= "); } =20 +static uint64_t __init generate_vsctlr(uint16_t vmid) +{ + return ((uint64_t)vmid << 48); +} + +static int __init p2m_alloc_table(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + pr_t* p2m_map; + + p2m_map =3D alloc_mpumap(); + if ( !p2m_map ) + { + printk(XENLOG_G_ERR "DOM%pd: p2m: unable to allocate P2M MPU mappi= ng table\n", d); + return -ENOMEM; + } + + p2m->root =3D virt_to_page((const void *)p2m_map); + + return 0; +} + +int p2m_init(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + int rc =3D 0; + unsigned int cpu; + + rwlock_init(&p2m->lock); + spin_lock_init(&d->arch.paging.lock); + + p2m->vmid =3D INVALID_VMID; + p2m->max_mapped_gfn =3D _gfn(0); + p2m->lowest_mapped_gfn =3D _gfn(ULONG_MAX); + + p2m->default_access =3D p2m_access_rwx; + /* mem_access is NOT supported in MPU system. */ + p2m->mem_access_enabled =3D false; + + /* + * Make sure that the type chosen to is able to store an vCPU ID + * between 0 and the maximum of virtual CPUS supported as long as + * the INVALID_VCPU_ID. 
+ */ + BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPU= S); + BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < INVALID_VCPU= _ID); + + for_each_possible_cpu(cpu) + p2m->last_vcpu_ran[cpu] =3D INVALID_VCPU_ID; + + /* + * "Trivial" initialisation is now complete. Set the backpointer so + * p2m_teardown() and friends know to do something. + */ + p2m->domain =3D d; + + rc =3D p2m_alloc_vmid(d); + if ( rc ) + return rc; + p2m->vsctlr =3D generate_vsctlr(p2m->vmid); + + rc =3D p2m_alloc_table(d); + if ( rc ) + return rc; + + return rc; +} + /* * Local variables: * mode: C --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750867232613.8246146592734; Sun, 25 Jun 2023 20:41:07 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.555084.866832 (Exim 4.92) (envelope-from ) id 1qDd5g-0001vI-8P; Mon, 26 Jun 2023 03:40:36 +0000 Received: by outflank-mailman (output) from mailman id 555084.866832; Mon, 26 Jun 2023 03:40:36 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd5f-0001mQ-3h; Mon, 26 Jun 2023 03:40:35 +0000 Received: by outflank-mailman (input) for mailman id 555084; Mon, 26 Jun 2023 03:40:32 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by 
lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd2h-0000HH-Kl for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:37:31 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id c6b0abc6-13d2-11ee-8611-37d641c3527e; Mon, 26 Jun 2023 05:37:29 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 034751FB; Sun, 25 Jun 2023 20:38:13 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 5B0603F64C; Sun, 25 Jun 2023 20:37:26 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c6b0abc6-13d2-11ee-8611-37d641c3527e From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 45/52] xen/mpu: insert an new entry into guest physmap in MPU system Date: Mon, 26 Jun 2023 11:34:36 +0800 Message-Id: <20230626033443.2943270-46-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750867670100007 Content-Type: text/plain; charset="utf-8" Function p2m_set_entry/__p2m_set_entry is responsible for inserting an entry in the p2m. 
In MPU system, it includes the following steps: - checking whether mapping already exists(sgfn -> mfn) - constituting a new P2M MPU memory region structure(pr_t) through standard entry region_to_p2m_entry() - insert the new entry into domain P2M table(p2m->root) Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new commit --- xen/arch/arm/include/asm/arm64/mpu.h | 3 +- xen/arch/arm/include/asm/mpu/mm.h | 6 + xen/arch/arm/include/asm/p2m.h | 3 + xen/arch/arm/mpu/mm.c | 4 +- xen/arch/arm/mpu/p2m.c | 172 +++++++++++++++++++++++++++ 5 files changed, 185 insertions(+), 3 deletions(-) diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/as= m/arm64/mpu.h index c5e69f239a..444ca716b8 100644 --- a/xen/arch/arm/include/asm/arm64/mpu.h +++ b/xen/arch/arm/include/asm/arm64/mpu.h @@ -61,7 +61,8 @@ typedef union { unsigned long ap:2; /* Acess Permission */ unsigned long sh:2; /* Sharebility */ unsigned long base:42; /* Base Address */ - unsigned long pad:16; + unsigned long pad:12; + unsigned long p2m_type:4; /* Ignore by hardware. 
Used to store p2m= types.*/ } reg; uint64_t bits; } prbar_t; diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/m= pu/mm.h index 4df69245c6..0abb0a6c92 100644 --- a/xen/arch/arm/include/asm/mpu/mm.h +++ b/xen/arch/arm/include/asm/mpu/mm.h @@ -14,6 +14,12 @@ extern void *map_mm_range(paddr_t pa, size_t len, unsign= ed int attributes); extern void unmap_mm_range(paddr_t pa); extern bool is_mm_range_mapped_transient(paddr_t pa, paddr_t len); extern pr_t *alloc_mpumap(void); +#define MPUMAP_REGION_FAILED 0 +#define MPUMAP_REGION_FOUND 1 +#define MPUMAP_REGION_INCLUSIVE 2 +#define MPUMAP_REGION_OVERLAP 3 +extern int mpumap_contain_region(pr_t *table, uint8_t nr_regions, + paddr_t base, paddr_t limit, uint8_t *ind= ex); =20 #endif /* __ARCH_ARM_MM_MPU__ */ =20 diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h index c3598d514e..68837b6df7 100644 --- a/xen/arch/arm/include/asm/p2m.h +++ b/xen/arch/arm/include/asm/p2m.h @@ -67,6 +67,9 @@ struct p2m_domain { #else /* Current Virtualization System Control Register for the p2m */ uint64_t vsctlr; + + /* Number of MPU memory regions in P2M MPU memory mapping table. 
*/ + uint8_t nr_regions; #endif =20 /* Highest guest frame that's ever been mapped in the p2m */ diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c index de5da96b80..8cdb7d7219 100644 --- a/xen/arch/arm/mpu/mm.c +++ b/xen/arch/arm/mpu/mm.c @@ -378,8 +378,8 @@ out: * MPUMAP_REGION_INCLUSIVE: find an inclusive match in #table * MPUMAP_REGION_OVERLAP: overlap with the existing mapping */ -static int mpumap_contain_region(pr_t *table, uint8_t nr_regions, - paddr_t base, paddr_t limit, uint8_t *ind= ex) +int mpumap_contain_region(pr_t *table, uint8_t nr_regions, + paddr_t base, paddr_t limit, uint8_t *index) { uint8_t i =3D 0, _index =3D INVALID_REGION_IDX; =20 diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c index 8f728f8957..4838d5b625 100644 --- a/xen/arch/arm/mpu/p2m.c +++ b/xen/arch/arm/mpu/p2m.c @@ -166,6 +166,178 @@ int p2m_init(struct domain *d) return rc; } =20 +static void p2m_set_permission(pr_t *region, p2m_type_t t) +{ + switch ( t ) + { + case p2m_ram_rw: + region->prbar.reg.xn =3D XN_DISABLED; + region->prbar.reg.ap =3D AP_RW_ALL; + break; + + case p2m_ram_ro: + region->prbar.reg.xn =3D XN_DISABLED; + region->prbar.reg.ap =3D AP_RO_ALL; + break; + + case p2m_invalid: + region->prbar.reg.xn =3D XN_P2M_ENABLED; + region->prbar.reg.ap =3D AP_RO_ALL; + break; + + case p2m_max_real_type: + BUG(); + break; + + case p2m_mmio_direct_dev: + case p2m_mmio_direct_nc: + case p2m_mmio_direct_c: + case p2m_iommu_map_ro: + case p2m_iommu_map_rw: + case p2m_map_foreign_ro: + case p2m_map_foreign_rw: + case p2m_grant_map_ro: + case p2m_grant_map_rw: + panic(XENLOG_G_ERR "p2m: UNIMPLEMENTED p2m permission in MPU syste= m\n"); + break; + } +} + +static inline pr_t region_to_p2m_entry(mfn_t smfn, unsigned long nr_mfn, + p2m_type_t t) +{ + prbar_t prbar; + prlar_t prlar; + pr_t region; + + prbar =3D (prbar_t) { + .reg =3D { + .p2m_type =3D t, /* P2M Type */ + }}; + + prlar =3D (prlar_t) { + .reg =3D { + .ns =3D 0, /* Hyp mode is in secure world */ + .en 
=3D 1, /* Region enabled */ + }}; + + BUILD_BUG_ON(p2m_max_real_type > (1 << 4)); + + switch ( t ) + { + case p2m_invalid: + case p2m_ram_rw: + case p2m_ram_ro: + case p2m_max_real_type: + prbar.reg.sh =3D LPAE_SH_INNER; + prlar.reg.ai =3D MT_NORMAL; + break; + + default: + panic(XENLOG_G_ERR "p2m: UNIMPLEMENTED p2m type in MPU system\n"); + break; + } + + region =3D (pr_t) { + .prbar =3D prbar, + .prlar =3D prlar, + }; + + /* + * xn and ap bit will be defined in the p2m_set_permission + * based on t. + */ + p2m_set_permission(®ion, t); + + /* Set base address and limit address */ + pr_set_base(®ion, mfn_to_maddr(smfn)); + pr_set_limit(®ion, (mfn_to_maddr(mfn_add(smfn, nr_mfn)) - 1)); + + return region; +} + +/* + * Check whether guest memory [sgfn, sgfn + nr_gfns) is mapped. + * + * If it is mapped, the index of associated MPU memory region will be fill= ed + * up, and 0 is returned. + * If it is not mapped, -ENOENT errno will be returned. + */ +static int is_gfns_mapped(struct p2m_domain *p2m, gfn_t sgfn, + unsigned long nr_gfns, uint8_t *idx) +{ + paddr_t gbase =3D gfn_to_gaddr(sgfn), + glimit =3D gfn_to_gaddr(gfn_add(sgfn, nr_gfns)) - 1; + int rc; + pr_t *table; + + table =3D (pr_t *)page_to_virt(p2m->root); + if ( !table ) + return -EEXIST; + + rc =3D mpumap_contain_region(table, p2m->nr_regions, gbase, glimit, id= x); + if ( (rc =3D=3D MPUMAP_REGION_FOUND) || (rc =3D=3D MPUMAP_REGION_INCLU= SIVE) ) + return 0; + else if ( rc =3D=3D MPUMAP_REGION_FAILED ) + return -ENOENT; + + /* Partially mapped */ + return -EINVAL; +} + +int __p2m_set_entry(struct p2m_domain *p2m, gfn_t sgfn, unsigned int nr, + mfn_t smfn, p2m_type_t t, p2m_access_t a) +{ + pr_t *table; + mfn_t emfn =3D mfn_add(smfn, nr); + uint8_t idx =3D INVALID_REGION_IDX; + + /* + * Other than removing mapping (i.e MFN_INVALID), + * gfn =3D=3D mfn in MPU system. 
+ */ + if ( !mfn_eq(smfn, INVALID_MFN) ) + if ( gfn_x(sgfn) !=3D mfn_x(smfn) ) + { + printk(XENLOG_G_ERR "Unable to map MFN %#"PRI_mfn" at %#"PRI_m= fn"\n", + mfn_x(smfn), gfn_x(sgfn)); + return -EINVAL; + } + + if ( is_gfns_mapped(p2m, sgfn, nr, &idx) !=3D -ENOENT ) + { + printk(XENLOG_G_ERR "p2m: unable to insert P2M MPU memory region 0= x%"PRIpaddr"-0x%"PRIpaddr"\n", + gfn_to_gaddr(sgfn), gfn_to_gaddr(gfn_add(sgfn, nr))); + return -EINVAL; + } + + table =3D (pr_t *)page_to_virt(p2m->root); + if ( !table ) + return -EEXIST; + table[p2m->nr_regions] =3D region_to_p2m_entry(smfn, nr, t); + p2m->nr_regions++; + + p2m->max_mapped_gfn =3D gfn_max(p2m->max_mapped_gfn, _gfn(mfn_x(emfn))= ); + p2m->lowest_mapped_gfn =3D gfn_min(p2m->lowest_mapped_gfn, _gfn(mfn_x(= smfn))); + + return 0; +} + +int p2m_set_entry(struct p2m_domain *p2m, gfn_t sgfn, unsigned long nr, + mfn_t smfn, p2m_type_t t, p2m_access_t a) +{ + /* + * Any reference taken by the P2M mappings (e.g. foreign mapping) will + * be dropped in relinquish_p2m_mapping(). As the P2M will still + * be accessible after, we need to prevent mapping to be added when the + * domain is dying. 
+ */ + if ( unlikely(p2m->domain->is_dying) ) + return -ENOMEM; + + return __p2m_set_entry(p2m, sgfn, nr, smfn, t, a); +} + /* * Local variables: * mode: C --=20 2.25.1 From nobody Sat May 11 10:54:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1687750796112803.9010756509522; Sun, 25 Jun 2023 20:39:56 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.554991.866527 (Exim 4.92) (envelope-from ) id 1qDd4k-0004kM-HW; Mon, 26 Jun 2023 03:39:38 +0000 Received: by outflank-mailman (output) from mailman id 554991.866527; Mon, 26 Jun 2023 03:39:38 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd4k-0004jz-AX; Mon, 26 Jun 2023 03:39:38 +0000 Received: by outflank-mailman (input) for mailman id 554991; Mon, 26 Jun 2023 03:39:36 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qDd2j-0007ej-IC for xen-devel@lists.xenproject.org; Mon, 26 Jun 2023 03:37:33 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id c8960fe6-13d2-11ee-b237-6b7b168915f2; Mon, 26 Jun 2023 05:37:32 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP 
id 2DE512F4; Sun, 25 Jun 2023 20:38:16 -0700 (PDT) Received: from a011292.shanghai.arm.com (a011292.shanghai.arm.com [10.169.190.94]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7996F3F64C; Sun, 25 Jun 2023 20:37:29 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c8960fe6-13d2-11ee-b237-6b7b168915f2 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen Subject: [PATCH v3 46/52] xen/mpu: look up entry in p2m table Date: Mon, 26 Jun 2023 11:34:37 +0800 Message-Id: <20230626033443.2943270-47-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com> References: <20230626033443.2943270-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1687750797937100001 Content-Type: text/plain; charset="utf-8" Function p2m_lookup() is responsible for looking up an entry in the p2m table. In the MPU system, we check whether the mapping exists. 
If it does, we get the details of the guest MPU memory region in domain P2M table(p2m->roo= t) through p2m_get_mpu_region() Signed-off-by: Penny Zheng Signed-off-by: Wei Chen --- v3: - new commit --- xen/arch/arm/include/asm/mpu/p2m.h | 18 ++++++++ xen/arch/arm/include/asm/p2m.h | 2 + xen/arch/arm/mpu/p2m.c | 73 ++++++++++++++++++++++++++++++ 3 files changed, 93 insertions(+) create mode 100644 xen/arch/arm/include/asm/mpu/p2m.h diff --git a/xen/arch/arm/include/asm/mpu/p2m.h b/xen/arch/arm/include/asm/= mpu/p2m.h new file mode 100644 index 0000000000..bdb33148e3 --- /dev/null +++ b/xen/arch/arm/include/asm/mpu/p2m.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _XEN_P2M_MPU_H +#define _XEN_P2M_MPU_H + +static inline bool region_is_p2m_valid(pr_t *pr) +{ + return (pr->prbar.reg.p2m_type !=3D p2m_invalid); +} + +#endif /* _XEN_P2M_MPU_H */ +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h index 68837b6df7..395bfd4f69 100644 --- a/xen/arch/arm/include/asm/p2m.h +++ b/xen/arch/arm/include/asm/p2m.h @@ -188,6 +188,8 @@ typedef enum { =20 #ifdef CONFIG_HAS_MMU #include +#else +#include #endif =20 static inline bool arch_acquire_resource_check(struct domain *d) diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c index 4838d5b625..d403479229 100644 --- a/xen/arch/arm/mpu/p2m.c +++ b/xen/arch/arm/mpu/p2m.c @@ -338,6 +338,79 @@ int p2m_set_entry(struct p2m_domain *p2m, gfn_t sgfn, = unsigned long nr, return __p2m_set_entry(p2m, sgfn, nr, smfn, t, a); } =20 +/* + * Get the details of guest MPU memory region [gfn, gfn + nr_gfns). + * + * If it is mapped, the starting MFN will be returned and according + * p2m type will get filled up. + * If it is not mapped, INVALID_MFN will be returned. 
+ */
+static mfn_t p2m_get_mpu_region(struct p2m_domain *p2m, gfn_t gfn,
+                                unsigned long nr_gfns, p2m_type_t *t,
+                                bool *valid)
+{
+    pr_t *table, *region = NULL;
+    p2m_type_t _t;
+    uint8_t idx = INVALID_REGION_IDX;
+    gfn_t egfn = gfn_add(gfn, nr_gfns);
+
+    ASSERT(p2m_is_locked(p2m));
+
+    /* Allow t to be NULL. */
+    t = t ?: &_t;
+
+    *t = p2m_invalid;
+
+    if ( valid )
+        *valid = false;
+
+    /*
+     * Check if the ending gfn is higher than the highest the p2m map
+     * currently holds, or the starting gfn lower than the lowest it holds.
+     */
+    if ( (gfn_x(egfn) > gfn_x(p2m->max_mapped_gfn)) ||
+         (gfn_x(gfn) < gfn_x(p2m->lowest_mapped_gfn)) )
+        return INVALID_MFN;
+
+    table = (pr_t *)page_to_virt(p2m->root);
+    /* The table should always be non-NULL and is always present. */
+    if ( !table )
+        ASSERT_UNREACHABLE();
+
+    if ( !is_gfns_mapped(p2m, gfn, nr_gfns, &idx) )
+        return INVALID_MFN;
+
+    region = &table[idx];
+    if ( region_is_p2m_valid(region) )
+    {
+        *t = region->prbar.reg.p2m_type;
+
+        if ( valid )
+            *valid = region_is_valid(region);
+    }
+
+    /* Always GFN == MFN in an MPU system. */
+    return _mfn(gfn_x(gfn));
+}
+
+/*
+ * Get the details of a given gfn.
+ *
+ * If the entry is present, the associated MFN will be returned and the
+ * p2m type filled in.
+ * If the entry is not present, INVALID_MFN will be returned.
+ *
+ * The page_order is meaningless in an MPU system; it is kept here
+ * to stay compatible with the MMU system.
+ */
+mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                    p2m_type_t *t, p2m_access_t *a,
+                    unsigned int *page_order,
+                    bool *valid)
+{
+    return p2m_get_mpu_region(p2m, gfn, 1, t, valid);
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Wei Chen
Subject: [PATCH v3 47/52] xen/mpu: support vcpu context switch in MPU system
Date: Mon, 26 Jun 2023 11:34:38 +0800
Message-Id: <20230626033443.2943270-48-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

When a vCPU switches into guest mode, an MMU system simply updates
VTTBR_EL2 with the incoming guest's P2M table: simple and fast. An MPU
system instead uses the MPU registers PRBAR_EL2/PRLAR_EL2 for both
stage 1 EL2 address translation and stage 2 EL1&0 address translation.
That is, the MPU memory region mapping table (xen_mpumap) must also be
updated with P2M regions during a context switch. In p2m_save_state()
we need to manually disable all P2M MPU memory regions of the
last-running vCPU, and in p2m_restore_state() we need to manually
enable the incoming guest's P2M MPU memory regions.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new commit
---
 xen/arch/arm/include/asm/arm64/sysregs.h |  3 ++
 xen/arch/arm/include/asm/page.h          |  4 ++
 xen/arch/arm/mpu/mm.c                    |  6 ++-
 xen/arch/arm/mpu/p2m.c                   | 61 ++++++++++++++++++++++++
 4 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 35d7da411d..aa6c07cd4f 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -512,6 +512,9 @@
 /* MPU Protection Region Enable Register encode */
 #define PRENR_EL2 S3_4_C6_C1_1
 
+/* Virtualization System Control Register */
+#define VSCTLR_EL2 S3_4_C2_C0_0
+
 /* Virtualization Secure Translation Control Register */
 #define VSTCR_EL2 S3_4_C2_C6_2
 #define VSTCR_EL2_RES1_SHIFT 31
diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/page.h
index a434e2205a..e28c3d59c5 100644
--- a/xen/arch/arm/include/asm/page.h
+++ b/xen/arch/arm/include/asm/page.h
@@ -99,22 +99,26 @@
  * [7] Region Present
  * [8] Transient Region, e.g.
 *     MPU memory region is temporarily mapped for a short time
+ * [9] P2M Region for stage 2 translation
  */
 #define _PAGE_AI_BIT        0
 #define _PAGE_XN_BIT        3
 #define _PAGE_AP_BIT        5
 #define _PAGE_PRESENT_BIT   7
 #define _PAGE_TRANSIENT_BIT 8
+#define _PAGE_P2M_BIT       9
 #define _PAGE_AI        (7U << _PAGE_AI_BIT)
 #define _PAGE_XN        (2U << _PAGE_XN_BIT)
 #define _PAGE_RO        (2U << _PAGE_AP_BIT)
 #define _PAGE_PRESENT   (1U << _PAGE_PRESENT_BIT)
 #define _PAGE_TRANSIENT (1U << _PAGE_TRANSIENT_BIT)
+#define _PAGE_P2M       (1U << _PAGE_P2M_BIT)
 #define PAGE_AI_MASK(x)        (((x) >> _PAGE_AI_BIT) & 0x7U)
 #define PAGE_XN_MASK(x)        (((x) >> _PAGE_XN_BIT) & 0x3U)
 #define PAGE_AP_MASK(x)        (((x) >> _PAGE_AP_BIT) & 0x3U)
 #define PAGE_RO_MASK(x)        (((x) >> _PAGE_AP_BIT) & 0x2U)
 #define PAGE_TRANSIENT_MASK(x) (((x) >> _PAGE_TRANSIENT_BIT) & 0x1U)
+#define PAGE_P2M_MASK(x)       (((x) >> _PAGE_P2M_BIT) & 0x1U)
 #endif /* CONFIG_HAS_MPU */
 
 /*
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 8cdb7d7219..c6b287b3aa 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -580,7 +580,11 @@ int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags)
 {
     int rc;
 
-    if ( flags_has_rwx(flags) )
+    /*
+     * Mappings should not be both Writeable and Executable, unless
+     * it is for a guest P2M mapping.
+     */
+    if ( flags_has_rwx(flags) && !PAGE_P2M_MASK(flags) )
     {
         region_printk("Mappings should not be both Writeable and Executable\n");
         return -EINVAL;
diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c
index d403479229..e21b76813d 100644
--- a/xen/arch/arm/mpu/p2m.c
+++ b/xen/arch/arm/mpu/p2m.c
@@ -411,6 +411,67 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
     return p2m_get_mpu_region(p2m, gfn, 1, t, valid);
 }
 
+static unsigned int build_p2m_memory_region_flags(pr_t *p2m_region)
+{
+    return (p2m_region->prlar.reg.ai << _PAGE_AI_BIT |
+            p2m_region->prbar.reg.ap << _PAGE_AP_BIT |
+            p2m_region->prbar.reg.xn << _PAGE_XN_BIT);
+}
+
+static int p2m_xenmpu_update(struct p2m_domain *p2m, bool online)
+{
+    pr_t *p2m_table;
+    unsigned int i = 0;
+    unsigned int flags = online ? (_PAGE_PRESENT | _PAGE_P2M) : 0;
+
+    p2m_table = (pr_t *)page_to_virt(p2m->root);
+    if ( !p2m_table )
+        return -EINVAL;
+
+    for ( ; i < p2m->nr_regions; i++ )
+    {
+        paddr_t base = pr_get_base(&p2m_table[i]);
+        paddr_t limit = pr_get_limit(&p2m_table[i]);
+        unsigned int region_flags;
+
+        region_flags = build_p2m_memory_region_flags(&p2m_table[i]) | flags;
+        if ( xen_mpumap_update(base, limit + 1, region_flags) )
+        {
+            printk(XENLOG_G_ERR "p2m: unable to update MPU memory mapping with P2M region 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+                   base, limit + 1);
+            return -EINVAL;
+        }
+    }
+
+    return 0;
+}
+
+/* p2m_save_state() and p2m_restore_state() work as a pair.
+ */
+void p2m_save_state(struct vcpu *p)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(p->domain);
+
+    p->arch.sctlr = READ_SYSREG(SCTLR_EL1);
+
+    if ( p2m_xenmpu_update(p2m, false) )
+        panic("Failed to offline P2M MPU memory mapping\n");
+}
+
+void p2m_restore_state(struct vcpu *n)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(n->domain);
+    uint8_t *last_vcpu_ran = &p2m->last_vcpu_ran[smp_processor_id()];
+
+    WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1);
+    WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2);
+
+    WRITE_SYSREG64(p2m->vsctlr, VSCTLR_EL2);
+    if ( p2m_xenmpu_update(p2m, true) )
+        panic("Failed to online P2M MPU memory mapping\n");
+
+    *last_vcpu_ran = n->vcpu_id;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Wei Chen
Subject: [PATCH v3 48/52] xen/mpu: enable MMIO region trap in MPU system
Date: Mon, 26 Jun 2023 11:34:39 +0800
Message-Id: <20230626033443.2943270-49-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

In an MPU system, MMIO region traps caused by insufficient access
permissions lead to a data abort with a Permission Fault. This differs
from an MMU system, which generates a Translation Fault instead.
We extract the common code for handling an MMIO trap into a new helper,
do_mmio_trap_stage2_abort_guest().

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new commit
---
 xen/arch/arm/traps.c | 81 +++++++++++++++++++++++++++++---------------
 1 file changed, 54 insertions(+), 27 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ef5c6a8195..bffa147c36 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1848,6 +1848,45 @@ static inline bool check_p2m(bool is_data, paddr_t gpa)
     return false;
 }
 
+static int do_mmio_trap_stage2_abort_guest(struct cpu_user_regs *regs,
+                                           const union hsr hsr,
+                                           mmio_info_t *info,
+                                           vaddr_t gva, paddr_t gpa)
+{
+    enum io_state state;
+
+    state = try_handle_mmio(regs, info);
+    switch ( state )
+    {
+    case IO_ABORT:
+        goto inject_abt;
+    case IO_HANDLED:
+        /*
+         * If the instruction was decoded and has executed successfully
+         * on the MMIO region, then Xen should execute the next part of
+         * the instruction (e.g. increment rn if it is a post-indexing
+         * instruction).
+         */
+        finalize_instr_emulation(&info->dabt_instr);
+        advance_pc(regs, hsr);
+        return 0;
+    case IO_RETRY:
+        /* Finish later. */
+        return 0;
+    case IO_UNHANDLED:
+        /* IO unhandled, try another way to handle it.
+         */
+        return -EFAULT;
+    }
+
+inject_abt:
+    gdprintk(XENLOG_DEBUG,
+             "HSR=%#"PRIregister" pc=%#"PRIregister" gva=%#"PRIvaddr" gpa=%#"PRIpaddr"\n",
+             hsr.bits, regs->pc, gva, gpa);
+    inject_dabt_exception(regs, gva, hsr.len);
+
+    return 0;
+}
+
 static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
                                        const union hsr hsr)
 {
@@ -1862,7 +1901,6 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
     uint8_t fsc = xabt.fsc & ~FSC_LL_MASK;
     bool is_data = (hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
     mmio_info_t info;
-    enum io_state state;
 
     /*
      * If this bit has been set, it means that this stage-2 abort is caused
@@ -1896,6 +1934,8 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
         return; /* Try again */
     }
 
+    info.gpa = gpa;
+    info.dabt = hsr.dabt;
     switch ( fsc )
     {
     case FSC_FLT_PERM:
@@ -1909,6 +1949,17 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
         };
 
         p2m_mem_access_check(gpa, gva, npfec);
+
+#ifdef CONFIG_HAS_MPU
+        /*
+         * MMIO region traps caused by insufficient access permissions
+         * lead to a data abort with a Permission Fault.
+         */
+        if ( is_data &&
+             (do_mmio_trap_stage2_abort_guest(regs, hsr, &info, gva, gpa) == 0) )
+            return;
+#endif
+
         /*
          * The only way to get here right now is because of mem_access,
          * thus reinjecting the exception to the guest is never required.
@@ -1917,9 +1968,6 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
     }
     case FSC_FLT_TRANS:
     {
-        info.gpa = gpa;
-        info.dabt = hsr.dabt;
-
         /*
          * Assumption :- Most of the times when we get a data abort and the ISS
          * is invalid or an instruction abort, the underlying cause is that the
@@ -1948,29 +1996,8 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
         if ( info.dabt_instr.state == INSTR_ERROR )
             goto inject_abt;
 
-        state = try_handle_mmio(regs, &info);
-
-        switch ( state )
-        {
-        case IO_ABORT:
-            goto inject_abt;
-        case IO_HANDLED:
-            /*
-             * If the instruction was decoded and has executed successfully
-             * on the MMIO region, then Xen should execute the next part of
-             * the instruction. (for eg increment the rn if it is a
-             * post-indexing instruction.
-             */
-            finalize_instr_emulation(&info.dabt_instr);
-            advance_pc(regs, hsr);
-            return;
-        case IO_RETRY:
-            /* finish later */
-            return;
-        case IO_UNHANDLED:
-            /* IO unhandled, try another way to handle it.
-             */
-            break;
-        }
+        if ( do_mmio_trap_stage2_abort_guest(regs, hsr, &info, gva, gpa) == 0 )
+            return;
 
         /*
          * If the instruction syndrome was invalid, then we already checked if
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Wei Chen
Subject: [PATCH v3 49/52] xen/mpu: enable device passthrough in MPU system
Date: Mon, 26 Jun 2023 11:34:40 +0800
Message-Id: <20230626033443.2943270-50-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

To enable device passthrough in an MPU system, we only need to provide
the p2m_mmio_direct_dev permission setup.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new commit
---
 xen/arch/arm/mpu/p2m.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c
index e21b76813d..a68a06105f 100644
--- a/xen/arch/arm/mpu/p2m.c
+++ b/xen/arch/arm/mpu/p2m.c
@@ -185,11 +185,15 @@ static void p2m_set_permission(pr_t *region, p2m_type_t t)
         region->prbar.reg.ap = AP_RO_ALL;
         break;
 
+    case p2m_mmio_direct_dev:
+        region->prbar.reg.xn = XN_P2M_ENABLED;
+        region->prbar.reg.ap = AP_RW_ALL;
+        break;
+
     case p2m_max_real_type:
         BUG();
         break;
 
-    case p2m_mmio_direct_dev:
     case p2m_mmio_direct_nc:
     case p2m_mmio_direct_c:
     case p2m_iommu_map_ro:
@@ -233,6 +237,11 @@ static inline pr_t region_to_p2m_entry(mfn_t smfn, unsigned long nr_mfn,
         prlar.reg.ai = MT_NORMAL;
         break;
 
+    case p2m_mmio_direct_dev:
+        prbar.reg.sh = LPAE_SH_OUTER;
+        prlar.reg.ai = MT_DEVICE_nGnRE;
+        break;
+
     default:
         panic(XENLOG_G_ERR "p2m: UNIMPLEMENTED p2m type in MPU system\n");
         break;
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Wei Chen
Subject: [PATCH v3 50/52] xen/mpu: dump debug message in MPU system
Date: Mon, 26 Jun 2023 11:34:41 +0800
Message-Id: <20230626033443.2943270-51-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
A set of helpers, dump_xxx() and show_registers(), are responsible for
dumping memory mapping and register information when debugging. In this
commit, we implement them all for the MPU system too.

Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new commit
---
 xen/arch/arm/include/asm/mpu/mm.h |  3 +++
 xen/arch/arm/mpu/mm.c             | 35 +++++++++++++++++++++++++++++++
 xen/arch/arm/mpu/p2m.c            | 11 ++++++++++
 xen/arch/arm/p2m.c                |  4 ++++
 xen/arch/arm/traps.c              | 16 ++++++++++++++
 5 files changed, 69 insertions(+)

diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
index 0abb0a6c92..d3dcf0024a 100644
--- a/xen/arch/arm/include/asm/mpu/mm.h
+++ b/xen/arch/arm/include/asm/mpu/mm.h
@@ -21,6 +21,9 @@ extern pr_t *alloc_mpumap(void);
 extern int mpumap_contain_region(pr_t *table, uint8_t nr_regions,
                                  paddr_t base, paddr_t limit, uint8_t *index);
 
+/* Print a walk of an MPU memory mapping table. */
+void dump_mpu_walk(pr_t *table, uint8_t nr_regions);
+
 #endif /* __ARCH_ARM_MM_MPU__ */
 
 /*
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index c6b287b3aa..ef8a327037 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -898,6 +898,41 @@ pr_t *alloc_mpumap(void)
     return map;
 }
 
+void dump_mpu_walk(pr_t *table, uint8_t nr_regions)
+{
+    uint8_t i = 0;
+
+    for ( ; i < nr_regions; i++ )
+    {
+        paddr_t base, limit;
+
+        if ( region_is_valid(&table[i]) )
+        {
+            base = pr_get_base(&table[i]);
+            limit = pr_get_limit(&table[i]);
+
+            printk(XENLOG_INFO
+                   "Walking MPU memory mapping table: Region[%u]: 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+                   i, base, limit);
+        }
+    }
+}
+
+void dump_hyp_walk(vaddr_t addr)
+{
+    uint8_t i = 0;
+    pr_t region;
+
+    for ( i = 0; i < max_xen_mpumap; i++ )
+    {
+        read_protection_region(&region, i);
+        if ( region_is_valid(&region) )
+            printk(XENLOG_INFO
+                   "Walking hypervisor MPU memory region [%u]: 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+                   i, pr_get_base(&region), pr_get_limit(&region));
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c
index a68a06105f..87e350270d 100644
--- a/xen/arch/arm/mpu/p2m.c
+++ b/xen/arch/arm/mpu/p2m.c
@@ -481,6 +481,17 @@ void p2m_restore_state(struct vcpu *n)
     *last_vcpu_ran = n->vcpu_id;
 }
 
+void p2m_dump_info(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m_read_lock(p2m);
+    printk("p2m mappings for domain %d (vmid %d):\n",
+           d->domain_id, p2m->vmid);
+    printk("  Number of P2M Memory Regions: %u\n", p2m->nr_regions);
+    p2m_read_unlock(p2m);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e29b11334e..d3961997d0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -51,8 +51,12 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
     printk("P2M @ %p mfn:%#"PRI_mfn"\n",
            p2m->root, mfn_x(page_to_mfn(p2m->root)));
 
+#ifndef CONFIG_HAS_MPU
     dump_pt_walk(page_to_maddr(p2m->root), addr,
                  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
+#else
+    dump_mpu_walk((pr_t *)page_to_virt(p2m->root), p2m->nr_regions);
+#endif
 }
 
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index bffa147c36..0592eee91c 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -710,7 +710,11 @@ struct reg_ctxt {
 #endif
 
     /* Hypervisor-side state */
+#ifdef CONFIG_HAS_MPU
+    uint64_t vsctlr_el2;
+#else
     uint64_t vttbr_el2;
+#endif
 };
 
 static const char *mode_string(register_t cpsr)
@@ -908,7 +912,11 @@ static void _show_registers(const struct cpu_user_regs *regs,
 #endif
     }
     printk("  VTCR_EL2: %"PRIregister"\n", READ_SYSREG(VTCR_EL2));
+#ifndef CONFIG_HAS_MPU
     printk(" VTTBR_EL2: %016"PRIx64"\n", ctxt->vttbr_el2);
+#else
+    printk(" VSCTLR_EL2: %016"PRIx64"\n", ctxt->vsctlr_el2);
+#endif
     printk("\n");
 
     printk(" SCTLR_EL2: %"PRIregister"\n", READ_SYSREG(SCTLR_EL2));
@@ -945,7 +953,11 @@ void show_registers(const struct cpu_user_regs *regs)
     if ( guest_mode(regs) && is_32bit_domain(current->domain) )
         ctxt.ifsr32_el2 = READ_SYSREG(IFSR32_EL2);
 #endif
+#ifndef CONFIG_HAS_MPU
     ctxt.vttbr_el2 = READ_SYSREG64(VTTBR_EL2);
+#else
+    ctxt.vsctlr_el2 = READ_SYSREG64(VSCTLR_EL2);
+#endif
 
     _show_registers(regs, &ctxt, guest_mode(regs), current);
 }
@@ -968,7 +980,11 @@ void vcpu_show_registers(const struct vcpu *v)
         ctxt.ifsr32_el2 = v->arch.ifsr;
 #endif
 
+#ifdef CONFIG_HAS_MPU
+    ctxt.vsctlr_el2 = v->domain->arch.p2m.vsctlr;
+#else
     ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
+#endif
 
     _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
 }
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Wei Chen
Subject: [PATCH v3 51/52] xen/mpu: create stubs of function/variables for UNSUPPORTED features
Date: Mon, 26 Jun 2023 11:34:42 +0800
Message-Id: <20230626033443.2943270-52-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>

As we are not introducing features like SMP, SET/WAY emulation, etc. in
the MPU system, we create empty stubs of functions/variables and print
warnings for these UNSUPPORTED features.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- new commit
---
 xen/arch/arm/arm64/mpu/head.S |  6 ++++++
 xen/arch/arm/mpu/mm.c         | 16 ++++++++++++++++
 xen/arch/arm/mpu/p2m.c        | 16 ++++++++++++++++
 3 files changed, 38 insertions(+)

diff --git a/xen/arch/arm/arm64/mpu/head.S b/xen/arch/arm/arm64/mpu/head.S
index 147a01e977..9f3c5b8990 100644
--- a/xen/arch/arm/arm64/mpu/head.S
+++ b/xen/arch/arm/arm64/mpu/head.S
@@ -241,6 +241,12 @@ ENTRY(setup_early_uart)
 #endif
 ENDPROC(setup_early_uart)
 
+ENTRY(enable_runtime_mm)
+    PRINT("- SMP NOT SUPPORTED -\r\n")
+1:  wfe
+    b   1b
+ENDPROC(enable_runtime_mm)
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index ef8a327037..8a554a950b 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -933,6 +933,22 @@ void dump_hyp_walk(vaddr_t addr)
     }
 }
 
+void mm_init_secondary_cpu(void)
+{
+    printk(XENLOG_ERR "SMP not *SUPPORTED*\n");
+}
+
+int init_secondary_mm(int cpu)
+{
+    printk(XENLOG_ERR "mpu: SMP not *SUPPORTED*\n");
+    return -EINVAL;
+}
+
+void update_mm_mapping(bool enable)
+{
+    printk(XENLOG_ERR "mpu: SMP not *SUPPORTED*\n");
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/mpu/p2m.c b/xen/arch/arm/mpu/p2m.c
index 87e350270d..4bc09326f5 100644
--- a/xen/arch/arm/mpu/p2m.c
+++ b/xen/arch/arm/mpu/p2m.c
@@ -492,6 +492,22 @@ void p2m_dump_info(struct domain *d)
     p2m_read_unlock(p2m);
 }
 
+void setup_virt_paging_one(void *data)
+{
+    printk(XENLOG_ERR "mpu: SMP not *SUPPORTED*\n");
+}
+
+void p2m_invalidate_root(struct p2m_domain *p2m)
+{
+    printk(XENLOG_ERR "mpu: p2m_invalidate_root() not *SUPPORTED*\n");
+}
+
+bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn)
+{
+    printk(XENLOG_ERR "mpu: p2m_resolve_translation_fault() not *SUPPORTED*\n");
+    return false;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.25.1

From nobody Sat May 11 10:54:28 2024
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk, Penny Zheng, Wei Chen
Subject: [PATCH v3 52/52] xen/arm: add Kconfig option CONFIG_HAS_MPU to enable MPU system support
Date: Mon, 26 Jun 2023 11:34:43 +0800
Message-Id: <20230626033443.2943270-53-Penny.Zheng@arm.com>
In-Reply-To: <20230626033443.2943270-1-Penny.Zheng@arm.com>
References: <20230626033443.2943270-1-Penny.Zheng@arm.com>
Content-Type: text/plain; charset="utf-8"

Introduce a Kconfig option CONFIG_HAS_MPU to enable MPU architecture
support. STATIC_MEMORY, ARCH_MAP_DOMAIN_PAGE and ARM_SECURE_STATE are
selected by the MPU system by default. Features such as ARM_EFI are
not supported right now. The current MPU system design is only for
64-bit Arm platforms.
Signed-off-by: Penny Zheng
Signed-off-by: Wei Chen
---
v3:
- select ARCH_MAP_DOMAIN_PAGE and ARM_SECURE_STATE
- remove platform-specific config: CONFIG_ARM_V8R
---
 xen/arch/arm/Kconfig | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 3f67aacbbf..2acdf39ec8 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -62,6 +62,7 @@ source "arch/Kconfig"
 config HAS_MMU
 	bool "Memory Management Unit support in a VMSA system"
 	default y
+	depends on !HAS_MPU
 	select HAS_PAGING_MEMPOOL
 	select HAS_PMAP
 	select HAS_VMAP
@@ -70,6 +71,17 @@ config HAS_MMU
 	  a memory system through a set of virtual to physical address mappings and
 	  associated memory properties held in memory-mapped tables known as
 	  translation tables.
 
+config HAS_MPU
+	bool "Memory Protection Unit support in a PMSA system"
+	default n
+	depends on ARM_64
+	select ARCH_MAP_DOMAIN_PAGE
+	select ARM_SECURE_STATE
+	select STATIC_MEMORY
+	help
+	  The PMSA is based on a Memory Protection Unit (MPU), which provides a much
+	  simpler memory protection scheme than the MMU based VMSA.
+
 config HAS_FIXMAP
 	bool "Provide special-purpose 4K mapping slots in a VMSA"
 	depends on HAS_MMU
@@ -85,7 +97,7 @@ config ACPI
 
 config ARM_EFI
 	bool "UEFI boot service support"
-	depends on ARM_64
+	depends on ARM_64 && !HAS_MPU
 	default y
 	help
 	  This option provides support for boot services through
-- 
2.25.1