From: Elias El Yandouzi
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Stefano Stabellini, Bertrand Marquis, Michal Orzel,
    Volodymyr Babchuk, Elias El Yandouzi
Subject: [PATCH v2] xen/arm64: mm: Use per-pCPU page-tables
Date: Tue, 16 Jan 2024 18:50:54 +0000
Message-ID: <20240116185056.15000-26-eliasely@amazon.com>
In-Reply-To: <20240116185056.15000-1-eliasely@amazon.com>
References: <20240116185056.15000-1-eliasely@amazon.com>

From: Julien Grall

At the moment, on Arm64, every pCPU shares the same page-tables. In a
follow-up patch, we will allow the direct map to be removed, and it will
therefore be necessary to have a mapcache.

While we have plenty of spare virtual address space to reserve a part for
each pCPU, shared page-tables mean that temporary mappings (e.g. guest
memory) would be accessible by every pCPU. To improve our security
posture, it would be better if those mappings were only accessible by the
pCPU doing the temporary mapping.

In addition, per-pCPU page-tables open the way to a per-domain mapping
area.

Arm32 already uses per-pCPU page-tables, so most of the code can be
re-used. Arm64 doesn't yet support the mapcache, so a stub is provided
(moved to its own header, asm/domain_page.h).

Take the opportunity to fix a typo in a comment that is modified.
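For readers less familiar with the existing Arm32 scheme this patch
generalises, the snippet below is a minimal illustration (not part of the
patch) of the per-pCPU root lookup pattern it relies on; the helper
update_root_entry() and its slot parameter are hypothetical names used only
for this sketch.

    /* Illustration only: each pCPU publishes its own root page-table pointer. */
    DECLARE_PER_CPU(lpae_t *, xen_pgtable);

    /* Hypothetical helper: write an entry into the current pCPU's root table. */
    static void update_root_entry(unsigned int slot, lpae_t pte)
    {
        /* Look up this pCPU's root rather than a shared global table. */
        lpae_t *root = this_cpu(xen_pgtable);

        write_pte(&root[slot], pte);
    }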
Signed-off-by: Julien Grall
Signed-off-by: Elias El Yandouzi

---
Changelog since v1:
 * Rebase
 * Fix typos

diff --git a/xen/arch/arm/arm64/mmu/mm.c b/xen/arch/arm/arm64/mmu/mm.c
index d2651c9486..4f339efb7b 100644
--- a/xen/arch/arm/arm64/mmu/mm.c
+++ b/xen/arch/arm/arm64/mmu/mm.c
@@ -75,6 +75,7 @@ static void __init prepare_runtime_identity_mapping(void)
     paddr_t id_addr = virt_to_maddr(_start);
     lpae_t pte;
     DECLARE_OFFSETS(id_offsets, id_addr);
+    lpae_t *root = this_cpu(xen_pgtable);
 
     if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
         panic("Cannot handle ID mapping above %uTB\n",
@@ -85,7 +86,7 @@ static void __init prepare_runtime_identity_mapping(void)
     pte.pt.table = 1;
     pte.pt.xn = 0;
 
-    write_pte(&xen_pgtable[id_offsets[0]], pte);
+    write_pte(&root[id_offsets[0]], pte);
 
     /* Link second ID table */
     pte = pte_of_xenaddr((vaddr_t)xen_second_id);
diff --git a/xen/arch/arm/domain_page.c b/xen/arch/arm/domain_page.c
index 3a43601623..ac2a6d0332 100644
--- a/xen/arch/arm/domain_page.c
+++ b/xen/arch/arm/domain_page.c
@@ -3,6 +3,8 @@
 #include
 #include
 
+#include
+
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 856f2dbec4..87a315db01 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -1,12 +1,6 @@
 #ifndef __ARM_ARM32_MM_H__
 #define __ARM_ARM32_MM_H__
 
-#include
-
-#include
-
-DECLARE_PER_CPU(lpae_t *, xen_pgtable);
-
 /*
  * Only a limited amount of RAM, called xenheap, is always mapped on ARM32.
  * For convenience always return false.
@@ -16,8 +10,6 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
     return false;
 }
 
-bool init_domheap_mappings(unsigned int cpu);
-
 static inline void arch_setup_page_tables(void)
 {
 }
diff --git a/xen/arch/arm/include/asm/domain_page.h b/xen/arch/arm/include/asm/domain_page.h
new file mode 100644
index 0000000000..e9f52685e2
--- /dev/null
+++ b/xen/arch/arm/include/asm/domain_page.h
@@ -0,0 +1,13 @@
+#ifndef __ASM_ARM_DOMAIN_PAGE_H__
+#define __ASM_ARM_DOMAIN_PAGE_H__
+
+#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
+bool init_domheap_mappings(unsigned int cpu);
+#else
+static inline bool init_domheap_mappings(unsigned int cpu)
+{
+    return true;
+}
+#endif
+
+#endif /* __ASM_ARM_DOMAIN_PAGE_H__ */
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 9a94d7eaf7..a76578a16f 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -2,6 +2,9 @@
 #define __ARCH_ARM_MM__
 
 #include
+#include
+
+#include
 #include
 #include
 #include
diff --git a/xen/arch/arm/include/asm/mmu/mm.h b/xen/arch/arm/include/asm/mmu/mm.h
index c5e03a66bf..c03c3a51e4 100644
--- a/xen/arch/arm/include/asm/mmu/mm.h
+++ b/xen/arch/arm/include/asm/mmu/mm.h
@@ -2,6 +2,8 @@
 #ifndef __ARM_MMU_MM_H__
 #define __ARM_MMU_MM_H__
 
+DECLARE_PER_CPU(lpae_t *, xen_pgtable);
+
 /* Non-boot CPUs use this to find the correct pagetables. */
 extern uint64_t init_ttbr;
 
diff --git a/xen/arch/arm/mmu/pt.c b/xen/arch/arm/mmu/pt.c
index a7755728ae..e772ab4e66 100644
--- a/xen/arch/arm/mmu/pt.c
+++ b/xen/arch/arm/mmu/pt.c
@@ -606,9 +606,9 @@ static int xen_pt_update(unsigned long virt,
     unsigned long left = nr_mfns;
 
     /*
-     * For arm32, page-tables are different on each CPUs. Yet, they share
-     * some common mappings. It is assumed that only common mappings
-     * will be modified with this function.
+     * Page-tables are different on each CPU. Yet, they share some common
+     * mappings. It is assumed that only common mappings will be modified
+     * with this function.
      *
      * XXX: Add a check.
      */
diff --git a/xen/arch/arm/mmu/setup.c b/xen/arch/arm/mmu/setup.c
index 57f1b46499..8c81e26da3 100644
--- a/xen/arch/arm/mmu/setup.c
+++ b/xen/arch/arm/mmu/setup.c
@@ -26,17 +26,15 @@
  * PCPUs.
  */
 
-#ifdef CONFIG_ARM_64
-DEFINE_PAGE_TABLE(xen_pgtable);
-static DEFINE_PAGE_TABLE(xen_first);
-#define THIS_CPU_PGTABLE xen_pgtable
-#else
 /* Per-CPU pagetable pages */
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
 DEFINE_PER_CPU(lpae_t *, xen_pgtable);
 #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
 /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
 static DEFINE_PAGE_TABLE(cpu0_pgtable);
+
+#ifdef CONFIG_ARM_64
+static DEFINE_PAGE_TABLE(xen_first);
 #endif
 
 /* Common pagetable leaves */
@@ -228,19 +226,22 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     lpae_t pte, *p;
     int i;
 
+    p = cpu0_pgtable;
+
     phys_offset = boot_phys_offset;
 
+    /* arch_setup_page_tables() may need to access the root page-tables. */
+    per_cpu(xen_pgtable, 0) = cpu0_pgtable;
+
     arch_setup_page_tables();
 
 #ifdef CONFIG_ARM_64
     pte = pte_of_xenaddr((uintptr_t)xen_first);
     pte.pt.table = 1;
     pte.pt.xn = 0;
-    xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] = pte;
+    p[zeroeth_table_offset(XEN_VIRT_START)] = pte;
 
-    p = (void *) xen_first;
-#else
-    p = (void *) cpu0_pgtable;
+    p = xen_first;
 #endif
 
     /* Map xen second level page-table */
@@ -283,19 +284,11 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     pte.pt.table = 1;
     xen_second[second_table_offset(FIXMAP_ADDR(0))] = pte;
 
-#ifdef CONFIG_ARM_64
-    ttbr = (uintptr_t) xen_pgtable + phys_offset;
-#else
     ttbr = (uintptr_t) cpu0_pgtable + phys_offset;
-#endif
 
     switch_ttbr(ttbr);
 
     xen_pt_enforce_wnx();
-
-#ifdef CONFIG_ARM_32
-    per_cpu(xen_pgtable, 0) = cpu0_pgtable;
-#endif
 }
 
 void *__init arch_vmap_virt_end(void)
diff --git a/xen/arch/arm/mmu/smpboot.c b/xen/arch/arm/mmu/smpboot.c
index fb5df667ba..fdd9b9c580 100644
--- a/xen/arch/arm/mmu/smpboot.c
+++ b/xen/arch/arm/mmu/smpboot.c
@@ -7,6 +7,7 @@
 
 #include
 
+#include
 #include
 
 /*
@@ -68,20 +69,6 @@ static void clear_boot_pagetables(void)
     clear_table(boot_third);
 }
 
-#ifdef CONFIG_ARM_64
-int prepare_secondary_mm(int cpu)
-{
-    clear_boot_pagetables();
-
-    /*
-     * Set init_ttbr for this CPU coming up. All CPUs share a single set of
-     * pagetables, but rewrite it each time for consistency with 32 bit.
-     */
-    init_ttbr = virt_to_maddr(xen_pgtable);
-    clean_dcache(init_ttbr);
-    return 0;
-}
-#else
 int prepare_secondary_mm(int cpu)
 {
     lpae_t *root = alloc_xenheap_page();
@@ -112,7 +99,6 @@ int prepare_secondary_mm(int cpu)
 
     return 0;
 }
-#endif
 
 /*
  * Local variables:
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7e28f62d09..3dec365c57 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
-- 
2.40.1