From: Elias El Yandouzi
To:
CC: , , , Hongyan Xia, Julien Grall, Elias El Yandouzi
Subject: [PATCH V3 01/19] x86: Create per-domain mapping of guest_root_pt
Date: Mon, 13 May 2024 11:10:59 +0000
Message-ID: <20240513111117.68828-2-eliasely@amazon.com>
In-Reply-To: <20240513111117.68828-1-eliasely@amazon.com>
References: <20240513111117.68828-1-eliasely@amazon.com>
X-Mailer: git-send-email 2.40.1

From: Hongyan Xia

Create a per-domain mapping of the PV guest_root_pt, as the direct map is
being removed.

Note that we do not map and unmap root_pgt for now since it is still a
xenheap page.

Signed-off-by: Hongyan Xia
Signed-off-by: Julien Grall
Signed-off-by: Elias El Yandouzi

----
    Changes in V3:
        * Rename SHADOW_ROOT
        * Haven't addressed the potential over-allocation issue as I don't get it

    Changes in V2:
        * Rework the shadow perdomain mapping solution in the follow-up patches

    Changes since Hongyan's version:
        * Remove the final dot in the commit title

diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
index ab7288cb36..5d710384df 100644
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -203,7 +203,7 @@ extern unsigned char boot_edid_info[128];
 /* Slot 260: per-domain mappings (including map cache). */
 #define PERDOMAIN_VIRT_START    (PML4_ADDR(260))
 #define PERDOMAIN_SLOT_MBYTES   (PML4_ENTRY_BYTES >> (20 + PAGETABLE_ORDER))
-#define PERDOMAIN_SLOTS         3
+#define PERDOMAIN_SLOTS         4
 #define PERDOMAIN_VIRT_SLOT(s)  (PERDOMAIN_VIRT_START + (s) * \
                                  (PERDOMAIN_SLOT_MBYTES << 20))
 /* Slot 4: mirror of per-domain mappings (for compat xlat area accesses). */
@@ -317,6 +317,14 @@ extern unsigned long xen_phys_start;
 #define ARG_XLAT_START(v)        \
     (ARG_XLAT_VIRT_START + ((v)->vcpu_id << ARG_XLAT_VA_SHIFT))
 
+/* pv_root_pt mapping area. The fourth per-domain-mapping sub-area */
+#define PV_ROOT_PT_MAPPING_VIRT_START  PERDOMAIN_VIRT_SLOT(3)
+#define PV_ROOT_PT_MAPPING_ENTRIES     MAX_VIRT_CPUS
+
+/* The address of a particular VCPU's PV_ROOT_PT */
+#define PV_ROOT_PT_MAPPING_VCPU_VIRT_START(v) \
+    (PV_ROOT_PT_MAPPING_VIRT_START + ((v)->vcpu_id * PAGE_SIZE))
+
 #define ELFSIZE 64
 
 #define ARCH_CRASH_SAVE_VMCOREINFO
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index f5daeb182b..8a97530607 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -272,6 +272,7 @@ struct time_scale {
 struct pv_domain
 {
     l1_pgentry_t **gdt_ldt_l1tab;
+    l1_pgentry_t **root_pt_l1tab;
 
     atomic_t nr_l4_pages;
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d968bbbc73..efdf20f775 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -505,6 +505,13 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     nrspin_unlock(&d->page_alloc_lock);
 }
 
+#define pv_root_pt_idx(v) \
+    ((v)->vcpu_id >> PAGETABLE_ORDER)
+
+#define pv_root_pt_pte(v) \
+    ((v)->domain->arch.pv.root_pt_l1tab[pv_root_pt_idx(v)] + \
+     ((v)->vcpu_id & (L1_PAGETABLE_ENTRIES - 1)))
+
 void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;
@@ -524,6 +531,13 @@ void write_ptbase(struct vcpu *v)
 
     if ( is_pv_vcpu(v) && v->domain->arch.pv.xpti )
     {
+        mfn_t guest_root_pt = _mfn(MASK_EXTR(v->arch.cr3, PAGE_MASK));
+        l1_pgentry_t *pte = pv_root_pt_pte(v);
+
+        ASSERT(v == current);
+
+        l1e_write(pte, l1e_from_mfn(guest_root_pt, __PAGE_HYPERVISOR_RO));
+
         cpu_info->root_pgt_changed = true;
         cpu_info->pv_cr3 = __pa(this_cpu(root_pgt));
         if ( new_cr4 & X86_CR4_PCIDE )
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 2a445bb17b..1b025986f7 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -288,6 +288,21 @@ static void pv_destroy_gdt_ldt_l1tab(struct vcpu *v)
                               1U << GDT_LDT_VCPU_SHIFT);
 }
 
+static int pv_create_root_pt_l1tab(struct vcpu *v)
+{
+    return create_perdomain_mapping(v->domain,
+                                    PV_ROOT_PT_MAPPING_VCPU_VIRT_START(v),
+                                    1, v->domain->arch.pv.root_pt_l1tab,
+                                    NULL);
+}
+
+static void pv_destroy_root_pt_l1tab(struct vcpu *v)
+
+{
+    destroy_perdomain_mapping(v->domain,
+                              PV_ROOT_PT_MAPPING_VCPU_VIRT_START(v), 1);
+}
+
 void pv_vcpu_destroy(struct vcpu *v)
 {
     if ( is_pv_32bit_vcpu(v) )
@@ -297,6 +312,7 @@ void pv_vcpu_destroy(struct vcpu *v)
     }
 
     pv_destroy_gdt_ldt_l1tab(v);
+    pv_destroy_root_pt_l1tab(v);
     XFREE(v->arch.pv.trap_ctxt);
 }
 
@@ -311,6 +327,13 @@ int pv_vcpu_initialise(struct vcpu *v)
     if ( rc )
         return rc;
 
+    if ( v->domain->arch.pv.xpti )
+    {
+        rc = pv_create_root_pt_l1tab(v);
+        if ( rc )
+            goto done;
+    }
+
     BUILD_BUG_ON(X86_NR_VECTORS * sizeof(*v->arch.pv.trap_ctxt) >
                  PAGE_SIZE);
     v->arch.pv.trap_ctxt = xzalloc_array(struct trap_info, X86_NR_VECTORS);
@@ -346,10 +369,12 @@ void pv_domain_destroy(struct domain *d)
 
     destroy_perdomain_mapping(d, GDT_LDT_VIRT_START,
                               GDT_LDT_MBYTES << (20 - PAGE_SHIFT));
+    destroy_perdomain_mapping(d, PV_ROOT_PT_MAPPING_VIRT_START, PV_ROOT_PT_MAPPING_ENTRIES);
 
     XFREE(d->arch.pv.cpuidmasks);
 
     FREE_XENHEAP_PAGE(d->arch.pv.gdt_ldt_l1tab);
+    FREE_XENHEAP_PAGE(d->arch.pv.root_pt_l1tab);
 }
 
 void noreturn cf_check continue_pv_domain(void);
@@ -371,6 +396,12 @@ int pv_domain_initialise(struct domain *d)
         goto fail;
     clear_page(d->arch.pv.gdt_ldt_l1tab);
 
+    d->arch.pv.root_pt_l1tab =
+        alloc_xenheap_pages(0, MEMF_node(domain_to_node(d)));
+    if ( !d->arch.pv.root_pt_l1tab )
+        goto fail;
+    clear_page(d->arch.pv.root_pt_l1tab);
+
     if ( levelling_caps & ~LCAP_faulting &&
          (d->arch.pv.cpuidmasks = xmemdup(&cpuidmask_defaults)) == NULL )
         goto fail;
@@ -381,6 +412,11 @@ int pv_domain_initialise(struct domain *d)
     if ( rc )
         goto fail;
 
+    rc = create_perdomain_mapping(d, PV_ROOT_PT_MAPPING_VIRT_START,
+                                  PV_ROOT_PT_MAPPING_ENTRIES, NULL, NULL);
+    if ( rc )
+        goto fail;
+
     d->arch.ctxt_switch = &pv_csw;
 
     d->arch.pv.xpti = is_hardware_domain(d) ? opt_xpti_hwdom : opt_xpti_domu;
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 630bdc3945..c1ae5013af 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -80,6 +80,7 @@ void __dummy__(void)
 
 #undef OFFSET_EF
 
+    OFFSET(VCPU_id, struct vcpu, vcpu_id);
     OFFSET(VCPU_processor, struct vcpu, processor);
     OFFSET(VCPU_domain, struct vcpu, domain);
     OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info_area.map);
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index df015589ce..c1377da7a5 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -162,7 +162,15 @@ FUNC_LOCAL(restore_all_guest)
         and   %rsi, %rdi
         and   %r9, %rsi
         add   %rcx, %rdi
+
+        /*
+         * The address in the vCPU cr3 is always mapped in the per-domain
+         * pv_root_pt virt area.
+         */
+        imul  $PAGE_SIZE, VCPU_id(%rbx), %esi
+        movabs $PV_ROOT_PT_MAPPING_VIRT_START, %rcx
         add   %rcx, %rsi
+
         mov   $ROOT_PAGETABLE_FIRST_XEN_SLOT, %ecx
         mov   root_table_offset(SH_LINEAR_PT_VIRT_START)*8(%rsi), %r8
         mov   %r8, root_table_offset(SH_LINEAR_PT_VIRT_START)*8(%rdi)
-- 
2.40.1
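
As a side note, the address arithmetic used by pv_root_pt_idx()/pv_root_pt_pte()
and PV_ROOT_PT_MAPPING_VCPU_VIRT_START() can be checked in isolation. Below is
a minimal standalone C sketch, not part of the patch: it assumes a 4K page and
PAGETABLE_ORDER of 9, the base address is only a placeholder for the real
PERDOMAIN_VIRT_SLOT(3) value, and pv_root_pt_vcpu_va() is a hypothetical helper
written just for this illustration.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the Xen constants referenced by the patch (assumed values). */
#define PAGE_SIZE            4096ULL
#define PAGETABLE_ORDER      9
#define L1_PAGETABLE_ENTRIES (1U << PAGETABLE_ORDER)

/* Placeholder base; in Xen this would be PERDOMAIN_VIRT_SLOT(3). */
#define PV_ROOT_PT_MAPPING_VIRT_START 0xffff820000000000ULL

/* Per-vCPU slot address, mirroring PV_ROOT_PT_MAPPING_VCPU_VIRT_START(v). */
static uint64_t pv_root_pt_vcpu_va(unsigned int vcpu_id)
{
    return PV_ROOT_PT_MAPPING_VIRT_START + (uint64_t)vcpu_id * PAGE_SIZE;
}

int main(void)
{
    unsigned int vcpu_id = 513;

    /*
     * pv_root_pt_idx()/pv_root_pt_pte() split the vCPU id into an L1 table
     * selector and an entry index within that table: one L1 page covers
     * L1_PAGETABLE_ENTRIES (512) vCPUs.
     */
    unsigned int l1tab_idx = vcpu_id >> PAGETABLE_ORDER;
    unsigned int l1_entry  = vcpu_id & (L1_PAGETABLE_ENTRIES - 1);

    printf("vCPU %u: va=%#" PRIx64 ", root_pt_l1tab[%u] entry %u\n",
           vcpu_id, pv_root_pt_vcpu_va(vcpu_id), l1tab_idx, l1_entry);
    return 0;
}

Running it for vcpu_id 513 prints l1tab index 1 and entry 1, matching the
shift/mask pair introduced in mm.c and the page-sized stride used both by the
config.h macro and by the imul in restore_all_guest.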