From: "Kirill A. Shutemov"
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org
Cc: "Rafael J. Wysocki", Peter Zijlstra, Adrian Hunter,
	Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima,
	Rick Edgecombe, Tom Lendacky, "Kalra, Ashish",
	Sean Christopherson, "Huang, Kai", Baoquan He,
	kexec@lists.infradead.org, linux-coco@lists.linux.dev,
	linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv5 15/16] x86/mm: Introduce kernel_ident_mapping_free()
Date: Sat, 23 Dec 2023 02:52:07 +0300
Message-ID: <20231222235209.32143-16-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231222235209.32143-1-kirill.shutemov@linux.intel.com>
References: <20231222235209.32143-1-kirill.shutemov@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The helper complements kernel_ident_mapping_init(): it frees the
identity mapping that was previously allocated. It will be used in the
error path to free a partially allocated mapping, or once the mapping
is no longer needed.

The caller provides a struct x86_mapping_info with the free_pgt_page()
callback hooked up and the pgd_t to free.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Kai Huang
---
 arch/x86/include/asm/init.h |  3 ++
 arch/x86/mm/ident_map.c     | 73 +++++++++++++++++++++++++++++++++++++
 2 files changed, 76 insertions(+)

diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index cc9ccf61b6bd..14d72727d7ee 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -6,6 +6,7 @@
 
 struct x86_mapping_info {
 	void *(*alloc_pgt_page)(void *); /* allocate buf for page table */
+	void (*free_pgt_page)(void *, void *); /* free buf for page table */
 	void *context;			 /* context for alloc_pgt_page */
 	unsigned long page_flag;	 /* page flag for PMD or PUD entry */
 	unsigned long offset;		 /* ident mapping offset */
@@ -16,4 +17,6 @@ struct x86_mapping_info {
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 				unsigned long pstart, unsigned long pend);
 
+void kernel_ident_mapping_free(struct x86_mapping_info *info, pgd_t *pgd);
+
 #endif /* _ASM_X86_INIT_H */
diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 968d7005f4a7..3996af7b4abf 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -4,6 +4,79 @@
  * included by both the compressed kernel and the regular kernel.
  */
 
+static void free_pte(struct x86_mapping_info *info, pmd_t *pmd)
+{
+	pte_t *pte = pte_offset_kernel(pmd, 0);
+
+	info->free_pgt_page(pte, info->context);
+}
+
+static void free_pmd(struct x86_mapping_info *info, pud_t *pud)
+{
+	pmd_t *pmd = pmd_offset(pud, 0);
+	int i;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (!pmd_present(pmd[i]))
+			continue;
+
+		if (pmd_leaf(pmd[i]))
+			continue;
+
+		free_pte(info, &pmd[i]);
+	}
+
+	info->free_pgt_page(pmd, info->context);
+}
+
+static void free_pud(struct x86_mapping_info *info, p4d_t *p4d)
+{
+	pud_t *pud = pud_offset(p4d, 0);
+	int i;
+
+	for (i = 0; i < PTRS_PER_PUD; i++) {
+		if (!pud_present(pud[i]))
+			continue;
+
+		if (pud_leaf(pud[i]))
+			continue;
+
+		free_pmd(info, &pud[i]);
+	}
+
+	info->free_pgt_page(pud, info->context);
+}
+
+static void free_p4d(struct x86_mapping_info *info, pgd_t *pgd)
+{
+	p4d_t *p4d = p4d_offset(pgd, 0);
+	int i;
+
+	for (i = 0; i < PTRS_PER_P4D; i++) {
+		if (!p4d_present(p4d[i]))
+			continue;
+
+		free_pud(info, &p4d[i]);
+	}
+
+	if (pgtable_l5_enabled())
+		info->free_pgt_page(pgd, info->context);
+}
+
+void kernel_ident_mapping_free(struct x86_mapping_info *info, pgd_t *pgd)
+{
+	int i;
+
+	for (i = 0; i < PTRS_PER_PGD; i++) {
+		if (!pgd_present(pgd[i]))
+			continue;
+
+		free_p4d(info, &pgd[i]);
+	}
+
+	info->free_pgt_page(pgd, info->context);
+}
+
 static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
 			   unsigned long addr, unsigned long end)
 {
-- 
2.41.0
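
For illustration only, not part of the patch: a minimal caller sketch
showing how both callbacks might be wired up so that a partially built
identity mapping can be torn down on the error path. The names
example_alloc_pgt_page(), example_free_pgt_page() and example_map_range()
are hypothetical, and the page flags are just one plausible choice; a
real user picks the allocator and flags that fit its context (memblock,
GFP_ATOMIC, encrypted mappings, and so on).

#include <linux/gfp.h>
#include <asm/init.h>
#include <asm/pgtable_types.h>

/* Hypothetical: hand out one zeroed page per page-table level. */
static void *example_alloc_pgt_page(void *context)
{
	return (void *)get_zeroed_page(GFP_KERNEL);
}

/* Hypothetical: mirror of the allocator above. */
static void example_free_pgt_page(void *pgt, void *context)
{
	free_page((unsigned long)pgt);
}

static int example_map_range(unsigned long start, unsigned long end)
{
	struct x86_mapping_info info = {
		.alloc_pgt_page	= example_alloc_pgt_page,
		.free_pgt_page	= example_free_pgt_page,
		.page_flag	= __PAGE_KERNEL_LARGE_EXEC,	/* illustrative */
		.kernpg_flag	= _KERNPG_TABLE,		/* default table bits */
	};
	pgd_t *pgd;
	int ret;

	/* The top-level page must come from the same allocator the
	 * free_pgt_page() callback knows how to release. */
	pgd = (pgd_t *)get_zeroed_page(GFP_KERNEL);
	if (!pgd)
		return -ENOMEM;

	ret = kernel_ident_mapping_init(&info, pgd, start, end);
	if (ret) {
		/* Frees whatever part of the mapping was built before the
		 * failure, including the pgd page itself. */
		kernel_ident_mapping_free(&info, pgd);
		return ret;
	}

	/* ... use the mapping, then tear it down the same way ... */
	kernel_ident_mapping_free(&info, pgd);
	return 0;
}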